Dec 02 06:43:57 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Dec 02 06:43:57 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 02 06:43:57 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 02 06:43:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 02 06:43:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 02 06:43:57 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 02 06:43:57 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 02 06:43:57 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 02 06:43:57 localhost kernel: signal: max sigframe size: 1776
Dec 02 06:43:57 localhost kernel: BIOS-provided physical RAM map:
Dec 02 06:43:57 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 02 06:43:57 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 02 06:43:57 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 02 06:43:57 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 02 06:43:57 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 02 06:43:57 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 02 06:43:57 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 02 06:43:57 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Dec 02 06:43:57 localhost kernel: NX (Execute Disable) protection: active
Dec 02 06:43:57 localhost kernel: SMBIOS 2.8 present.
Dec 02 06:43:57 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 02 06:43:57 localhost kernel: Hypervisor detected: KVM
Dec 02 06:43:57 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 02 06:43:57 localhost kernel: kvm-clock: using sched offset of 1909811483 cycles
Dec 02 06:43:57 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 02 06:43:57 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 02 06:43:57 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 02 06:43:57 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 02 06:43:57 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Dec 02 06:43:57 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 02 06:43:57 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 02 06:43:57 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 02 06:43:57 localhost kernel: Using GB pages for direct mapping
Dec 02 06:43:57 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Dec 02 06:43:57 localhost kernel: ACPI: Early table checksum verification disabled
Dec 02 06:43:57 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 02 06:43:57 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 02 06:43:57 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 02 06:43:57 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 02 06:43:57 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 02 06:43:57 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 02 06:43:57 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 02 06:43:57 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 02 06:43:57 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 02 06:43:57 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 02 06:43:57 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 02 06:43:57 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 02 06:43:57 localhost kernel: No NUMA configuration found
Dec 02 06:43:57 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Dec 02 06:43:57 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Dec 02 06:43:57 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Dec 02 06:43:57 localhost kernel: Zone ranges:
Dec 02 06:43:57 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 02 06:43:57 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 02 06:43:57 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000043fffffff]
Dec 02 06:43:57 localhost kernel:   Device   empty
Dec 02 06:43:57 localhost kernel: Movable zone start for each node
Dec 02 06:43:57 localhost kernel: Early memory node ranges
Dec 02 06:43:57 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 02 06:43:57 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 02 06:43:57 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000043fffffff]
Dec 02 06:43:57 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Dec 02 06:43:57 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 02 06:43:57 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 02 06:43:57 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 02 06:43:57 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 02 06:43:57 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 02 06:43:57 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 02 06:43:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 02 06:43:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 02 06:43:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 02 06:43:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 02 06:43:57 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 02 06:43:57 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 02 06:43:57 localhost kernel: TSC deadline timer available
Dec 02 06:43:57 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Dec 02 06:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 02 06:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 02 06:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 02 06:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 02 06:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 02 06:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 02 06:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 02 06:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 02 06:43:57 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 02 06:43:57 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 02 06:43:57 localhost kernel: Booting paravirtualized kernel on KVM
Dec 02 06:43:57 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 02 06:43:57 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 02 06:43:57 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Dec 02 06:43:57 localhost kernel: pcpu-alloc: s188416 r8192 d28672 u262144 alloc=1*2097152
Dec 02 06:43:57 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 02 06:43:57 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 02 06:43:57 localhost kernel: Fallback order for Node 0: 0 
Dec 02 06:43:57 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 4128475
Dec 02 06:43:57 localhost kernel: Policy zone: Normal
Dec 02 06:43:57 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 02 06:43:57 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Dec 02 06:43:57 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 02 06:43:57 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 02 06:43:57 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 02 06:43:57 localhost kernel: software IO TLB: area num 8.
Dec 02 06:43:57 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Dec 02 06:43:57 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Dec 02 06:43:57 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 02 06:43:57 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Dec 02 06:43:57 localhost kernel: ftrace: allocated 176 pages with 3 groups
Dec 02 06:43:57 localhost kernel: Dynamic Preempt: voluntary
Dec 02 06:43:57 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 02 06:43:57 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 02 06:43:57 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 02 06:43:57 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 02 06:43:57 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 02 06:43:57 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 02 06:43:57 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 02 06:43:57 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 02 06:43:57 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 02 06:43:57 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 02 06:43:57 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Dec 02 06:43:57 localhost kernel: Console: colour VGA+ 80x25
Dec 02 06:43:57 localhost kernel: printk: console [tty0] enabled
Dec 02 06:43:57 localhost kernel: printk: console [ttyS0] enabled
Dec 02 06:43:57 localhost kernel: ACPI: Core revision 20211217
Dec 02 06:43:57 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 02 06:43:57 localhost kernel: x2apic enabled
Dec 02 06:43:57 localhost kernel: Switched APIC routing to physical x2apic.
Dec 02 06:43:57 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 02 06:43:57 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 02 06:43:57 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 02 06:43:57 localhost kernel: LSM: Security Framework initializing
Dec 02 06:43:57 localhost kernel: Yama: becoming mindful.
Dec 02 06:43:57 localhost kernel: SELinux:  Initializing.
Dec 02 06:43:57 localhost kernel: LSM support for eBPF active
Dec 02 06:43:57 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 02 06:43:57 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 02 06:43:57 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 02 06:43:57 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 02 06:43:57 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 02 06:43:57 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 02 06:43:57 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 02 06:43:57 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 02 06:43:57 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 02 06:43:57 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 02 06:43:57 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 02 06:43:57 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 02 06:43:57 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 02 06:43:57 localhost kernel: Freeing SMP alternatives memory: 36K
Dec 02 06:43:57 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 02 06:43:57 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Dec 02 06:43:57 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 02 06:43:57 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 02 06:43:57 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Dec 02 06:43:57 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 02 06:43:57 localhost kernel: ... version:                0
Dec 02 06:43:57 localhost kernel: ... bit width:              48
Dec 02 06:43:57 localhost kernel: ... generic registers:      6
Dec 02 06:43:57 localhost kernel: ... value mask:             0000ffffffffffff
Dec 02 06:43:57 localhost kernel: ... max period:             00007fffffffffff
Dec 02 06:43:57 localhost kernel: ... fixed-purpose events:   0
Dec 02 06:43:57 localhost kernel: ... event mask:             000000000000003f
Dec 02 06:43:57 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 02 06:43:57 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 02 06:43:57 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 02 06:43:57 localhost kernel: x86: Booting SMP configuration:
Dec 02 06:43:57 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 02 06:43:57 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 02 06:43:57 localhost kernel: smpboot: Max logical packages: 8
Dec 02 06:43:57 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 02 06:43:57 localhost kernel: node 0 deferred pages initialised in 24ms
Dec 02 06:43:57 localhost kernel: devtmpfs: initialized
Dec 02 06:43:57 localhost kernel: x86/mm: Memory block size: 128MB
Dec 02 06:43:57 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 02 06:43:57 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Dec 02 06:43:57 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 02 06:43:57 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 02 06:43:57 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Dec 02 06:43:57 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 02 06:43:57 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 02 06:43:57 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 02 06:43:57 localhost kernel: audit: type=2000 audit(1764657836.538:1): state=initialized audit_enabled=0 res=1
Dec 02 06:43:57 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 02 06:43:57 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 02 06:43:57 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 02 06:43:57 localhost kernel: cpuidle: using governor menu
Dec 02 06:43:57 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Dec 02 06:43:57 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 02 06:43:57 localhost kernel: PCI: Using configuration type 1 for base access
Dec 02 06:43:57 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 02 06:43:57 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 02 06:43:57 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Dec 02 06:43:57 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 02 06:43:57 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 02 06:43:57 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 02 06:43:57 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 02 06:43:57 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 02 06:43:57 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 02 06:43:57 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 02 06:43:57 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 02 06:43:57 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 02 06:43:57 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 02 06:43:57 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 02 06:43:57 localhost kernel: ACPI: Interpreter enabled
Dec 02 06:43:57 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 02 06:43:57 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 02 06:43:57 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 02 06:43:57 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 02 06:43:57 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 02 06:43:57 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 02 06:43:57 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [3] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [4] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [5] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [6] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [7] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [8] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [9] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [10] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [11] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [12] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [13] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [14] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [15] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [16] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [17] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [18] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [19] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [20] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [21] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [22] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [23] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [24] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [25] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [26] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [27] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [28] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [29] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [30] registered
Dec 02 06:43:57 localhost kernel: acpiphp: Slot [31] registered
Dec 02 06:43:57 localhost kernel: PCI host bridge to bus 0000:00
Dec 02 06:43:57 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 02 06:43:57 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 02 06:43:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 02 06:43:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 02 06:43:57 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Dec 02 06:43:57 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 02 06:43:57 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.1: reg 0x20: [io  0xc140-0xc14f]
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.2: reg 0x20: [io  0xc100-0xc11f]
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 02 06:43:57 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Dec 02 06:43:57 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Dec 02 06:43:57 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 02 06:43:57 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Dec 02 06:43:57 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Dec 02 06:43:57 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 02 06:43:57 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 02 06:43:57 localhost kernel: pci 0000:00:03.0: reg 0x10: [io  0xc080-0xc0bf]
Dec 02 06:43:57 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Dec 02 06:43:57 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 02 06:43:57 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Dec 02 06:43:57 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 02 06:43:57 localhost kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc07f]
Dec 02 06:43:57 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Dec 02 06:43:57 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 02 06:43:57 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Dec 02 06:43:57 localhost kernel: pci 0000:00:05.0: reg 0x10: [io  0xc0c0-0xc0ff]
Dec 02 06:43:57 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 02 06:43:57 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Dec 02 06:43:57 localhost kernel: pci 0000:00:06.0: reg 0x10: [io  0xc120-0xc13f]
Dec 02 06:43:57 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 02 06:43:57 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 02 06:43:57 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 02 06:43:57 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 02 06:43:57 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 02 06:43:57 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 02 06:43:57 localhost kernel: iommu: Default domain type: Translated 
Dec 02 06:43:57 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Dec 02 06:43:57 localhost kernel: SCSI subsystem initialized
Dec 02 06:43:57 localhost kernel: ACPI: bus type USB registered
Dec 02 06:43:57 localhost kernel: usbcore: registered new interface driver usbfs
Dec 02 06:43:57 localhost kernel: usbcore: registered new interface driver hub
Dec 02 06:43:57 localhost kernel: usbcore: registered new device driver usb
Dec 02 06:43:57 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 02 06:43:57 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 02 06:43:57 localhost kernel: PTP clock support registered
Dec 02 06:43:57 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 02 06:43:57 localhost kernel: NetLabel: Initializing
Dec 02 06:43:57 localhost kernel: NetLabel:  domain hash size = 128
Dec 02 06:43:57 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 02 06:43:57 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 02 06:43:57 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 02 06:43:57 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 02 06:43:57 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 02 06:43:57 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 02 06:43:57 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 02 06:43:57 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 02 06:43:57 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 02 06:43:57 localhost kernel: vgaarb: loaded
Dec 02 06:43:57 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 02 06:43:57 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 02 06:43:57 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 02 06:43:57 localhost kernel: pnp: PnP ACPI init
Dec 02 06:43:57 localhost kernel: pnp 00:03: [dma 2]
Dec 02 06:43:57 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 02 06:43:57 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 02 06:43:57 localhost kernel: NET: Registered PF_INET protocol family
Dec 02 06:43:57 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 02 06:43:57 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Dec 02 06:43:57 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 02 06:43:57 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 02 06:43:57 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 02 06:43:57 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Dec 02 06:43:57 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Dec 02 06:43:57 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 02 06:43:57 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Dec 02 06:43:57 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 02 06:43:57 localhost kernel: NET: Registered PF_XDP protocol family
Dec 02 06:43:57 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 02 06:43:57 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 02 06:43:57 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 02 06:43:57 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 02 06:43:57 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 02 06:43:57 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 02 06:43:57 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 02 06:43:57 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 29073 usecs
Dec 02 06:43:57 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 02 06:43:57 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 02 06:43:57 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 02 06:43:57 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 02 06:43:57 localhost kernel: ACPI: bus type thunderbolt registered
Dec 02 06:43:57 localhost kernel: Initialise system trusted keyrings
Dec 02 06:43:57 localhost kernel: Key type blacklist registered
Dec 02 06:43:57 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Dec 02 06:43:57 localhost kernel: zbud: loaded
Dec 02 06:43:57 localhost kernel: integrity: Platform Keyring initialized
Dec 02 06:43:57 localhost kernel: NET: Registered PF_ALG protocol family
Dec 02 06:43:57 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 02 06:43:57 localhost kernel: Key type asymmetric registered
Dec 02 06:43:57 localhost kernel: Asymmetric key parser 'x509' registered
Dec 02 06:43:57 localhost kernel: Running certificate verification selftests
Dec 02 06:43:57 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 02 06:43:57 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 02 06:43:57 localhost kernel: io scheduler mq-deadline registered
Dec 02 06:43:57 localhost kernel: io scheduler kyber registered
Dec 02 06:43:57 localhost kernel: io scheduler bfq registered
Dec 02 06:43:57 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 02 06:43:57 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 02 06:43:57 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 02 06:43:57 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 02 06:43:57 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 02 06:43:57 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 02 06:43:57 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 02 06:43:57 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 02 06:43:57 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 02 06:43:57 localhost kernel: Non-volatile memory driver v1.3
Dec 02 06:43:57 localhost kernel: rdac: device handler registered
Dec 02 06:43:57 localhost kernel: hp_sw: device handler registered
Dec 02 06:43:57 localhost kernel: emc: device handler registered
Dec 02 06:43:57 localhost kernel: alua: device handler registered
Dec 02 06:43:57 localhost kernel: libphy: Fixed MDIO Bus: probed
Dec 02 06:43:57 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Dec 02 06:43:57 localhost kernel: ehci-pci: EHCI PCI platform driver
Dec 02 06:43:57 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Dec 02 06:43:57 localhost kernel: ohci-pci: OHCI PCI platform driver
Dec 02 06:43:57 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Dec 02 06:43:57 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 02 06:43:57 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 02 06:43:57 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 02 06:43:57 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 02 06:43:57 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 02 06:43:57 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 02 06:43:57 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 02 06:43:57 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Dec 02 06:43:57 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 02 06:43:57 localhost kernel: hub 1-0:1.0: USB hub found
Dec 02 06:43:57 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 02 06:43:57 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 02 06:43:57 localhost kernel: usbserial: USB Serial support registered for generic
Dec 02 06:43:57 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 02 06:43:57 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 02 06:43:57 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 02 06:43:57 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 02 06:43:57 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 02 06:43:57 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 02 06:43:57 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 02 06:43:57 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-02T06:43:56 UTC (1764657836)
Dec 02 06:43:57 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 02 06:43:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 02 06:43:57 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 02 06:43:57 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 02 06:43:57 localhost kernel: usbcore: registered new interface driver usbhid
Dec 02 06:43:57 localhost kernel: usbhid: USB HID core driver
Dec 02 06:43:57 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 02 06:43:57 localhost kernel: Initializing XFRM netlink socket
Dec 02 06:43:57 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 02 06:43:57 localhost kernel: Segment Routing with IPv6
Dec 02 06:43:57 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 02 06:43:57 localhost kernel: mpls_gso: MPLS GSO support
Dec 02 06:43:57 localhost kernel: IPI shorthand broadcast: enabled
Dec 02 06:43:57 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 02 06:43:57 localhost kernel: AES CTR mode by8 optimization enabled
Dec 02 06:43:57 localhost kernel: sched_clock: Marking stable (775007730, 176358553)->(1092778420, -141412137)
Dec 02 06:43:57 localhost kernel: registered taskstats version 1
Dec 02 06:43:57 localhost kernel: Loading compiled-in X.509 certificates
Dec 02 06:43:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 02 06:43:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 02 06:43:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 02 06:43:57 localhost kernel: zswap: loaded using pool lzo/zbud
Dec 02 06:43:57 localhost kernel: page_owner is disabled
Dec 02 06:43:57 localhost kernel: Key type big_key registered
Dec 02 06:43:57 localhost kernel: Freeing initrd memory: 74232K
Dec 02 06:43:57 localhost kernel: Key type encrypted registered
Dec 02 06:43:57 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 02 06:43:57 localhost kernel: Loading compiled-in module X.509 certificates
Dec 02 06:43:57 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Dec 02 06:43:57 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 02 06:43:57 localhost kernel: ima: No architecture policies found
Dec 02 06:43:57 localhost kernel: evm: Initialising EVM extended attributes:
Dec 02 06:43:57 localhost kernel: evm: security.selinux
Dec 02 06:43:57 localhost kernel: evm: security.SMACK64 (disabled)
Dec 02 06:43:57 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 02 06:43:57 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 02 06:43:57 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 02 06:43:57 localhost kernel: evm: security.apparmor (disabled)
Dec 02 06:43:57 localhost kernel: evm: security.ima
Dec 02 06:43:57 localhost kernel: evm: security.capability
Dec 02 06:43:57 localhost kernel: evm: HMAC attrs: 0x1
Dec 02 06:43:57 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 02 06:43:57 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 02 06:43:57 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 02 06:43:57 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 02 06:43:57 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 02 06:43:57 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 02 06:43:57 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 02 06:43:57 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 02 06:43:57 localhost kernel: Freeing unused decrypted memory: 2036K
Dec 02 06:43:57 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Dec 02 06:43:57 localhost kernel: Write protecting the kernel read-only data: 26624k
Dec 02 06:43:57 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 02 06:43:57 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Dec 02 06:43:57 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 02 06:43:57 localhost kernel: Run /init as init process
Dec 02 06:43:57 localhost kernel:   with arguments:
Dec 02 06:43:57 localhost kernel:     /init
Dec 02 06:43:57 localhost kernel:   with environment:
Dec 02 06:43:57 localhost kernel:     HOME=/
Dec 02 06:43:57 localhost kernel:     TERM=linux
Dec 02 06:43:57 localhost kernel:     BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64
Dec 02 06:43:57 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 02 06:43:57 localhost systemd[1]: Detected virtualization kvm.
Dec 02 06:43:57 localhost systemd[1]: Detected architecture x86-64.
Dec 02 06:43:57 localhost systemd[1]: Running in initrd.
Dec 02 06:43:57 localhost systemd[1]: No hostname configured, using default hostname.
Dec 02 06:43:57 localhost systemd[1]: Hostname set to <localhost>.
Dec 02 06:43:57 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 02 06:43:57 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 02 06:43:57 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 02 06:43:57 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 02 06:43:57 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 02 06:43:57 localhost systemd[1]: Reached target Local File Systems.
Dec 02 06:43:57 localhost systemd[1]: Reached target Path Units.
Dec 02 06:43:57 localhost systemd[1]: Reached target Slice Units.
Dec 02 06:43:57 localhost systemd[1]: Reached target Swaps.
Dec 02 06:43:57 localhost systemd[1]: Reached target Timer Units.
Dec 02 06:43:57 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 02 06:43:57 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 02 06:43:57 localhost systemd[1]: Listening on Journal Socket.
Dec 02 06:43:57 localhost systemd[1]: Listening on udev Control Socket.
Dec 02 06:43:57 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 02 06:43:57 localhost systemd[1]: Reached target Socket Units.
Dec 02 06:43:57 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 02 06:43:57 localhost systemd[1]: Starting Journal Service...
Dec 02 06:43:57 localhost systemd[1]: Starting Load Kernel Modules...
Dec 02 06:43:57 localhost systemd[1]: Starting Create System Users...
Dec 02 06:43:57 localhost systemd[1]: Starting Setup Virtual Console...
Dec 02 06:43:57 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 02 06:43:57 localhost systemd[1]: Finished Load Kernel Modules.
Dec 02 06:43:57 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 02 06:43:57 localhost systemd-journald[282]: Journal started
Dec 02 06:43:57 localhost systemd-journald[282]: Runtime Journal (/run/log/journal/64aa52087bf7490c857b3c1a3cae8bb3) is 8.0M, max 314.7M, 306.7M free.
Dec 02 06:43:57 localhost systemd-modules-load[283]: Module 'msr' is built in
Dec 02 06:43:57 localhost systemd[1]: Started Journal Service.
Dec 02 06:43:57 localhost systemd[1]: Finished Setup Virtual Console.
Dec 02 06:43:57 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 02 06:43:57 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 02 06:43:57 localhost systemd[1]: Starting dracut cmdline hook...
Dec 02 06:43:57 localhost systemd-sysusers[284]: Creating group 'sgx' with GID 997.
Dec 02 06:43:57 localhost systemd-sysusers[284]: Creating group 'users' with GID 100.
Dec 02 06:43:57 localhost systemd-sysusers[284]: Creating group 'dbus' with GID 81.
Dec 02 06:43:57 localhost systemd-sysusers[284]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 02 06:43:57 localhost systemd[1]: Finished Create System Users.
Dec 02 06:43:57 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 02 06:43:57 localhost dracut-cmdline[289]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Dec 02 06:43:57 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 02 06:43:57 localhost dracut-cmdline[289]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Dec 02 06:43:57 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 02 06:43:57 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 02 06:43:57 localhost systemd[1]: Finished dracut cmdline hook.
Dec 02 06:43:57 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 02 06:43:57 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 02 06:43:57 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 02 06:43:57 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Dec 02 06:43:57 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 02 06:43:57 localhost kernel: RPC: Registered udp transport module.
Dec 02 06:43:57 localhost kernel: RPC: Registered tcp transport module.
Dec 02 06:43:57 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 02 06:43:57 localhost rpc.statd[404]: Version 2.5.4 starting
Dec 02 06:43:57 localhost rpc.statd[404]: Initializing NSM state
Dec 02 06:43:57 localhost rpc.idmapd[409]: Setting log level to 0
Dec 02 06:43:57 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 02 06:43:57 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 02 06:43:57 localhost systemd-udevd[422]: Using default interface naming scheme 'rhel-9.0'.
Dec 02 06:43:57 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 02 06:43:57 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 02 06:43:57 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 02 06:43:57 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 02 06:43:57 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 02 06:43:57 localhost systemd[1]: Reached target System Initialization.
Dec 02 06:43:57 localhost systemd[1]: Reached target Basic System.
Dec 02 06:43:57 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 02 06:43:57 localhost systemd[1]: Reached target Network.
Dec 02 06:43:57 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 02 06:43:57 localhost systemd[1]: Starting dracut initqueue hook...
Dec 02 06:43:57 localhost kernel: libata version 3.00 loaded.
Dec 02 06:43:57 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 02 06:43:57 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Dec 02 06:43:57 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 02 06:43:57 localhost kernel: GPT:20971519 != 838860799
Dec 02 06:43:57 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 02 06:43:57 localhost kernel: scsi host0: ata_piix
Dec 02 06:43:57 localhost kernel: GPT:20971519 != 838860799
Dec 02 06:43:57 localhost systemd-udevd[440]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 06:43:57 localhost kernel: scsi host1: ata_piix
Dec 02 06:43:57 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 02 06:43:57 localhost kernel:  vda: vda1 vda2 vda3 vda4
Dec 02 06:43:57 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Dec 02 06:43:57 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Dec 02 06:43:57 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 02 06:43:57 localhost systemd[1]: Reached target Initrd Root Device.
Dec 02 06:43:58 localhost kernel: ata1: found unknown device (class 0)
Dec 02 06:43:58 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 02 06:43:58 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 02 06:43:58 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 02 06:43:58 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 02 06:43:58 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 02 06:43:58 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 02 06:43:58 localhost systemd[1]: Finished dracut initqueue hook.
Dec 02 06:43:58 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 02 06:43:58 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 02 06:43:58 localhost systemd[1]: Reached target Remote File Systems.
Dec 02 06:43:58 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 02 06:43:58 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 02 06:43:58 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Dec 02 06:43:58 localhost systemd-fsck[513]: /usr/sbin/fsck.xfs: XFS file system.
Dec 02 06:43:58 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Dec 02 06:43:58 localhost systemd[1]: Mounting /sysroot...
Dec 02 06:43:58 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 02 06:43:58 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Dec 02 06:43:58 localhost kernel: XFS (vda4): Ending clean mount
Dec 02 06:43:58 localhost systemd[1]: Mounted /sysroot.
Dec 02 06:43:58 localhost systemd[1]: Reached target Initrd Root File System.
Dec 02 06:43:58 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 02 06:43:58 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 02 06:43:58 localhost systemd[1]: Reached target Initrd File Systems.
Dec 02 06:43:58 localhost systemd[1]: Reached target Initrd Default Target.
Dec 02 06:43:58 localhost systemd[1]: Starting dracut mount hook...
Dec 02 06:43:58 localhost systemd[1]: Finished dracut mount hook.
Dec 02 06:43:58 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 02 06:43:58 localhost rpc.idmapd[409]: exiting on signal 15
Dec 02 06:43:58 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 02 06:43:58 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 02 06:43:58 localhost systemd[1]: Stopped target Network.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Timer Units.
Dec 02 06:43:58 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 02 06:43:58 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Basic System.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Path Units.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Remote File Systems.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Slice Units.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Socket Units.
Dec 02 06:43:58 localhost systemd[1]: Stopped target System Initialization.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Local File Systems.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Swaps.
Dec 02 06:43:58 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped dracut mount hook.
Dec 02 06:43:58 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 02 06:43:58 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 02 06:43:58 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 02 06:43:58 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 02 06:43:58 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 02 06:43:58 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped Load Kernel Modules.
Dec 02 06:43:58 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 02 06:43:58 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 02 06:43:58 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 02 06:43:58 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 02 06:43:58 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 02 06:43:58 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 02 06:43:58 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 02 06:43:58 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Closed udev Control Socket.
Dec 02 06:43:58 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Closed udev Kernel Socket.
Dec 02 06:43:58 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 02 06:43:58 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 02 06:43:58 localhost systemd[1]: Starting Cleanup udev Database...
Dec 02 06:43:58 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 02 06:43:58 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 02 06:43:58 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Stopped Create System Users.
Dec 02 06:43:58 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 02 06:43:58 localhost systemd[1]: Finished Cleanup udev Database.
Dec 02 06:43:58 localhost systemd[1]: Reached target Switch Root.
Dec 02 06:43:58 localhost systemd[1]: Starting Switch Root...
Dec 02 06:43:58 localhost systemd[1]: Switching root.
Dec 02 06:43:58 localhost systemd-journald[282]: Journal stopped
Dec 02 06:43:59 localhost systemd-journald[282]: Received SIGTERM from PID 1 (systemd).
Dec 02 06:43:59 localhost kernel: audit: type=1404 audit(1764657838.973:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 02 06:43:59 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 06:43:59 localhost kernel: SELinux:  policy capability open_perms=1
Dec 02 06:43:59 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 06:43:59 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 02 06:43:59 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 06:43:59 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 06:43:59 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 06:43:59 localhost kernel: audit: type=1403 audit(1764657839.066:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 02 06:43:59 localhost systemd[1]: Successfully loaded SELinux policy in 95.936ms.
Dec 02 06:43:59 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.990ms.
Dec 02 06:43:59 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 02 06:43:59 localhost systemd[1]: Detected virtualization kvm.
Dec 02 06:43:59 localhost systemd[1]: Detected architecture x86-64.
Dec 02 06:43:59 localhost systemd-rc-local-generator[583]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 06:43:59 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 06:43:59 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 02 06:43:59 localhost systemd[1]: Stopped Switch Root.
Dec 02 06:43:59 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 02 06:43:59 localhost systemd[1]: Created slice Slice /system/getty.
Dec 02 06:43:59 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 02 06:43:59 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 02 06:43:59 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 02 06:43:59 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Dec 02 06:43:59 localhost systemd[1]: Created slice User and Session Slice.
Dec 02 06:43:59 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 02 06:43:59 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 02 06:43:59 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 02 06:43:59 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 02 06:43:59 localhost systemd[1]: Stopped target Switch Root.
Dec 02 06:43:59 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 02 06:43:59 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 02 06:43:59 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 02 06:43:59 localhost systemd[1]: Reached target Path Units.
Dec 02 06:43:59 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 02 06:43:59 localhost systemd[1]: Reached target Slice Units.
Dec 02 06:43:59 localhost systemd[1]: Reached target Swaps.
Dec 02 06:43:59 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 02 06:43:59 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 02 06:43:59 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 02 06:43:59 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 02 06:43:59 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 02 06:43:59 localhost systemd[1]: Listening on udev Control Socket.
Dec 02 06:43:59 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 02 06:43:59 localhost systemd[1]: Mounting Huge Pages File System...
Dec 02 06:43:59 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 02 06:43:59 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 02 06:43:59 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 02 06:43:59 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 02 06:43:59 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 02 06:43:59 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 02 06:43:59 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 02 06:43:59 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 02 06:43:59 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 02 06:43:59 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 02 06:43:59 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 02 06:43:59 localhost systemd[1]: Stopped Journal Service.
Dec 02 06:43:59 localhost systemd[1]: Starting Journal Service...
Dec 02 06:43:59 localhost systemd[1]: Starting Load Kernel Modules...
Dec 02 06:43:59 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 02 06:43:59 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 02 06:43:59 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 02 06:43:59 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 02 06:43:59 localhost kernel: fuse: init (API version 7.36)
Dec 02 06:43:59 localhost systemd-journald[619]: Journal started
Dec 02 06:43:59 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/510530184876bdc0ebb29e7199f63471) is 8.0M, max 314.7M, 306.7M free.
Dec 02 06:43:59 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 02 06:43:59 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 02 06:43:59 localhost systemd-modules-load[620]: Module 'msr' is built in
Dec 02 06:43:59 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 02 06:43:59 localhost systemd[1]: Started Journal Service.
Dec 02 06:43:59 localhost systemd[1]: Mounted Huge Pages File System.
Dec 02 06:43:59 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 02 06:43:59 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 02 06:43:59 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 02 06:43:59 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 02 06:43:59 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 02 06:43:59 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 02 06:43:59 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 02 06:43:59 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 02 06:43:59 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 02 06:43:59 localhost systemd[1]: Finished Load Kernel Modules.
Dec 02 06:43:59 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 02 06:43:59 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 02 06:43:59 localhost systemd[1]: Mounting FUSE Control File System...
Dec 02 06:43:59 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 02 06:43:59 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 02 06:43:59 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 02 06:43:59 localhost kernel: ACPI: bus type drm_connector registered
Dec 02 06:43:59 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 02 06:43:59 localhost systemd[1]: Starting Load/Save Random Seed...
Dec 02 06:43:59 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 02 06:43:59 localhost systemd[1]: Starting Create System Users...
Dec 02 06:43:59 localhost systemd-journald[619]: Runtime Journal (/run/log/journal/510530184876bdc0ebb29e7199f63471) is 8.0M, max 314.7M, 306.7M free.
Dec 02 06:43:59 localhost systemd-journald[619]: Received client request to flush runtime journal.
Dec 02 06:43:59 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 02 06:43:59 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 02 06:43:59 localhost systemd[1]: Mounted FUSE Control File System.
Dec 02 06:43:59 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 02 06:43:59 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 02 06:43:59 localhost systemd[1]: Finished Load/Save Random Seed.
Dec 02 06:43:59 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 02 06:43:59 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 02 06:43:59 localhost systemd-sysusers[631]: Creating group 'sgx' with GID 989.
Dec 02 06:43:59 localhost systemd-sysusers[631]: Creating group 'systemd-oom' with GID 988.
Dec 02 06:43:59 localhost systemd-sysusers[631]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Dec 02 06:43:59 localhost systemd[1]: Finished Create System Users.
Dec 02 06:43:59 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 02 06:43:59 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 02 06:43:59 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 02 06:43:59 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 02 06:43:59 localhost systemd[1]: Set up automount EFI System Partition Automount.
Dec 02 06:44:00 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 02 06:44:00 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 02 06:44:00 localhost systemd-udevd[636]: Using default interface naming scheme 'rhel-9.0'.
Dec 02 06:44:00 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 02 06:44:00 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 02 06:44:00 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 02 06:44:00 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 02 06:44:00 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 02 06:44:00 localhost systemd-udevd[639]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 06:44:00 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Dec 02 06:44:00 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Dec 02 06:44:00 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Dec 02 06:44:00 localhost systemd-fsck[678]: fsck.fat 4.2 (2021-01-31)
Dec 02 06:44:00 localhost systemd-fsck[678]: /dev/vda2: 12 files, 1782/51145 clusters
Dec 02 06:44:00 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Dec 02 06:44:00 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 02 06:44:00 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 02 06:44:00 localhost kernel: SVM: TSC scaling supported
Dec 02 06:44:00 localhost kernel: kvm: Nested Virtualization enabled
Dec 02 06:44:00 localhost kernel: SVM: kvm: Nested Paging enabled
Dec 02 06:44:00 localhost kernel: SVM: LBR virtualization supported
Dec 02 06:44:00 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 02 06:44:00 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 02 06:44:00 localhost kernel: Console: switching to colour dummy device 80x25
Dec 02 06:44:00 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 02 06:44:00 localhost kernel: [drm] features: -context_init
Dec 02 06:44:00 localhost kernel: [drm] number of scanouts: 1
Dec 02 06:44:00 localhost kernel: [drm] number of cap sets: 0
Dec 02 06:44:00 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Dec 02 06:44:00 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Dec 02 06:44:00 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 02 06:44:00 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 02 06:44:00 localhost systemd[1]: Mounting /boot...
Dec 02 06:44:00 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Dec 02 06:44:00 localhost kernel: XFS (vda3): Ending clean mount
Dec 02 06:44:00 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Dec 02 06:44:00 localhost systemd[1]: Mounted /boot.
Dec 02 06:44:00 localhost systemd[1]: Mounting /boot/efi...
Dec 02 06:44:00 localhost systemd[1]: Mounted /boot/efi.
Dec 02 06:44:00 localhost systemd[1]: Reached target Local File Systems.
Dec 02 06:44:00 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 02 06:44:00 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 02 06:44:00 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 02 06:44:00 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 02 06:44:00 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 02 06:44:00 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 02 06:44:00 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 02 06:44:00 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 715 (bootctl)
Dec 02 06:44:00 localhost systemd[1]: Starting File System Check on /dev/vda2...
Dec 02 06:44:00 localhost systemd[1]: Finished File System Check on /dev/vda2.
Dec 02 06:44:00 localhost systemd[1]: Mounting EFI System Partition Automount...
Dec 02 06:44:00 localhost systemd[1]: Mounted EFI System Partition Automount.
Dec 02 06:44:00 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 02 06:44:00 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 02 06:44:00 localhost systemd[1]: Starting Security Auditing Service...
Dec 02 06:44:00 localhost systemd[1]: Starting RPC Bind...
Dec 02 06:44:00 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 02 06:44:00 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 02 06:44:00 localhost auditd[726]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Dec 02 06:44:00 localhost auditd[726]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Dec 02 06:44:00 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 02 06:44:00 localhost systemd[1]: Started RPC Bind.
Dec 02 06:44:00 localhost systemd[1]: Starting Update is Completed...
Dec 02 06:44:00 localhost systemd[1]: Finished Update is Completed.
Dec 02 06:44:00 localhost augenrules[731]: /sbin/augenrules: No change
Dec 02 06:44:00 localhost augenrules[742]: No rules
Dec 02 06:44:00 localhost augenrules[742]: enabled 1
Dec 02 06:44:00 localhost augenrules[742]: failure 1
Dec 02 06:44:00 localhost augenrules[742]: pid 726
Dec 02 06:44:00 localhost augenrules[742]: rate_limit 0
Dec 02 06:44:00 localhost augenrules[742]: backlog_limit 8192
Dec 02 06:44:00 localhost augenrules[742]: lost 0
Dec 02 06:44:00 localhost augenrules[742]: backlog 3
Dec 02 06:44:00 localhost augenrules[742]: backlog_wait_time 60000
Dec 02 06:44:00 localhost augenrules[742]: backlog_wait_time_actual 0
Dec 02 06:44:00 localhost augenrules[742]: enabled 1
Dec 02 06:44:00 localhost augenrules[742]: failure 1
Dec 02 06:44:00 localhost augenrules[742]: pid 726
Dec 02 06:44:00 localhost augenrules[742]: rate_limit 0
Dec 02 06:44:00 localhost augenrules[742]: backlog_limit 8192
Dec 02 06:44:00 localhost augenrules[742]: lost 0
Dec 02 06:44:00 localhost augenrules[742]: backlog 2
Dec 02 06:44:00 localhost augenrules[742]: backlog_wait_time 60000
Dec 02 06:44:00 localhost augenrules[742]: backlog_wait_time_actual 0
Dec 02 06:44:00 localhost augenrules[742]: enabled 1
Dec 02 06:44:00 localhost augenrules[742]: failure 1
Dec 02 06:44:00 localhost augenrules[742]: pid 726
Dec 02 06:44:00 localhost augenrules[742]: rate_limit 0
Dec 02 06:44:00 localhost augenrules[742]: backlog_limit 8192
Dec 02 06:44:00 localhost augenrules[742]: lost 0
Dec 02 06:44:00 localhost augenrules[742]: backlog 0
Dec 02 06:44:00 localhost augenrules[742]: backlog_wait_time 60000
Dec 02 06:44:00 localhost augenrules[742]: backlog_wait_time_actual 0
Dec 02 06:44:00 localhost systemd[1]: Started Security Auditing Service.
Dec 02 06:44:00 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 02 06:44:00 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 02 06:44:00 localhost systemd[1]: Reached target System Initialization.
Dec 02 06:44:00 localhost systemd[1]: Started dnf makecache --timer.
Dec 02 06:44:00 localhost systemd[1]: Started Daily rotation of log files.
Dec 02 06:44:00 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 02 06:44:00 localhost systemd[1]: Reached target Timer Units.
Dec 02 06:44:00 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 02 06:44:00 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 02 06:44:00 localhost systemd[1]: Reached target Socket Units.
Dec 02 06:44:00 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Dec 02 06:44:00 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 02 06:44:00 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 02 06:44:00 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 02 06:44:00 localhost systemd[1]: Reached target Basic System.
Dec 02 06:44:00 localhost systemd[1]: Starting NTP client/server...
Dec 02 06:44:00 localhost dbus-broker-lau[751]: Ready
Dec 02 06:44:00 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 02 06:44:00 localhost systemd[1]: Started irqbalance daemon.
Dec 02 06:44:00 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 02 06:44:00 localhost systemd[1]: Starting System Logging Service...
Dec 02 06:44:00 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 06:44:00 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 06:44:00 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 06:44:00 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 02 06:44:00 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 02 06:44:00 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 02 06:44:00 localhost systemd[1]: Starting User Login Management...
Dec 02 06:44:00 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 02 06:44:00 localhost systemd[1]: Started System Logging Service.
Dec 02 06:44:00 localhost rsyslogd[759]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="759" x-info="https://www.rsyslog.com"] start
Dec 02 06:44:00 localhost rsyslogd[759]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Dec 02 06:44:00 localhost chronyd[766]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 02 06:44:00 localhost chronyd[766]: Using right/UTC timezone to obtain leap second data
Dec 02 06:44:00 localhost chronyd[766]: Loaded seccomp filter (level 2)
Dec 02 06:44:00 localhost systemd-logind[760]: New seat seat0.
Dec 02 06:44:00 localhost systemd[1]: Started NTP client/server.
Dec 02 06:44:00 localhost systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 02 06:44:00 localhost systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 02 06:44:00 localhost systemd[1]: Started User Login Management.
Dec 02 06:44:00 localhost rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 06:44:01 localhost cloud-init[770]: Cloud-init v. 22.1-9.el9 running 'init-local' at Tue, 02 Dec 2025 06:44:01 +0000. Up 5.44 seconds.
Dec 02 06:44:01 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 02 06:44:01 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 02 06:44:01 localhost systemd[1]: Starting Hostname Service...
Dec 02 06:44:01 localhost systemd[1]: Started Hostname Service.
Dec 02 06:44:01 np0005541914.novalocal systemd-hostnamed[785]: Hostname set to <np0005541914.novalocal> (static)
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Finished Initial cloud-init job (pre-networking).
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Reached target Preparation for Network.
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Starting Network Manager...
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.6800] NetworkManager (version 1.42.2-1.el9) is starting... (boot:9c01ca59-fcb0-40d3-99c4-2690b78cc18a)
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.6805] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.6825] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Started Network Manager.
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Reached target Network.
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7085] manager[0x5625da2b3020]: monitoring kernel firmware directory '/lib/firmware'.
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7111] hostname: hostname: using hostnamed
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7111] hostname: static hostname changed from (none) to "np0005541914.novalocal"
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7115] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Starting Enable periodic update of entitlement certificates....
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7301] manager[0x5625da2b3020]: rfkill: Wi-Fi hardware radio set enabled
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7303] manager[0x5625da2b3020]: rfkill: WWAN hardware radio set enabled
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7356] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7357] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7360] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7362] manager: Networking is enabled by state file
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7378] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7379] settings: Loaded settings plugin: keyfile (internal)
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Started Enable periodic update of entitlement certificates..
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7408] dhcp: init: Using DHCP client 'internal'
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7413] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7428] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7434] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7444] device (lo): Activation: starting connection 'lo' (7d726e81-3c90-4757-9293-46e316e2c15c)
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7454] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7458] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7494] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7499] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7501] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7503] device (eth0): carrier: link connected
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7506] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7511] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Reached target NFS client services.
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7551] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7556] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7557] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7560] manager: NetworkManager state is now CONNECTING
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7561] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7571] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7576] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Reached target Remote File Systems.
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7648] dhcp4 (eth0): state changed new lease, address=38.102.83.204
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7654] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7681] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7819] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7822] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7830] device (lo): Activation: successful, device activated.
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7841] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7844] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7850] manager: NetworkManager state is now CONNECTED_SITE
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7856] device (eth0): Activation: successful, device activated.
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7865] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 02 06:44:01 np0005541914.novalocal NetworkManager[790]: <info>  [1764657841.7872] manager: startup complete
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 02 06:44:01 np0005541914.novalocal systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: Cloud-init v. 22.1-9.el9 running 'init' at Tue, 02 Dec 2025 06:44:02 +0000. Up 6.26 seconds.
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: |  eth0  | True |        38.102.83.204         | 255.255.255.0 | global | fa:16:3e:75:1e:f2 |
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: |  eth0  | True | fe80::f816:3eff:fe75:1ef2/64 |       .       |  link  | fa:16:3e:75:1e:f2 |
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 02 06:44:02 np0005541914.novalocal cloud-init[985]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 02 06:44:02 np0005541914.novalocal systemd[1]: Starting Authorization Manager...
Dec 02 06:44:02 np0005541914.novalocal polkitd[1037]: Started polkitd version 0.117
Dec 02 06:44:02 np0005541914.novalocal systemd[1]: Started Dynamic System Tuning Daemon.
Dec 02 06:44:02 np0005541914.novalocal polkitd[1037]: Loading rules from directory /etc/polkit-1/rules.d
Dec 02 06:44:02 np0005541914.novalocal polkitd[1037]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 02 06:44:02 np0005541914.novalocal polkitd[1037]: Finished loading, compiling and executing 4 rules
Dec 02 06:44:02 np0005541914.novalocal systemd[1]: Started Authorization Manager.
Dec 02 06:44:02 np0005541914.novalocal polkitd[1037]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 02 06:44:03 np0005541914.novalocal useradd[1116]: new group: name=cloud-user, GID=1001
Dec 02 06:44:03 np0005541914.novalocal useradd[1116]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 02 06:44:03 np0005541914.novalocal useradd[1116]: add 'cloud-user' to group 'adm'
Dec 02 06:44:03 np0005541914.novalocal useradd[1116]: add 'cloud-user' to group 'systemd-journal'
Dec 02 06:44:03 np0005541914.novalocal useradd[1116]: add 'cloud-user' to shadow group 'adm'
Dec 02 06:44:03 np0005541914.novalocal useradd[1116]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: Generating public/private rsa key pair.
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: The key fingerprint is:
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: SHA256:xdbtppxT4ltQpGJoBJx3yaMDScjC2rd4cY4FYBNAH6M root@np0005541914.novalocal
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: The key's randomart image is:
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: +---[RSA 3072]----+
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |o+=* +o+.. .  .  |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: | .=.* =..o=. +   |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: | E o . ooo*.o o  |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |. . o o.o+ . o   |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |   o B  S.  o +  |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |  . + .    o B   |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |   .        * .  |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |             +   |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |            .    |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: +----[SHA256]-----+
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: Generating public/private ecdsa key pair.
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: The key fingerprint is:
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: SHA256:2VElGY9kbaYJV8HvChLADuIAvZjsxxlz55v2L3TV5i0 root@np0005541914.novalocal
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: The key's randomart image is:
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: +---[ECDSA 256]---+
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |.o     .    *B+. |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |  o . . o .+o+=  |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |.o + . o ..o.=o. |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |o..o.. ..o..o. o.|
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |. . = o S ... o..|
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: | . +   . .... E.o|
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |  .     + .. . o |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |       + .    .  |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |      . ..o.     |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: +----[SHA256]-----+
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: Generating public/private ed25519 key pair.
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: The key fingerprint is:
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: SHA256:D2w0SBSn0CoE5kRQMlrgRRjLG27KaVSXQyxgiAmlXn4 root@np0005541914.novalocal
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: The key's randomart image is:
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: +--[ED25519 256]--+
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |&&Ooo++..        |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |%*+..+o+         |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |oB o.=o o        |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |o B o .o .       |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: | * o E  S        |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |= . .  . o       |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |.+        .      |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |.                |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: |                 |
Dec 02 06:44:04 np0005541914.novalocal cloud-init[985]: +----[SHA256]-----+
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Reached target Network is Online.
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Starting Apply the settings specified in cloud-config...
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 02 06:44:04 np0005541914.novalocal sm-notify[1129]: Version 2.5.4 starting
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Starting Permit User Sessions...
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Finished Permit User Sessions.
Dec 02 06:44:04 np0005541914.novalocal sshd[1130]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Started Command Scheduler.
Dec 02 06:44:04 np0005541914.novalocal sshd[1130]: Server listening on 0.0.0.0 port 22.
Dec 02 06:44:04 np0005541914.novalocal sshd[1130]: Server listening on :: port 22.
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Started Getty on tty1.
Dec 02 06:44:04 np0005541914.novalocal crond[1132]: (CRON) STARTUP (1.5.7)
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 02 06:44:04 np0005541914.novalocal crond[1132]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 02 06:44:04 np0005541914.novalocal crond[1132]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 37% if used.)
Dec 02 06:44:04 np0005541914.novalocal crond[1132]: (CRON) INFO (running with inotify support)
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Reached target Login Prompts.
Dec 02 06:44:04 np0005541914.novalocal sshd[1134]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Reached target Multi-User System.
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 02 06:44:04 np0005541914.novalocal sshd[1134]: Connection reset by 38.102.83.114 port 45694 [preauth]
Dec 02 06:44:04 np0005541914.novalocal sshd[1146]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:44:04 np0005541914.novalocal sshd[1146]: Unable to negotiate with 38.102.83.114 port 45698: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 02 06:44:04 np0005541914.novalocal sshd[1159]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:44:04 np0005541914.novalocal sshd[1159]: Connection reset by 38.102.83.114 port 45712 [preauth]
Dec 02 06:44:04 np0005541914.novalocal sshd[1170]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:44:04 np0005541914.novalocal sshd[1170]: Unable to negotiate with 38.102.83.114 port 45716: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 02 06:44:04 np0005541914.novalocal sshd[1188]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:44:04 np0005541914.novalocal kdumpctl[1135]: kdump: No kdump initial ramdisk found.
Dec 02 06:44:04 np0005541914.novalocal kdumpctl[1135]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Dec 02 06:44:04 np0005541914.novalocal sshd[1188]: Unable to negotiate with 38.102.83.114 port 45720: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 02 06:44:04 np0005541914.novalocal sshd[1198]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:44:04 np0005541914.novalocal sshd[1220]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:44:04 np0005541914.novalocal sshd[1220]: Connection reset by 38.102.83.114 port 45742 [preauth]
Dec 02 06:44:04 np0005541914.novalocal sshd[1253]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:44:04 np0005541914.novalocal cloud-init[1254]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Tue, 02 Dec 2025 06:44:04 +0000. Up 8.89 seconds.
Dec 02 06:44:04 np0005541914.novalocal sshd[1253]: fatal: mm_answer_sign: sign: error in libcrypto
Dec 02 06:44:04 np0005541914.novalocal sshd[1262]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:44:04 np0005541914.novalocal sshd[1262]: Unable to negotiate with 38.102.83.114 port 45760: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 02 06:44:04 np0005541914.novalocal sshd[1198]: Connection closed by 38.102.83.114 port 45734 [preauth]
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Finished Apply the settings specified in cloud-config.
Dec 02 06:44:04 np0005541914.novalocal systemd[1]: Starting Execute cloud user/final scripts...
Dec 02 06:44:05 np0005541914.novalocal dracut[1432]: dracut-057-21.git20230214.el9
Dec 02 06:44:05 np0005541914.novalocal cloud-init[1450]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Tue, 02 Dec 2025 06:44:05 +0000. Up 9.24 seconds.
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Dec 02 06:44:05 np0005541914.novalocal cloud-init[1472]: #############################################################
Dec 02 06:44:05 np0005541914.novalocal cloud-init[1474]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 02 06:44:05 np0005541914.novalocal cloud-init[1482]: 256 SHA256:2VElGY9kbaYJV8HvChLADuIAvZjsxxlz55v2L3TV5i0 root@np0005541914.novalocal (ECDSA)
Dec 02 06:44:05 np0005541914.novalocal cloud-init[1489]: 256 SHA256:D2w0SBSn0CoE5kRQMlrgRRjLG27KaVSXQyxgiAmlXn4 root@np0005541914.novalocal (ED25519)
Dec 02 06:44:05 np0005541914.novalocal cloud-init[1496]: 3072 SHA256:xdbtppxT4ltQpGJoBJx3yaMDScjC2rd4cY4FYBNAH6M root@np0005541914.novalocal (RSA)
Dec 02 06:44:05 np0005541914.novalocal cloud-init[1497]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 02 06:44:05 np0005541914.novalocal cloud-init[1500]: #############################################################
Dec 02 06:44:05 np0005541914.novalocal cloud-init[1450]: Cloud-init v. 22.1-9.el9 finished at Tue, 02 Dec 2025 06:44:05 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.49 seconds
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 02 06:44:05 np0005541914.novalocal systemd[1]: Reloading Network Manager...
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 02 06:44:05 np0005541914.novalocal NetworkManager[790]: <info>  [1764657845.3663] audit: op="reload" arg="0" pid=1581 uid=0 result="success"
Dec 02 06:44:05 np0005541914.novalocal NetworkManager[790]: <info>  [1764657845.3672] config: signal: SIGHUP (no changes from disk)
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 02 06:44:05 np0005541914.novalocal systemd[1]: Reloaded Network Manager.
Dec 02 06:44:05 np0005541914.novalocal systemd[1]: Finished Execute cloud user/final scripts.
Dec 02 06:44:05 np0005541914.novalocal systemd[1]: Reached target Cloud-init target.
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'resume' will not be installed, because it's in the list to be omitted!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: memstrack is not available
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 02 06:44:05 np0005541914.novalocal dracut[1434]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 02 06:44:06 np0005541914.novalocal dracut[1434]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 02 06:44:06 np0005541914.novalocal dracut[1434]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 02 06:44:06 np0005541914.novalocal dracut[1434]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 02 06:44:06 np0005541914.novalocal dracut[1434]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 02 06:44:06 np0005541914.novalocal dracut[1434]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 02 06:44:06 np0005541914.novalocal dracut[1434]: memstrack is not available
Dec 02 06:44:06 np0005541914.novalocal dracut[1434]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
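Note: the "will not be installed" messages above are dracut skipping optional modules whose binaries are absent on this image; only the memstrack hint is actionable, and only if rd.memdebug>=4 is ever used. A minimal sketch of how one could inspect the available modules and force-include one when rebuilding an initramfs (the 'lvm' module name here is purely illustrative):
    # list the dracut modules known on this host
    dracut --list-modules
    # rebuild the initramfs for the running kernel, explicitly adding a module
    dracut --force --add lvm /boot/initramfs-$(uname -r).img $(uname -r)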
Dec 02 06:44:06 np0005541914.novalocal dracut[1434]: *** Including module: systemd ***
Dec 02 06:44:06 np0005541914.novalocal dracut[1434]: *** Including module: systemd-initrd ***
Dec 02 06:44:06 np0005541914.novalocal dracut[1434]: *** Including module: i18n ***
Dec 02 06:44:06 np0005541914.novalocal dracut[1434]: No KEYMAP configured.
Dec 02 06:44:06 np0005541914.novalocal dracut[1434]: *** Including module: drm ***
Dec 02 06:44:06 np0005541914.novalocal chronyd[766]: Selected source 174.138.193.90 (2.rhel.pool.ntp.org)
Dec 02 06:44:06 np0005541914.novalocal chronyd[766]: System clock TAI offset set to 37 seconds
Dec 02 06:44:06 np0005541914.novalocal dracut[1434]: *** Including module: prefixdevname ***
Dec 02 06:44:07 np0005541914.novalocal dracut[1434]: *** Including module: kernel-modules ***
Dec 02 06:44:07 np0005541914.novalocal dracut[1434]: *** Including module: kernel-modules-extra ***
Dec 02 06:44:07 np0005541914.novalocal dracut[1434]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 02 06:44:07 np0005541914.novalocal dracut[1434]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 02 06:44:07 np0005541914.novalocal dracut[1434]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 02 06:44:07 np0005541914.novalocal dracut[1434]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 02 06:44:07 np0005541914.novalocal dracut[1434]: *** Including module: qemu ***
Dec 02 06:44:07 np0005541914.novalocal dracut[1434]: *** Including module: fstab-sys ***
Dec 02 06:44:07 np0005541914.novalocal dracut[1434]: *** Including module: rootfs-block ***
Dec 02 06:44:07 np0005541914.novalocal dracut[1434]: *** Including module: terminfo ***
Dec 02 06:44:07 np0005541914.novalocal dracut[1434]: *** Including module: udev-rules ***
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]: Skipping udev rule: 91-permissions.rules
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]: *** Including module: virtiofs ***
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]: *** Including module: dracut-systemd ***
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]: *** Including module: usrmount ***
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]: *** Including module: base ***
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]: *** Including module: fs-lib ***
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]: *** Including module: kdumpbase ***
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]:   microcode_ctl module: mangling fw_dir
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]:     microcode_ctl: configuration "intel" is ignored
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 02 06:44:08 np0005541914.novalocal dracut[1434]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]:     microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]: *** Including module: shutdown ***
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]: *** Including module: squash ***
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]: *** Including modules done ***
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]: *** Installing kernel module dependencies ***
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]: *** Installing kernel module dependencies done ***
Dec 02 06:44:09 np0005541914.novalocal dracut[1434]: *** Resolving executable dependencies ***
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: *** Resolving executable dependencies done ***
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: *** Hardlinking files ***
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: Mode:           real
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: Files:          1099
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: Linked:         3 files
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: Compared:       0 xattrs
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: Compared:       373 files
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: Saved:          61.04 KiB
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: Duration:       0.017542 seconds
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: *** Hardlinking files done ***
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: Could not find 'strip'. Not stripping the initramfs.
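Note: dracut only strips binaries in the generated image when 'strip' is available, so this message just means the kdump initramfs is somewhat larger than it could be. Assuming a dnf-based host, installing binutils before the next rebuild would restore stripping:
    # 'strip' is shipped in binutils on RHEL-family systems
    dnf install -y binutils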
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: *** Generating early-microcode cpio image ***
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: *** Constructing AuthenticAMD.bin ***
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: *** Store current command line parameters ***
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: Stored kernel commandline:
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: No dracut internal kernel commandline stored in the initramfs
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: *** Install squash loader ***
Dec 02 06:44:11 np0005541914.novalocal dracut[1434]: *** Squashing the files inside the initramfs ***
Dec 02 06:44:11 np0005541914.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 02 06:44:12 np0005541914.novalocal dracut[1434]: *** Squashing the files inside the initramfs done ***
Dec 02 06:44:12 np0005541914.novalocal dracut[1434]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Dec 02 06:44:13 np0005541914.novalocal dracut[1434]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Dec 02 06:44:14 np0005541914.novalocal kdumpctl[1135]: kdump: kexec: loaded kdump kernel
Dec 02 06:44:14 np0005541914.novalocal kdumpctl[1135]: kdump: Starting kdump: [OK]
Dec 02 06:44:14 np0005541914.novalocal systemd[1]: Finished Crash recovery kernel arming.
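Note: the dracut run above was driven by the kdump service ("Crash recovery kernel arming"): it builds a crash-capture initramfs (the ...kdump.img file) and loads it with kexec so a crash kernel is armed. The same arming can be checked or redone by hand with kdumpctl, e.g.:
    # show whether a kdump kernel is currently loaded
    kdumpctl status
    # rebuild the kdump initramfs if needed and reload it
    kdumpctl restart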
Dec 02 06:44:14 np0005541914.novalocal systemd[1]: Startup finished in 1.246s (kernel) + 1.963s (initrd) + 15.176s (userspace) = 18.386s.
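Note: the startup timing summary is systemd's own; the same breakdown, plus per-unit costs, can be pulled after boot:
    systemd-analyze          # kernel/initrd/userspace totals, as logged above
    systemd-analyze blame    # slowest units first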
Dec 02 06:44:31 np0005541914.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 02 06:44:34 np0005541914.novalocal sshd[4173]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:44:34 np0005541914.novalocal sshd[4173]: Accepted publickey for zuul from 38.102.83.114 port 41024 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
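Note: "ssh-rsa algorithm is disabled" refers to the legacy SHA-1 signature scheme, not to RSA keys as such; the RSA key is still accepted because the client signs with rsa-sha2-256/512. On RHEL 9 this behaviour follows the system-wide crypto policy, which can be inspected with (a sketch, assuming the default crypto-policies tooling):
    update-crypto-policies --show    # e.g. DEFAULT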
Dec 02 06:44:34 np0005541914.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 02 06:44:34 np0005541914.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 02 06:44:34 np0005541914.novalocal systemd-logind[760]: New session 1 of user zuul.
Dec 02 06:44:34 np0005541914.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 02 06:44:34 np0005541914.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 02 06:44:34 np0005541914.novalocal systemd[4177]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Queued start job for default target Main User Target.
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Created slice User Application Slice.
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Started Daily Cleanup of User's Temporary Directories.
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Reached target Paths.
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Reached target Timers.
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Starting D-Bus User Message Bus Socket...
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Starting Create User's Volatile Files and Directories...
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Finished Create User's Volatile Files and Directories.
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Listening on D-Bus User Message Bus Socket.
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Reached target Sockets.
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Reached target Basic System.
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Reached target Main User Target.
Dec 02 06:44:35 np0005541914.novalocal systemd[4177]: Startup finished in 149ms.
Dec 02 06:44:35 np0005541914.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 02 06:44:35 np0005541914.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 02 06:44:35 np0005541914.novalocal sshd[4173]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 06:44:35 np0005541914.novalocal python3[4229]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 06:44:45 np0005541914.novalocal python3[4248]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 06:44:51 np0005541914.novalocal python3[4301]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 06:44:52 np0005541914.novalocal python3[4331]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 02 06:44:55 np0005541914.novalocal python3[4347]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:44:55 np0005541914.novalocal python3[4361]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:44:57 np0005541914.novalocal python3[4420]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 06:44:57 np0005541914.novalocal python3[4461]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764657896.8774223-393-148421869851076/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=fa40fdabeeae48b78b01a4cbccbd42f6_id_rsa follow=False checksum=c9b7a1839a060a12dd883255955d0b791bf96d1d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:44:58 np0005541914.novalocal python3[4534]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 06:44:59 np0005541914.novalocal python3[4575]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764657898.673202-497-255877473705326/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=fa40fdabeeae48b78b01a4cbccbd42f6_id_rsa.pub follow=False checksum=076b8979e1bf6ba70130c32daa0e2e874f6f0bae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:45:01 np0005541914.novalocal python3[4603]: ansible-ping Invoked with data=pong
Dec 02 06:45:03 np0005541914.novalocal python3[4617]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 06:45:06 np0005541914.novalocal python3[4670]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 02 06:45:09 np0005541914.novalocal python3[4692]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:45:09 np0005541914.novalocal python3[4706]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:45:09 np0005541914.novalocal python3[4720]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:45:10 np0005541914.novalocal python3[4734]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:45:10 np0005541914.novalocal python3[4748]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:45:11 np0005541914.novalocal python3[4762]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
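Note: Ansible logs file modes as decimal integers, which is why the zuul-output directories above show mode=493 and the earlier .ssh pieces show mode=448/384/420; in octal these are 0755, 0700, 0600 and 0644 respectively. A quick way to convert when reading such entries:
    printf '%o\n' 493 448 384 420    # -> 755 700 600 644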
Dec 02 06:45:12 np0005541914.novalocal chronyd[766]: Selected source 162.159.200.1 (2.rhel.pool.ntp.org)
Dec 02 06:45:13 np0005541914.novalocal sudo[4776]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrrrnapoylmxgadnruphutkwmqpbtdhw ; /usr/bin/python3
Dec 02 06:45:13 np0005541914.novalocal sudo[4776]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:45:13 np0005541914.novalocal python3[4778]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:45:13 np0005541914.novalocal sudo[4776]: pam_unix(sudo:session): session closed for user root
Dec 02 06:45:15 np0005541914.novalocal sudo[4824]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwqmiprykcrzzflzshdzmhhhryzdyghs ; /usr/bin/python3
Dec 02 06:45:15 np0005541914.novalocal sudo[4824]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:45:15 np0005541914.novalocal python3[4826]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 06:45:15 np0005541914.novalocal sudo[4824]: pam_unix(sudo:session): session closed for user root
Dec 02 06:45:15 np0005541914.novalocal sudo[4867]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lffaokxhbnqpmtigktksniofmtdcmlgj ; /usr/bin/python3
Dec 02 06:45:15 np0005541914.novalocal sudo[4867]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:45:15 np0005541914.novalocal python3[4869]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764657914.9519687-103-42547496934856/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:45:15 np0005541914.novalocal sudo[4867]: pam_unix(sudo:session): session closed for user root
Dec 02 06:45:23 np0005541914.novalocal python3[4898]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:23 np0005541914.novalocal python3[4912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:23 np0005541914.novalocal python3[4926]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:23 np0005541914.novalocal python3[4940]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:24 np0005541914.novalocal python3[4954]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:24 np0005541914.novalocal python3[4968]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:24 np0005541914.novalocal python3[4982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:25 np0005541914.novalocal python3[4996]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:25 np0005541914.novalocal python3[5010]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:25 np0005541914.novalocal python3[5024]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:26 np0005541914.novalocal python3[5038]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:27 np0005541914.novalocal python3[5052]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:27 np0005541914.novalocal python3[5066]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:27 np0005541914.novalocal python3[5080]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:27 np0005541914.novalocal python3[5094]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:28 np0005541914.novalocal python3[5108]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:28 np0005541914.novalocal python3[5122]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:28 np0005541914.novalocal python3[5136]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:29 np0005541914.novalocal python3[5150]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:29 np0005541914.novalocal python3[5164]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:29 np0005541914.novalocal python3[5178]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:29 np0005541914.novalocal python3[5192]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:29 np0005541914.novalocal python3[5206]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:30 np0005541914.novalocal python3[5220]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:30 np0005541914.novalocal python3[5234]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 06:45:30 np0005541914.novalocal python3[5248]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
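Note: the long run of ansible-authorized_key invocations above appends each listed public key to /home/zuul/.ssh/authorized_keys so the named developers can log in for debugging. Each entry is equivalent to an ad-hoc call along these lines (host pattern and key material are illustrative only):
    ansible localhost -m authorized_key \
      -a "user=zuul state=present key='ssh-ed25519 AAAA... someone@example'"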
Dec 02 06:45:31 np0005541914.novalocal sudo[5262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-owjoxoouebzshjvqxzzxvvnrnplejtdj ; /usr/bin/python3
Dec 02 06:45:31 np0005541914.novalocal sudo[5262]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:45:31 np0005541914.novalocal python3[5264]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 02 06:45:31 np0005541914.novalocal systemd[1]: Starting Time & Date Service...
Dec 02 06:45:31 np0005541914.novalocal systemd[1]: Started Time & Date Service.
Dec 02 06:45:31 np0005541914.novalocal systemd-timedated[5266]: Changed time zone to 'UTC' (UTC).
Dec 02 06:45:31 np0005541914.novalocal sudo[5262]: pam_unix(sudo:session): session closed for user root
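Note: the community.general.timezone task delegates to systemd-timedated, which is why the Time & Date Service starts and logs the zone change. The same change by hand would be:
    timedatectl set-timezone UTC
    timedatectl status    # verify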
Dec 02 06:45:32 np0005541914.novalocal sudo[5283]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qorhjigmeftwcqfosvzgmeatlvodhqnl ; /usr/bin/python3
Dec 02 06:45:32 np0005541914.novalocal sudo[5283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:45:32 np0005541914.novalocal python3[5285]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:45:32 np0005541914.novalocal sudo[5283]: pam_unix(sudo:session): session closed for user root
Dec 02 06:45:33 np0005541914.novalocal python3[5331]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 06:45:34 np0005541914.novalocal python3[5372]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764657933.5853722-502-139024217540485/source _original_basename=tmpxaia43oi follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:45:35 np0005541914.novalocal python3[5432]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 06:45:35 np0005541914.novalocal python3[5473]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764657935.0998247-588-212760711835856/source _original_basename=tmpl0uvbdp2 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:45:37 np0005541914.novalocal sudo[5533]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-crmqgyokpxhqjzlilvwyrjbevlphwppr ; /usr/bin/python3
Dec 02 06:45:37 np0005541914.novalocal sudo[5533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:45:37 np0005541914.novalocal python3[5535]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 06:45:37 np0005541914.novalocal sudo[5533]: pam_unix(sudo:session): session closed for user root
Dec 02 06:45:37 np0005541914.novalocal sudo[5576]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wpnfuldhwdptzrdxsaefqkkzbazlfcju ; /usr/bin/python3
Dec 02 06:45:37 np0005541914.novalocal sudo[5576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:45:38 np0005541914.novalocal python3[5578]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764657937.365522-732-222610062339955/source _original_basename=tmp8166pp9v follow=False checksum=01954034105cdb65b42722894a5c1036808c70c7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:45:38 np0005541914.novalocal sudo[5576]: pam_unix(sudo:session): session closed for user root
Dec 02 06:45:39 np0005541914.novalocal python3[5606]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 06:45:39 np0005541914.novalocal python3[5622]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
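Note: the /etc/nodepool files are the node metadata this kind of job expects: sub_nodes and sub_nodes_private are written empty here (the recorded checksum is the SHA-1 of zero bytes), and the build SSH keypair is copied in alongside node_private. The empty-file checksum is easy to confirm:
    sha1sum /dev/null    # da39a3ee5e6b4b0d3255bfef95601890afd80709  /dev/null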
Dec 02 06:45:40 np0005541914.novalocal sudo[5670]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ptmexlcqgtkdgvczitadbqkeiurqikev ; /usr/bin/python3
Dec 02 06:45:40 np0005541914.novalocal sudo[5670]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:45:40 np0005541914.novalocal python3[5672]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 06:45:40 np0005541914.novalocal sudo[5670]: pam_unix(sudo:session): session closed for user root
Dec 02 06:45:40 np0005541914.novalocal sudo[5713]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jsyxugzvspcrhphavnkvdyymtvqppvww ; /usr/bin/python3
Dec 02 06:45:40 np0005541914.novalocal sudo[5713]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:45:40 np0005541914.novalocal python3[5715]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764657940.2812839-857-102749255966780/source _original_basename=tmpjz7i88_7 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:45:40 np0005541914.novalocal sudo[5713]: pam_unix(sudo:session): session closed for user root
Dec 02 06:45:42 np0005541914.novalocal sudo[5744]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uduwdenjpgdqwupabxvacxyipmykodfs ; /usr/bin/python3
Dec 02 06:45:42 np0005541914.novalocal sudo[5744]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:45:42 np0005541914.novalocal python3[5746]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-2304-36f4-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 06:45:42 np0005541914.novalocal sudo[5744]: pam_unix(sudo:session): session closed for user root
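Note: after dropping /etc/sudoers.d/zuul-sudo-grep (mode 288 decimal = 0440), the play validates the whole sudoers configuration with visudo -c. The same check can also be narrowed to the single drop-in file:
    visudo -c                                   # parse /etc/sudoers and all includes
    visudo -cf /etc/sudoers.d/zuul-sudo-grep    # check just this drop-in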
Dec 02 06:45:43 np0005541914.novalocal python3[5764]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-2304-36f4-000000000024-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 02 06:45:45 np0005541914.novalocal python3[5782]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:46:01 np0005541914.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 02 06:46:04 np0005541914.novalocal sudo[5799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ftfcmnbzjenoogqyksdnofvebjewxkqh ; /usr/bin/python3
Dec 02 06:46:04 np0005541914.novalocal sudo[5799]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:46:04 np0005541914.novalocal python3[5801]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:46:04 np0005541914.novalocal sudo[5799]: pam_unix(sudo:session): session closed for user root
Dec 02 06:46:54 np0005541914.novalocal systemd[4177]: Starting Mark boot as successful...
Dec 02 06:46:54 np0005541914.novalocal systemd[4177]: Finished Mark boot as successful.
Dec 02 06:47:04 np0005541914.novalocal sshd[4186]: Received disconnect from 38.102.83.114 port 41024:11: disconnected by user
Dec 02 06:47:04 np0005541914.novalocal sshd[4186]: Disconnected from user zuul 38.102.83.114 port 41024
Dec 02 06:47:04 np0005541914.novalocal sshd[4173]: pam_unix(sshd:session): session closed for user zuul
Dec 02 06:47:04 np0005541914.novalocal systemd-logind[760]: Session 1 logged out. Waiting for processes to exit.
Dec 02 06:47:22 np0005541914.novalocal chronyd[766]: Selected source 174.138.193.90 (2.rhel.pool.ntp.org)
Dec 02 06:48:01 np0005541914.novalocal systemd[1]: Unmounting EFI System Partition Automount...
Dec 02 06:48:01 np0005541914.novalocal systemd[1]: efi.mount: Deactivated successfully.
Dec 02 06:48:01 np0005541914.novalocal systemd[1]: Unmounted EFI System Partition Automount.
Dec 02 06:49:54 np0005541914.novalocal systemd[4177]: Created slice User Background Tasks Slice.
Dec 02 06:49:54 np0005541914.novalocal systemd[4177]: Starting Cleanup of User's Temporary Files and Directories...
Dec 02 06:49:54 np0005541914.novalocal systemd[4177]: Finished Cleanup of User's Temporary Files and Directories.
Dec 02 06:50:08 np0005541914.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000
Dec 02 06:50:08 np0005541914.novalocal kernel: pci 0000:00:07.0: reg 0x10: [io  0x0000-0x003f]
Dec 02 06:50:08 np0005541914.novalocal kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff]
Dec 02 06:50:08 np0005541914.novalocal kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
Dec 02 06:50:08 np0005541914.novalocal kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref]
Dec 02 06:50:08 np0005541914.novalocal kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref]
Dec 02 06:50:08 np0005541914.novalocal kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref]
Dec 02 06:50:08 np0005541914.novalocal kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff]
Dec 02 06:50:08 np0005541914.novalocal kernel: pci 0000:00:07.0: BAR 0: assigned [io  0x1000-0x103f]
Dec 02 06:50:08 np0005541914.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 02 06:50:08 np0005541914.novalocal NetworkManager[790]: <info>  [1764658208.2320] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 02 06:50:08 np0005541914.novalocal systemd-udevd[5809]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 06:50:08 np0005541914.novalocal NetworkManager[790]: <info>  [1764658208.2454] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Dec 02 06:50:08 np0005541914.novalocal NetworkManager[790]: <info>  [1764658208.2485] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 02 06:50:08 np0005541914.novalocal NetworkManager[790]: <info>  [1764658208.2490] device (eth1): carrier: link connected
Dec 02 06:50:08 np0005541914.novalocal NetworkManager[790]: <info>  [1764658208.2493] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Dec 02 06:50:08 np0005541914.novalocal NetworkManager[790]: <info>  [1764658208.2499] policy: auto-activating connection 'Wired connection 1' (770c572d-e688-3a45-874b-65e8776845d6)
Dec 02 06:50:08 np0005541914.novalocal NetworkManager[790]: <info>  [1764658208.2505] device (eth1): Activation: starting connection 'Wired connection 1' (770c572d-e688-3a45-874b-65e8776845d6)
Dec 02 06:50:08 np0005541914.novalocal NetworkManager[790]: <info>  [1764658208.2507] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Dec 02 06:50:08 np0005541914.novalocal NetworkManager[790]: <info>  [1764658208.2511] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Dec 02 06:50:08 np0005541914.novalocal NetworkManager[790]: <info>  [1764658208.2518] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Dec 02 06:50:08 np0005541914.novalocal NetworkManager[790]: <info>  [1764658208.2523] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
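Note: the kernel and NetworkManager messages above record a second virtio-net NIC being hot-plugged at runtime: the PCI device is probed, NetworkManager sees eth1, creates the default 'Wired connection 1' profile and starts DHCP on it. The resulting state can be inspected with nmcli, for example:
    nmcli device status                          # eth1 should show as connecting/connected
    nmcli connection show "Wired connection 1"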
Dec 02 06:50:09 np0005541914.novalocal sshd[5813]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:50:09 np0005541914.novalocal sshd[5813]: Accepted publickey for zuul from 38.102.83.114 port 46114 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 06:50:09 np0005541914.novalocal systemd-logind[760]: New session 3 of user zuul.
Dec 02 06:50:09 np0005541914.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 02 06:50:09 np0005541914.novalocal sshd[5813]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 06:50:09 np0005541914.novalocal kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Dec 02 06:50:09 np0005541914.novalocal python3[5830]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-8e68-9bb8-000000000475-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
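Note: 'ip -j' asks iproute2 for JSON output, which the Ansible task parses directly; adding -p makes the same data readable when checking by hand:
    ip -j -p link    # pretty-printed JSON view of the link list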
Dec 02 06:50:22 np0005541914.novalocal sudo[5878]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lljrqddzjtgupxmqvcunbbpzmeprahba ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 02 06:50:22 np0005541914.novalocal sudo[5878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:50:22 np0005541914.novalocal python3[5880]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 06:50:22 np0005541914.novalocal sudo[5878]: pam_unix(sudo:session): session closed for user root
Dec 02 06:50:22 np0005541914.novalocal sudo[5921]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zmmbcloravdydddeyifmnlyxuhliblmk ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 02 06:50:22 np0005541914.novalocal sudo[5921]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:50:22 np0005541914.novalocal python3[5923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764658222.3319945-537-55041512328690/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=e393cafdfd30e64ab7d980887ec656a764e51bf5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:50:22 np0005541914.novalocal sudo[5921]: pam_unix(sudo:session): session closed for user root
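Note: ci-private-network.nmconnection is a NetworkManager keyfile profile dropped into /etc/NetworkManager/system-connections/ with 0600 root:root permissions, as NetworkManager expects for keyfiles. The job then restarts NetworkManager to pick it up; a lighter-weight alternative (a sketch only, not what this job does, and the profile name is assumed) would be to reload profiles in place:
    nmcli connection reload                  # re-read all profiles from disk
    nmcli connection up ci-private-network   # activate the new profile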
Dec 02 06:50:23 np0005541914.novalocal sudo[5951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rsfasrtuykkqqsjyhkrwfbjxidksbevu ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 02 06:50:23 np0005541914.novalocal sudo[5951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:50:23 np0005541914.novalocal python3[5953]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[790]: <info>  [1764658223.6176] caught SIGTERM, shutting down normally.
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: Stopping Network Manager...
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[790]: <info>  [1764658223.6306] dhcp4 (eth0): canceled DHCP transaction
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[790]: <info>  [1764658223.6306] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[790]: <info>  [1764658223.6306] dhcp4 (eth0): state changed no lease
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[790]: <info>  [1764658223.6312] manager: NetworkManager state is now CONNECTING
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[790]: <info>  [1764658223.6425] dhcp4 (eth1): canceled DHCP transaction
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[790]: <info>  [1764658223.6427] dhcp4 (eth1): state changed no lease
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[790]: <info>  [1764658223.6511] exiting (success)
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: Stopped Network Manager.
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: NetworkManager.service: Consumed 2.529s CPU time.
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: Starting Network Manager...
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.7036] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:9c01ca59-fcb0-40d3-99c4-2690b78cc18a)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.7039] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.7064] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: Started Network Manager.
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.7142] manager[0x5653ca5a9090]: monitoring kernel firmware directory '/lib/firmware'.
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: Starting Hostname Service...
Dec 02 06:50:23 np0005541914.novalocal sudo[5951]: pam_unix(sudo:session): session closed for user root
Dec 02 06:50:23 np0005541914.novalocal systemd[1]: Started Hostname Service.
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.7988] hostname: hostname: using hostnamed
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.7988] hostname: static hostname changed from (none) to "np0005541914.novalocal"
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.7992] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.7997] manager[0x5653ca5a9090]: rfkill: Wi-Fi hardware radio set enabled
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.7997] manager[0x5653ca5a9090]: rfkill: WWAN hardware radio set enabled
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8022] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8022] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8023] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8023] manager: Networking is enabled by state file
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8031] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8032] settings: Loaded settings plugin: keyfile (internal)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8079] dhcp: init: Using DHCP client 'internal'
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8083] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8092] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8102] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8119] device (lo): Activation: starting connection 'lo' (7d726e81-3c90-4757-9293-46e316e2c15c)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8129] device (eth0): carrier: link connected
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8135] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8143] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8144] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8152] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8167] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8174] device (eth1): carrier: link connected
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8180] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8187] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (770c572d-e688-3a45-874b-65e8776845d6) (indicated)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8188] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8196] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8208] device (eth1): Activation: starting connection 'Wired connection 1' (770c572d-e688-3a45-874b-65e8776845d6)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8249] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8255] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8262] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8268] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8273] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8277] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8281] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8285] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8295] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8301] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8316] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8321] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8374] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8378] dhcp4 (eth0): state changed new lease, address=38.102.83.204
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8387] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8394] device (lo): Activation: successful, device activated.
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8402] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8507] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8554] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8558] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8563] manager: NetworkManager state is now CONNECTED_SITE
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8568] device (eth0): Activation: successful, device activated.
Dec 02 06:50:23 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658223.8574] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 02 06:50:24 np0005541914.novalocal python3[6015]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-8e68-9bb8-000000000136-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 06:50:33 np0005541914.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 02 06:50:53 np0005541914.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 02 06:51:08 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658268.7619] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Dec 02 06:51:08 np0005541914.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 02 06:51:08 np0005541914.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 02 06:51:08 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658268.7831] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Dec 02 06:51:08 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658268.7836] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Dec 02 06:51:08 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658268.7846] device (eth1): Activation: successful, device activated.
Dec 02 06:51:08 np0005541914.novalocal NetworkManager[5967]: <info>  [1764658268.7856] manager: startup complete
Dec 02 06:51:08 np0005541914.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 02 06:51:18 np0005541914.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 02 06:51:24 np0005541914.novalocal sshd[5816]: Received disconnect from 38.102.83.114 port 46114:11: disconnected by user
Dec 02 06:51:24 np0005541914.novalocal sshd[5816]: Disconnected from user zuul 38.102.83.114 port 46114
Dec 02 06:51:24 np0005541914.novalocal sshd[5813]: pam_unix(sshd:session): session closed for user zuul
Dec 02 06:51:24 np0005541914.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 02 06:51:24 np0005541914.novalocal systemd[1]: session-3.scope: Consumed 1.401s CPU time.
Dec 02 06:51:24 np0005541914.novalocal systemd-logind[760]: Session 3 logged out. Waiting for processes to exit.
Dec 02 06:51:24 np0005541914.novalocal systemd-logind[760]: Removed session 3.
Dec 02 06:51:42 np0005541914.novalocal sshd[6055]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:51:42 np0005541914.novalocal sshd[6055]: Accepted publickey for zuul from 38.102.83.114 port 34908 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 06:51:42 np0005541914.novalocal systemd-logind[760]: New session 4 of user zuul.
Dec 02 06:51:42 np0005541914.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 02 06:51:42 np0005541914.novalocal sshd[6055]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 06:51:42 np0005541914.novalocal sudo[6104]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtmjbibmxagejqlwfcufmizcarundzlk ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 02 06:51:42 np0005541914.novalocal sudo[6104]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:51:42 np0005541914.novalocal python3[6106]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 06:51:42 np0005541914.novalocal sudo[6104]: pam_unix(sudo:session): session closed for user root
Dec 02 06:51:43 np0005541914.novalocal sudo[6147]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-laqlumsinetlpsqqajkfhbmuswrqgfby ; OS_CLOUD=vexxhost /usr/bin/python3
Dec 02 06:51:43 np0005541914.novalocal sudo[6147]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:51:43 np0005541914.novalocal python3[6149]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764658302.6624026-628-210851161158306/source _original_basename=tmpreutzs0i follow=False checksum=c2b23ffe44719bb1642f7b68b2bf34d320a2a721 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:51:43 np0005541914.novalocal sudo[6147]: pam_unix(sudo:session): session closed for user root
Dec 02 06:51:45 np0005541914.novalocal sshd[6055]: pam_unix(sshd:session): session closed for user zuul
Dec 02 06:51:45 np0005541914.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 02 06:51:45 np0005541914.novalocal systemd-logind[760]: Session 4 logged out. Waiting for processes to exit.
Dec 02 06:51:45 np0005541914.novalocal systemd-logind[760]: Removed session 4.
Dec 02 06:55:54 np0005541914.novalocal systemd[1]: Starting dnf makecache...
Dec 02 06:55:55 np0005541914.novalocal dnf[6165]: Failed determining last makecache time.
Dec 02 06:55:55 np0005541914.novalocal dnf[6165]: There are no enabled repositories in "/etc/yum.repos.d", "/etc/yum/repos.d", "/etc/distro.repos.d".
Dec 02 06:55:55 np0005541914.novalocal systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 02 06:55:55 np0005541914.novalocal systemd[1]: Finished dnf makecache.
Dec 02 06:57:38 np0005541914.novalocal sshd[6169]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:57:39 np0005541914.novalocal sshd[6169]: Accepted publickey for zuul from 38.102.83.114 port 59502 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 06:57:39 np0005541914.novalocal systemd-logind[760]: New session 5 of user zuul.
Dec 02 06:57:39 np0005541914.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 02 06:57:39 np0005541914.novalocal sshd[6169]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 06:57:39 np0005541914.novalocal sudo[6186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tkllrdijlxlgkqxmnvexoycgzvzzqcno ; /usr/bin/python3
Dec 02 06:57:39 np0005541914.novalocal sudo[6186]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:39 np0005541914.novalocal python3[6188]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-e6e8-5ca8-000000001d02-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 06:57:39 np0005541914.novalocal sudo[6186]: pam_unix(sudo:session): session closed for user root
Dec 02 06:57:40 np0005541914.novalocal sudo[6205]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ofaaclospihdwmgrqqakknnkvfutycoj ; /usr/bin/python3
Dec 02 06:57:40 np0005541914.novalocal sudo[6205]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:40 np0005541914.novalocal python3[6207]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:57:40 np0005541914.novalocal sudo[6205]: pam_unix(sudo:session): session closed for user root
Dec 02 06:57:40 np0005541914.novalocal sudo[6221]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efurxzwebtqkbzntqmevclwcvzfadywu ; /usr/bin/python3
Dec 02 06:57:40 np0005541914.novalocal sudo[6221]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:41 np0005541914.novalocal python3[6223]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:57:41 np0005541914.novalocal sudo[6221]: pam_unix(sudo:session): session closed for user root
Dec 02 06:57:41 np0005541914.novalocal sudo[6237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-arwvbkrfuerrdcmuoeekikdrvtaebqeo ; /usr/bin/python3
Dec 02 06:57:41 np0005541914.novalocal sudo[6237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:41 np0005541914.novalocal python3[6239]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:57:41 np0005541914.novalocal sudo[6237]: pam_unix(sudo:session): session closed for user root
Dec 02 06:57:41 np0005541914.novalocal sudo[6253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jvhewakpdwbibmdxbssstmobamtfeojb ; /usr/bin/python3
Dec 02 06:57:41 np0005541914.novalocal sudo[6253]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:41 np0005541914.novalocal python3[6255]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:57:41 np0005541914.novalocal sudo[6253]: pam_unix(sudo:session): session closed for user root
Dec 02 06:57:42 np0005541914.novalocal sudo[6269]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uvnbshecnsxktwupfjybqcjnrgxzkvgt ; /usr/bin/python3
Dec 02 06:57:42 np0005541914.novalocal sudo[6269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:42 np0005541914.novalocal python3[6271]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:57:42 np0005541914.novalocal sudo[6269]: pam_unix(sudo:session): session closed for user root
Dec 02 06:57:43 np0005541914.novalocal sudo[6317]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dcelcwnxnkfnyrkwbyihcqwwrirvqkap ; /usr/bin/python3
Dec 02 06:57:43 np0005541914.novalocal sudo[6317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:43 np0005541914.novalocal python3[6319]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 06:57:43 np0005541914.novalocal sudo[6317]: pam_unix(sudo:session): session closed for user root
Dec 02 06:57:43 np0005541914.novalocal sudo[6360]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfpniltoqviucldyuaqygmldumxkjmwg ; /usr/bin/python3
Dec 02 06:57:43 np0005541914.novalocal sudo[6360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:43 np0005541914.novalocal python3[6362]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764658663.2883444-648-40554091009218/source _original_basename=tmp0hal718q follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 06:57:43 np0005541914.novalocal sudo[6360]: pam_unix(sudo:session): session closed for user root
Dec 02 06:57:45 np0005541914.novalocal sudo[6390]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mrrnuszerfibkvbpxbppqhsjiblzxcws ; /usr/bin/python3
Dec 02 06:57:45 np0005541914.novalocal sudo[6390]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:45 np0005541914.novalocal python3[6392]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 06:57:45 np0005541914.novalocal systemd[1]: Reloading.
Dec 02 06:57:45 np0005541914.novalocal systemd-rc-local-generator[6409]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 06:57:45 np0005541914.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 06:57:46 np0005541914.novalocal sudo[6390]: pam_unix(sudo:session): session closed for user root
Dec 02 06:57:46 np0005541914.novalocal sudo[6436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjjlzeqcbhxhkszxaqgjfyhwtcnaowhl ; /usr/bin/python3
Dec 02 06:57:46 np0005541914.novalocal sudo[6436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:47 np0005541914.novalocal python3[6438]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 02 06:57:47 np0005541914.novalocal sudo[6436]: pam_unix(sudo:session): session closed for user root
Dec 02 06:57:48 np0005541914.novalocal sudo[6452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lclbsljjtxmdbinglnftwaorojfdpmcd ; /usr/bin/python3
Dec 02 06:57:48 np0005541914.novalocal sudo[6452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:48 np0005541914.novalocal python3[6454]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 06:57:48 np0005541914.novalocal sudo[6452]: pam_unix(sudo:session): session closed for user root
Dec 02 06:57:48 np0005541914.novalocal sudo[6470]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qpluchfcmwasdhuxpinkyqkcrqhaumbb ; /usr/bin/python3
Dec 02 06:57:48 np0005541914.novalocal sudo[6470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:48 np0005541914.novalocal python3[6472]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 06:57:48 np0005541914.novalocal sudo[6470]: pam_unix(sudo:session): session closed for user root
Dec 02 06:57:49 np0005541914.novalocal sudo[6488]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fpizfjkmgkqtdaviymrrjyeoczdmmqzb ; /usr/bin/python3
Dec 02 06:57:49 np0005541914.novalocal sudo[6488]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:49 np0005541914.novalocal python3[6490]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 06:57:49 np0005541914.novalocal sudo[6488]: pam_unix(sudo:session): session closed for user root
Dec 02 06:57:49 np0005541914.novalocal sudo[6506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pfrcqmzzgqbnxjmxdjplwgvmzpdbjfaf ; /usr/bin/python3
Dec 02 06:57:49 np0005541914.novalocal sudo[6506]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:57:49 np0005541914.novalocal python3[6508]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 06:57:49 np0005541914.novalocal sudo[6506]: pam_unix(sudo:session): session closed for user root
Dec 02 06:58:00 np0005541914.novalocal python3[6526]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-e6e8-5ca8-000000001d09-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 06:58:01 np0005541914.novalocal python3[6546]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 06:58:04 np0005541914.novalocal sshd[6169]: pam_unix(sshd:session): session closed for user zuul
Dec 02 06:58:04 np0005541914.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 02 06:58:04 np0005541914.novalocal systemd[1]: session-5.scope: Consumed 4.033s CPU time.
Dec 02 06:58:04 np0005541914.novalocal systemd-logind[760]: Session 5 logged out. Waiting for processes to exit.
Dec 02 06:58:04 np0005541914.novalocal systemd-logind[760]: Removed session 5.
Dec 02 06:59:23 np0005541914.novalocal sshd[6553]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 06:59:23 np0005541914.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Dec 02 06:59:23 np0005541914.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 02 06:59:23 np0005541914.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Dec 02 06:59:23 np0005541914.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 02 06:59:23 np0005541914.novalocal sshd[6553]: Accepted publickey for zuul from 38.102.83.114 port 44944 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 06:59:23 np0005541914.novalocal systemd-logind[760]: New session 6 of user zuul.
Dec 02 06:59:23 np0005541914.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 02 06:59:23 np0005541914.novalocal sshd[6553]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 06:59:23 np0005541914.novalocal sudo[6572]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sfqbkbupapsepiqhqyedwezufhdzklil ; /usr/bin/python3
Dec 02 06:59:23 np0005541914.novalocal sudo[6572]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 06:59:24 np0005541914.novalocal systemd[1]: Starting RHSM dbus service...
Dec 02 06:59:24 np0005541914.novalocal systemd[1]: Started RHSM dbus service.
Dec 02 06:59:24 np0005541914.novalocal rhsm-service[6579]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 02 06:59:24 np0005541914.novalocal rhsm-service[6579]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 02 06:59:24 np0005541914.novalocal rhsm-service[6579]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 02 06:59:24 np0005541914.novalocal rhsm-service[6579]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 02 06:59:26 np0005541914.novalocal rhsm-service[6579]:  INFO [subscription_manager.managerlib:90] Consumer created: np0005541914.novalocal (5e8bb4be-b98c-46c0-ac7b-5189dfb48508)
Dec 02 06:59:26 np0005541914.novalocal subscription-manager[6579]: Registered system with identity: 5e8bb4be-b98c-46c0-ac7b-5189dfb48508
Dec 02 06:59:26 np0005541914.novalocal rhsm-service[6579]:  INFO [subscription_manager.entcertlib:131] certs updated:
Dec 02 06:59:26 np0005541914.novalocal rhsm-service[6579]: Total updates: 1
Dec 02 06:59:26 np0005541914.novalocal rhsm-service[6579]: Found (local) serial# []
Dec 02 06:59:26 np0005541914.novalocal rhsm-service[6579]: Expected (UEP) serial# [958660430422246327]
Dec 02 06:59:26 np0005541914.novalocal rhsm-service[6579]: Added (new)
Dec 02 06:59:26 np0005541914.novalocal rhsm-service[6579]:   [sn:958660430422246327 ( Content Access,) @ /etc/pki/entitlement/958660430422246327.pem]
Dec 02 06:59:26 np0005541914.novalocal rhsm-service[6579]: Deleted (rogue):
Dec 02 06:59:26 np0005541914.novalocal rhsm-service[6579]:   <NONE>
Dec 02 06:59:26 np0005541914.novalocal subscription-manager[6579]: Added subscription for 'Content Access' contract 'None'
Dec 02 06:59:26 np0005541914.novalocal subscription-manager[6579]: Added subscription for product ' Content Access'
Dec 02 06:59:28 np0005541914.novalocal rhsm-service[6579]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 02 06:59:28 np0005541914.novalocal rhsm-service[6579]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Dec 02 06:59:28 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 06:59:28 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 06:59:28 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 06:59:28 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 06:59:28 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 06:59:29 np0005541914.novalocal sudo[6572]: pam_unix(sudo:session): session closed for user root
Dec 02 06:59:31 np0005541914.novalocal python3[6670]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163e3b-3c83-0809-2eed-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:00:24 np0005541914.novalocal sudo[6687]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjlfynckylfoihbvdldkmpsdssrarfek ; /usr/bin/python3
Dec 02 07:00:24 np0005541914.novalocal sudo[6687]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:00:24 np0005541914.novalocal python3[6689]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:01:01 np0005541914.novalocal CROND[6710]: (root) CMD (run-parts /etc/cron.hourly)
Dec 02 07:01:01 np0005541914.novalocal run-parts[6713]: (/etc/cron.hourly) starting 0anacron
Dec 02 07:01:01 np0005541914.novalocal anacron[6721]: Anacron started on 2025-12-02
Dec 02 07:01:01 np0005541914.novalocal anacron[6721]: Will run job `cron.daily' in 34 min.
Dec 02 07:01:01 np0005541914.novalocal anacron[6721]: Will run job `cron.weekly' in 54 min.
Dec 02 07:01:01 np0005541914.novalocal anacron[6721]: Will run job `cron.monthly' in 74 min.
Dec 02 07:01:01 np0005541914.novalocal anacron[6721]: Jobs will be executed sequentially
Dec 02 07:01:01 np0005541914.novalocal run-parts[6723]: (/etc/cron.hourly) finished 0anacron
Dec 02 07:01:01 np0005541914.novalocal CROND[6709]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 02 07:01:08 np0005541914.novalocal setsebool[6780]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 02 07:01:08 np0005541914.novalocal setsebool[6780]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 02 07:01:16 np0005541914.novalocal kernel: SELinux:  Converting 410 SID table entries...
Dec 02 07:01:16 np0005541914.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 07:01:16 np0005541914.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 02 07:01:16 np0005541914.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 07:01:16 np0005541914.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 02 07:01:16 np0005541914.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 07:01:16 np0005541914.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 07:01:16 np0005541914.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 07:01:29 np0005541914.novalocal dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=3 res=1
Dec 02 07:01:29 np0005541914.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 07:01:29 np0005541914.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 02 07:01:29 np0005541914.novalocal systemd[1]: Reloading.
Dec 02 07:01:29 np0005541914.novalocal systemd-rc-local-generator[7651]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:01:29 np0005541914.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:01:29 np0005541914.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 02 07:01:31 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:01:31 np0005541914.novalocal sudo[6687]: pam_unix(sudo:session): session closed for user root
Dec 02 07:01:31 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:01:33 np0005541914.novalocal sudo[12731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-alseugfjtmhuppuqbkxrmokaqultwfbl ; /usr/bin/python3
Dec 02 07:01:33 np0005541914.novalocal sudo[12731]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:01:33 np0005541914.novalocal podman[13009]: 2025-12-02 07:01:33.676958984 +0000 UTC m=+0.103677637 system refresh
Dec 02 07:01:34 np0005541914.novalocal sudo[12731]: pam_unix(sudo:session): session closed for user root
Dec 02 07:01:34 np0005541914.novalocal systemd[4177]: Starting D-Bus User Message Bus...
Dec 02 07:01:34 np0005541914.novalocal dbus-broker-launch[14516]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 02 07:01:34 np0005541914.novalocal dbus-broker-launch[14516]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 02 07:01:34 np0005541914.novalocal systemd[4177]: Started D-Bus User Message Bus.
Dec 02 07:01:34 np0005541914.novalocal dbus-broker-lau[14516]: Ready
Dec 02 07:01:34 np0005541914.novalocal systemd[4177]: selinux: avc:  op=load_policy lsm=selinux seqno=3 res=1
Dec 02 07:01:34 np0005541914.novalocal systemd[4177]: Created slice Slice /user.
Dec 02 07:01:34 np0005541914.novalocal systemd[4177]: podman-14343.scope: unit configures an IP firewall, but not running as root.
Dec 02 07:01:34 np0005541914.novalocal systemd[4177]: (This warning is only shown for the first unit using IP firewalling.)
Dec 02 07:01:34 np0005541914.novalocal systemd[4177]: Started podman-14343.scope.
Dec 02 07:01:34 np0005541914.novalocal systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:01:34 np0005541914.novalocal systemd[4177]: Started podman-pause-30e316fa.scope.
Dec 02 07:01:35 np0005541914.novalocal sshd[6553]: pam_unix(sshd:session): session closed for user zuul
Dec 02 07:01:35 np0005541914.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Dec 02 07:01:35 np0005541914.novalocal systemd[1]: session-6.scope: Consumed 51.879s CPU time.
Dec 02 07:01:35 np0005541914.novalocal systemd-logind[760]: Session 6 logged out. Waiting for processes to exit.
Dec 02 07:01:35 np0005541914.novalocal systemd-logind[760]: Removed session 6.
Dec 02 07:01:37 np0005541914.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 02 07:01:37 np0005541914.novalocal systemd[1]: Finished man-db-cache-update.service.
Dec 02 07:01:37 np0005541914.novalocal systemd[1]: man-db-cache-update.service: Consumed 9.832s CPU time.
Dec 02 07:01:37 np0005541914.novalocal systemd[1]: run-r0d21e8c1f6f54a6eb748a4c9f9ac015d.service: Deactivated successfully.
Dec 02 07:01:52 np0005541914.novalocal sshd[18439]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:01:52 np0005541914.novalocal sshd[18436]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:01:52 np0005541914.novalocal sshd[18437]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:01:52 np0005541914.novalocal sshd[18438]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:01:52 np0005541914.novalocal sshd[18435]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:01:52 np0005541914.novalocal sshd[18436]: Unable to negotiate with 38.102.83.45 port 53876: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 02 07:01:52 np0005541914.novalocal sshd[18435]: Connection closed by 38.102.83.45 port 53858 [preauth]
Dec 02 07:01:52 np0005541914.novalocal sshd[18437]: Connection closed by 38.102.83.45 port 53874 [preauth]
Dec 02 07:01:52 np0005541914.novalocal sshd[18438]: Unable to negotiate with 38.102.83.45 port 53892: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 02 07:01:52 np0005541914.novalocal sshd[18439]: Unable to negotiate with 38.102.83.45 port 53898: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 02 07:01:56 np0005541914.novalocal sshd[18445]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:01:56 np0005541914.novalocal sshd[18445]: Accepted publickey for zuul from 38.102.83.114 port 52494 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 07:01:56 np0005541914.novalocal systemd-logind[760]: New session 7 of user zuul.
Dec 02 07:01:56 np0005541914.novalocal systemd[1]: Started Session 7 of User zuul.
Dec 02 07:01:56 np0005541914.novalocal sshd[18445]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 07:01:57 np0005541914.novalocal python3[18462]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI3vTocdvpL7KoTE0s+B2HOorkXEJmfFflLp6CHTopK26IhGD4IX+p0PXIjQjXzwbw8u6vDuDtUAlLIH4wGuE2A= zuul@np0005541906.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 07:01:57 np0005541914.novalocal sudo[18476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjjmidezqdlnfmhrytmgytkxdxjqthvt ; /usr/bin/python3
Dec 02 07:01:57 np0005541914.novalocal sudo[18476]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:01:57 np0005541914.novalocal python3[18478]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI3vTocdvpL7KoTE0s+B2HOorkXEJmfFflLp6CHTopK26IhGD4IX+p0PXIjQjXzwbw8u6vDuDtUAlLIH4wGuE2A= zuul@np0005541906.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 07:01:57 np0005541914.novalocal sudo[18476]: pam_unix(sudo:session): session closed for user root
Dec 02 07:01:59 np0005541914.novalocal sshd[18445]: pam_unix(sshd:session): session closed for user zuul
Dec 02 07:01:59 np0005541914.novalocal systemd[1]: session-7.scope: Deactivated successfully.
Dec 02 07:01:59 np0005541914.novalocal systemd-logind[760]: Session 7 logged out. Waiting for processes to exit.
Dec 02 07:01:59 np0005541914.novalocal systemd-logind[760]: Removed session 7.
Dec 02 07:03:27 np0005541914.novalocal sshd[18481]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:03:27 np0005541914.novalocal sshd[18481]: Accepted publickey for zuul from 38.102.83.114 port 59596 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 07:03:27 np0005541914.novalocal systemd-logind[760]: New session 8 of user zuul.
Dec 02 07:03:27 np0005541914.novalocal systemd[1]: Started Session 8 of User zuul.
Dec 02 07:03:27 np0005541914.novalocal sshd[18481]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 07:03:28 np0005541914.novalocal sudo[18498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jshsekiyitwpwbhtnziirhdlilvelaqf ; /usr/bin/python3
Dec 02 07:03:28 np0005541914.novalocal sudo[18498]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:03:28 np0005541914.novalocal python3[18500]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 07:03:28 np0005541914.novalocal sudo[18498]: pam_unix(sudo:session): session closed for user root
Dec 02 07:03:29 np0005541914.novalocal sudo[18514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-abmhkbarglileakjobzxrlfiitlbjkwl ; /usr/bin/python3
Dec 02 07:03:29 np0005541914.novalocal sudo[18514]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:03:29 np0005541914.novalocal python3[18516]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541914.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 02 07:03:29 np0005541914.novalocal sudo[18514]: pam_unix(sudo:session): session closed for user root
Dec 02 07:03:30 np0005541914.novalocal sudo[18564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xczwppyrsaxfrflqrxjihlmvxaxearps ; /usr/bin/python3
Dec 02 07:03:30 np0005541914.novalocal sudo[18564]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:03:30 np0005541914.novalocal python3[18566]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:03:30 np0005541914.novalocal sudo[18564]: pam_unix(sudo:session): session closed for user root
Dec 02 07:03:31 np0005541914.novalocal sudo[18607]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdoaaaknjipsxlukfzzrjxxrvucbrgbo ; /usr/bin/python3
Dec 02 07:03:31 np0005541914.novalocal sudo[18607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:03:31 np0005541914.novalocal python3[18609]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764659010.694755-142-256007831715982/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=fa40fdabeeae48b78b01a4cbccbd42f6_id_rsa follow=False checksum=c9b7a1839a060a12dd883255955d0b791bf96d1d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:03:31 np0005541914.novalocal sudo[18607]: pam_unix(sudo:session): session closed for user root
Dec 02 07:03:32 np0005541914.novalocal sudo[18669]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhtoinmhnqyltdzojbduueprurbmzzmr ; /usr/bin/python3
Dec 02 07:03:32 np0005541914.novalocal sudo[18669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:03:32 np0005541914.novalocal python3[18671]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:03:32 np0005541914.novalocal sudo[18669]: pam_unix(sudo:session): session closed for user root
Dec 02 07:03:32 np0005541914.novalocal sudo[18712]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utkkulacvgjeuplmptgvmcaatswnngng ; /usr/bin/python3
Dec 02 07:03:32 np0005541914.novalocal sudo[18712]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:03:33 np0005541914.novalocal python3[18714]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764659012.4336061-229-92085273678354/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=fa40fdabeeae48b78b01a4cbccbd42f6_id_rsa.pub follow=False checksum=076b8979e1bf6ba70130c32daa0e2e874f6f0bae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:03:33 np0005541914.novalocal sudo[18712]: pam_unix(sudo:session): session closed for user root
Dec 02 07:03:35 np0005541914.novalocal sudo[18742]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ojpbfxrxlbhwfdhacaznqrrarkqhsqqs ; /usr/bin/python3
Dec 02 07:03:35 np0005541914.novalocal sudo[18742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:03:35 np0005541914.novalocal python3[18744]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:03:35 np0005541914.novalocal sudo[18742]: pam_unix(sudo:session): session closed for user root
Dec 02 07:03:36 np0005541914.novalocal python3[18790]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:03:36 np0005541914.novalocal python3[18806]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmp54dfnajq recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:03:37 np0005541914.novalocal python3[18866]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:03:37 np0005541914.novalocal python3[18882]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmpynf6tx62 recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:03:39 np0005541914.novalocal python3[18942]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:03:39 np0005541914.novalocal python3[18958]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmpanf1kntl recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:03:40 np0005541914.novalocal sshd[18481]: pam_unix(sshd:session): session closed for user zuul
Dec 02 07:03:40 np0005541914.novalocal systemd[1]: session-8.scope: Deactivated successfully.
Dec 02 07:03:40 np0005541914.novalocal systemd[1]: session-8.scope: Consumed 3.766s CPU time.
Dec 02 07:03:40 np0005541914.novalocal systemd-logind[760]: Session 8 logged out. Waiting for processes to exit.
Dec 02 07:03:40 np0005541914.novalocal systemd-logind[760]: Removed session 8.
Dec 02 07:05:53 np0005541914.novalocal sshd[18974]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:05:53 np0005541914.novalocal sshd[18974]: Accepted publickey for zuul from 38.102.83.45 port 60254 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 07:05:53 np0005541914.novalocal systemd-logind[760]: New session 9 of user zuul.
Dec 02 07:05:53 np0005541914.novalocal systemd[1]: Started Session 9 of User zuul.
Dec 02 07:05:53 np0005541914.novalocal sshd[18974]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 07:05:54 np0005541914.novalocal python3[19020]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:10:53 np0005541914.novalocal sshd[18977]: Received disconnect from 38.102.83.45 port 60254:11: disconnected by user
Dec 02 07:10:53 np0005541914.novalocal sshd[18977]: Disconnected from user zuul 38.102.83.45 port 60254
Dec 02 07:10:53 np0005541914.novalocal sshd[18974]: pam_unix(sshd:session): session closed for user zuul
Dec 02 07:10:53 np0005541914.novalocal systemd-logind[760]: Session 9 logged out. Waiting for processes to exit.
Dec 02 07:10:53 np0005541914.novalocal systemd[1]: session-9.scope: Deactivated successfully.
Dec 02 07:10:53 np0005541914.novalocal systemd-logind[760]: Removed session 9.
Dec 02 07:13:25 np0005541914.novalocal sshd[19025]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:13:25 np0005541914.novalocal sshd[19026]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:13:25 np0005541914.novalocal sshd[19026]: error: kex_exchange_identification: read: Connection reset by peer
Dec 02 07:13:25 np0005541914.novalocal sshd[19026]: Connection reset by 45.140.17.97 port 41883
Dec 02 07:14:43 np0005541914.novalocal sshd[19028]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:14:43 np0005541914.novalocal sshd[19028]: error: kex_exchange_identification: Connection closed by remote host
Dec 02 07:14:43 np0005541914.novalocal sshd[19028]: Connection closed by 45.148.10.240 port 52632
Dec 02 07:16:42 np0005541914.novalocal sshd[19029]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:16:43 np0005541914.novalocal sshd[19029]: Invalid user sol from 45.148.10.240 port 36010
Dec 02 07:16:43 np0005541914.novalocal sshd[19029]: Connection closed by invalid user sol 45.148.10.240 port 36010 [preauth]
Dec 02 07:18:16 np0005541914.novalocal sshd[19034]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:18:16 np0005541914.novalocal sshd[19034]: Accepted publickey for zuul from 38.102.83.114 port 50042 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 07:18:16 np0005541914.novalocal systemd-logind[760]: New session 10 of user zuul.
Dec 02 07:18:16 np0005541914.novalocal systemd[1]: Started Session 10 of User zuul.
Dec 02 07:18:16 np0005541914.novalocal sshd[19034]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 07:18:16 np0005541914.novalocal python3[19051]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163e3b-3c83-8c0a-0232-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:18:19 np0005541914.novalocal sudo[19069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dqwwwdlsqdruzkvenyshbbfaltihcppq ; /usr/bin/python3
Dec 02 07:18:19 np0005541914.novalocal sudo[19069]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:18:19 np0005541914.novalocal python3[19071]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163e3b-3c83-8c0a-0232-00000000000d-1-overcloudnovacompute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:18:22 np0005541914.novalocal sudo[19069]: pam_unix(sudo:session): session closed for user root
Dec 02 07:18:31 np0005541914.novalocal sshd[19075]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:18:32 np0005541914.novalocal sshd[19075]: Invalid user solana from 45.148.10.240 port 47266
Dec 02 07:18:32 np0005541914.novalocal sshd[19075]: Connection closed by invalid user solana 45.148.10.240 port 47266 [preauth]
Dec 02 07:18:50 np0005541914.novalocal sudo[19090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ivzballiktitteyosmscqoypmavohxgo ; /usr/bin/python3
Dec 02 07:18:50 np0005541914.novalocal sudo[19090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:18:50 np0005541914.novalocal python3[19092]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Dec 02 07:18:54 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:19:21 np0005541914.novalocal sudo[19090]: pam_unix(sudo:session): session closed for user root
Dec 02 07:19:26 np0005541914.novalocal sudo[19246]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bsgfcofvnhtdfrezspqgnyhsepwqsylf ; /usr/bin/python3
Dec 02 07:19:26 np0005541914.novalocal sudo[19246]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:19:27 np0005541914.novalocal python3[19248]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Dec 02 07:19:30 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:19:30 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:19:32 np0005541914.novalocal sudo[19246]: pam_unix(sudo:session): session closed for user root
Dec 02 07:19:49 np0005541914.novalocal sudo[19447]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qzugzahmgmlbyncauuwybkrhnjyqyzbe ; /usr/bin/python3
Dec 02 07:19:49 np0005541914.novalocal sudo[19447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:19:50 np0005541914.novalocal python3[19449]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Dec 02 07:19:52 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:19:52 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:19:58 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:19:58 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:20:05 np0005541914.novalocal sudo[19447]: pam_unix(sudo:session): session closed for user root
Dec 02 07:20:21 np0005541914.novalocal sudo[19782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trtsqhsbmgykidkyqrunqlxhhtnttxds ; /usr/bin/python3
Dec 02 07:20:21 np0005541914.novalocal sudo[19782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:20:21 np0005541914.novalocal python3[19784]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 02 07:20:25 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:20:25 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:20:30 np0005541914.novalocal sshd[20029]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:20:30 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:20:30 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:20:30 np0005541914.novalocal sshd[20029]: Invalid user sol from 45.148.10.240 port 35956
Dec 02 07:20:31 np0005541914.novalocal sshd[20029]: Connection closed by invalid user sol 45.148.10.240 port 35956 [preauth]
Dec 02 07:20:37 np0005541914.novalocal sudo[19782]: pam_unix(sudo:session): session closed for user root
Dec 02 07:20:57 np0005541914.novalocal sudo[20179]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqyfxpsvluibvruqrqzywbknattfmlcw ; /usr/bin/python3
Dec 02 07:20:57 np0005541914.novalocal sudo[20179]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:20:57 np0005541914.novalocal python3[20181]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Dec 02 07:21:00 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:21:00 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:21:05 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:21:14 np0005541914.novalocal sudo[20179]: pam_unix(sudo:session): session closed for user root
Dec 02 07:21:17 np0005541914.novalocal sudo[20516]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aaccvmajrdfkwcpmlwyvqtaxhggvrmbq ; /usr/bin/python3
Dec 02 07:21:17 np0005541914.novalocal sudo[20516]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:21:17 np0005541914.novalocal python3[20518]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-000000000013-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:21:19 np0005541914.novalocal sudo[20516]: pam_unix(sudo:session): session closed for user root
Dec 02 07:21:45 np0005541914.novalocal sudo[20536]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vfahendszbqvymvdzeumonoacfiniyuz ; /usr/bin/python3
Dec 02 07:21:45 np0005541914.novalocal sudo[20536]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:21:45 np0005541914.novalocal python3[20538]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:22:06 np0005541914.novalocal kernel: SELinux:  Converting 490 SID table entries...
Dec 02 07:22:06 np0005541914.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 07:22:06 np0005541914.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 02 07:22:06 np0005541914.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 07:22:06 np0005541914.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 02 07:22:06 np0005541914.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 07:22:06 np0005541914.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 07:22:06 np0005541914.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 07:22:06 np0005541914.novalocal groupadd[20646]: group added to /etc/group: name=unbound, GID=987
Dec 02 07:22:06 np0005541914.novalocal groupadd[20646]: group added to /etc/gshadow: name=unbound
Dec 02 07:22:06 np0005541914.novalocal groupadd[20646]: new group: name=unbound, GID=987
Dec 02 07:22:06 np0005541914.novalocal useradd[20653]: new user: name=unbound, UID=987, GID=987, home=/etc/unbound, shell=/sbin/nologin, from=none
Dec 02 07:22:06 np0005541914.novalocal dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Dec 02 07:22:06 np0005541914.novalocal systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 02 07:22:06 np0005541914.novalocal groupadd[20666]: group added to /etc/group: name=openvswitch, GID=986
Dec 02 07:22:06 np0005541914.novalocal groupadd[20666]: group added to /etc/gshadow: name=openvswitch
Dec 02 07:22:06 np0005541914.novalocal groupadd[20666]: new group: name=openvswitch, GID=986
Dec 02 07:22:06 np0005541914.novalocal useradd[20673]: new user: name=openvswitch, UID=986, GID=986, home=/, shell=/sbin/nologin, from=none
Dec 02 07:22:06 np0005541914.novalocal groupadd[20681]: group added to /etc/group: name=hugetlbfs, GID=985
Dec 02 07:22:06 np0005541914.novalocal groupadd[20681]: group added to /etc/gshadow: name=hugetlbfs
Dec 02 07:22:06 np0005541914.novalocal groupadd[20681]: new group: name=hugetlbfs, GID=985
Dec 02 07:22:06 np0005541914.novalocal usermod[20689]: add 'openvswitch' to group 'hugetlbfs'
Dec 02 07:22:06 np0005541914.novalocal usermod[20689]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 02 07:22:09 np0005541914.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 07:22:09 np0005541914.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 02 07:22:09 np0005541914.novalocal systemd[1]: Reloading.
Dec 02 07:22:09 np0005541914.novalocal systemd-rc-local-generator[21180]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:22:09 np0005541914.novalocal systemd-sysv-generator[21185]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:22:09 np0005541914.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:22:09 np0005541914.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 02 07:22:10 np0005541914.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 02 07:22:10 np0005541914.novalocal systemd[1]: Finished man-db-cache-update.service.
Dec 02 07:22:10 np0005541914.novalocal systemd[1]: run-r942c90c646fe4ff28a4562496897aeac.service: Deactivated successfully.
Dec 02 07:22:11 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:22:11 np0005541914.novalocal sudo[20536]: pam_unix(sudo:session): session closed for user root
Dec 02 07:22:11 np0005541914.novalocal rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
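The ansible.legacy.dnf task at 07:21:45 installs openvswitch, os-net-config and ansible-core; the SELinux policy reload, the unbound/openvswitch/hugetlbfs users and groups, and the man-db cache update above are all side effects of those package scriptlets. A rough command-line equivalent, as a sketch only (the module drives the DNF Python API rather than the CLI):

    import subprocess

    packages = ["openvswitch", "os-net-config", "ansible-core"]

    # "dnf -y makecache" plus "dnf -y install" mirrors state=present with
    # update_cache=True from the module invocation in the log.
    subprocess.run(["dnf", "-y", "makecache"], check=True)
    subprocess.run(["dnf", "-y", "install", *packages], check=True)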
Dec 02 07:22:26 np0005541914.novalocal sudo[21767]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vqaajcvhbnxfmpytxirgqyjunvczfijk ; /usr/bin/python3
Dec 02 07:22:26 np0005541914.novalocal sudo[21767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:22:26 np0005541914.novalocal python3[21769]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-000000000015-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:22:28 np0005541914.novalocal sshd[21773]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:22:29 np0005541914.novalocal sshd[21773]: Invalid user ubuntu from 45.148.10.240 port 39576
Dec 02 07:22:29 np0005541914.novalocal sshd[21773]: Connection closed by invalid user ubuntu 45.148.10.240 port 39576 [preauth]
Dec 02 07:22:42 np0005541914.novalocal sudo[21767]: pam_unix(sudo:session): session closed for user root
Dec 02 07:22:45 np0005541914.novalocal sudo[21789]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eazjchjndxfyqovwjrtdemdtcuvmuevj ; /usr/bin/python3
Dec 02 07:22:45 np0005541914.novalocal sudo[21789]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:22:45 np0005541914.novalocal python3[21791]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:22:45 np0005541914.novalocal sudo[21789]: pam_unix(sudo:session): session closed for user root
Dec 02 07:22:46 np0005541914.novalocal sudo[21837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ihdnhjxqjrcvohuvpfmsaprqnyxggvjh ; /usr/bin/python3
Dec 02 07:22:46 np0005541914.novalocal sudo[21837]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:22:47 np0005541914.novalocal python3[21839]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:22:47 np0005541914.novalocal sudo[21837]: pam_unix(sudo:session): session closed for user root
Dec 02 07:22:47 np0005541914.novalocal sudo[21880]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dcezyisiekarybmpwsfebyfcduqdxvzu ; /usr/bin/python3
Dec 02 07:22:47 np0005541914.novalocal sudo[21880]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:22:47 np0005541914.novalocal python3[21882]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764660166.8141565-334-92596069481338/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=91bc45728dd9738fc644e3ada9d8642294da29ff backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:22:47 np0005541914.novalocal sudo[21880]: pam_unix(sudo:session): session closed for user root
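The copy task above writes /etc/os-net-config/tripleo_config.yaml, rendered from the overcloud_net_config.j2 template. A small sketch to sanity-check that the rendered file parses before os-net-config consumes it; PyYAML availability and the "network_config" top-level key (the conventional key in os-net-config files) are assumptions here, not something shown in the log:

    import yaml  # PyYAML, assumed available on the host

    with open("/etc/os-net-config/tripleo_config.yaml") as fh:
        cfg = yaml.safe_load(fh)

    # os-net-config files conventionally keep their interface and bridge
    # definitions under "network_config"; treat this key as an assumption.
    assert isinstance(cfg, dict) and "network_config" in cfg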
Dec 02 07:22:49 np0005541914.novalocal sudo[21910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhjxjuroalxlhdltrvzgdvhqvhbgzhbv ; /usr/bin/python3
Dec 02 07:22:49 np0005541914.novalocal sudo[21910]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:22:49 np0005541914.novalocal python3[21912]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network  state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 02 07:22:49 np0005541914.novalocal sudo[21910]: pam_unix(sudo:session): session closed for user root
Dec 02 07:22:49 np0005541914.novalocal systemd-journald[619]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation.
Dec 02 07:22:49 np0005541914.novalocal systemd-journald[619]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 02 07:22:49 np0005541914.novalocal rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 07:22:49 np0005541914.novalocal rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 07:22:49 np0005541914.novalocal sudo[21931]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lvxfraclqjhenzpppeycsjqnxnaewfvr ; /usr/bin/python3
Dec 02 07:22:49 np0005541914.novalocal sudo[21931]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:22:49 np0005541914.novalocal python3[21933]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 02 07:22:49 np0005541914.novalocal sudo[21931]: pam_unix(sudo:session): session closed for user root
Dec 02 07:22:49 np0005541914.novalocal sudo[21951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ygatjwmchphedisggcxkqnpqwhddddgs ; /usr/bin/python3
Dec 02 07:22:49 np0005541914.novalocal sudo[21951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:22:49 np0005541914.novalocal python3[21953]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 02 07:22:49 np0005541914.novalocal sudo[21951]: pam_unix(sudo:session): session closed for user root
Dec 02 07:22:49 np0005541914.novalocal sudo[21971]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-navenchhietufxrzakxtvirddkjkospu ; /usr/bin/python3
Dec 02 07:22:49 np0005541914.novalocal sudo[21971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:22:50 np0005541914.novalocal python3[21973]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 02 07:22:50 np0005541914.novalocal sudo[21971]: pam_unix(sudo:session): session closed for user root
Dec 02 07:22:50 np0005541914.novalocal sudo[21991]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwsedlrilhradktturnfgvwrniiloggw ; /usr/bin/python3
Dec 02 07:22:50 np0005541914.novalocal sudo[21991]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:22:50 np0005541914.novalocal python3[21993]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Dec 02 07:22:50 np0005541914.novalocal sudo[21991]: pam_unix(sudo:session): session closed for user root
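The five nmcli tasks above remove the CI-provisioned connection profiles (ci-private-network and its -20/-21/-22/-23 variants) so the interfaces can be handed over to os-net-config. A sketch of the equivalent direct nmcli calls, with the profile names copied from the log; check=False tolerates profiles that are already gone:

    import subprocess

    profiles = [
        "ci-private-network",
        "ci-private-network-20",
        "ci-private-network-21",
        "ci-private-network-22",
        "ci-private-network-23",
    ]

    for name in profiles:
        # state=absent in the nmcli module maps to "nmcli connection delete".
        subprocess.run(["nmcli", "connection", "delete", name], check=False)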
Dec 02 07:22:53 np0005541914.novalocal sudo[22011]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-levkacdueaxhabpnlstolgvdyrbfqxtw ; /usr/bin/python3
Dec 02 07:22:53 np0005541914.novalocal sudo[22011]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:22:53 np0005541914.novalocal python3[22013]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 07:22:53 np0005541914.novalocal systemd[1]: Starting LSB: Bring up/down networking...
Dec 02 07:22:53 np0005541914.novalocal network[22016]: WARN      : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 07:22:53 np0005541914.novalocal network[22027]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 07:22:53 np0005541914.novalocal network[22016]: WARN      : [network] 'network-scripts' will be removed from distribution in near future.
Dec 02 07:22:53 np0005541914.novalocal network[22028]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:22:53 np0005541914.novalocal network[22016]: WARN      : [network] It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 07:22:53 np0005541914.novalocal network[22029]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 07:22:53 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660173.3911] audit: op="connections-reload" pid=22057 uid=0 result="success"
Dec 02 07:22:53 np0005541914.novalocal network[22016]: Bringing up loopback interface:  [  OK  ]
Dec 02 07:22:53 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660173.5960] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22145 uid=0 result="success"
Dec 02 07:22:53 np0005541914.novalocal network[22016]: Bringing up interface eth0:  [  OK  ]
Dec 02 07:22:53 np0005541914.novalocal systemd[1]: Started LSB: Bring up/down networking.
Dec 02 07:22:53 np0005541914.novalocal sudo[22011]: pam_unix(sudo:session): session closed for user root
Dec 02 07:22:53 np0005541914.novalocal sudo[22185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pwveylnqaaypjwmrnebdjauaraufkjbi ; /usr/bin/python3
Dec 02 07:22:53 np0005541914.novalocal sudo[22185]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:22:54 np0005541914.novalocal python3[22187]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 07:22:54 np0005541914.novalocal systemd[1]: Starting Open vSwitch Database Unit...
Dec 02 07:22:54 np0005541914.novalocal chown[22191]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 02 07:22:54 np0005541914.novalocal ovs-ctl[22196]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 02 07:22:54 np0005541914.novalocal ovs-ctl[22196]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 02 07:22:54 np0005541914.novalocal ovs-ctl[22196]: Starting ovsdb-server [  OK  ]
Dec 02 07:22:54 np0005541914.novalocal ovs-vsctl[22245]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 02 07:22:54 np0005541914.novalocal ovs-vsctl[22265]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"515e0717-8baa-40e6-ac30-5fb148626504\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\""
Dec 02 07:22:54 np0005541914.novalocal ovs-ctl[22196]: Configuring Open vSwitch system IDs [  OK  ]
Dec 02 07:22:54 np0005541914.novalocal ovs-ctl[22196]: Enabling remote OVSDB managers [  OK  ]
Dec 02 07:22:54 np0005541914.novalocal systemd[1]: Started Open vSwitch Database Unit.
Dec 02 07:22:54 np0005541914.novalocal ovs-vsctl[22271]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005541914.novalocal
Dec 02 07:22:54 np0005541914.novalocal systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 02 07:22:54 np0005541914.novalocal systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 02 07:22:54 np0005541914.novalocal systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 02 07:22:54 np0005541914.novalocal kernel: openvswitch: Open vSwitch switching datapath
Dec 02 07:22:54 np0005541914.novalocal ovs-ctl[22315]: Inserting openvswitch module [  OK  ]
Dec 02 07:22:54 np0005541914.novalocal ovs-ctl[22284]: Starting ovs-vswitchd [  OK  ]
Dec 02 07:22:54 np0005541914.novalocal ovs-vsctl[22333]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005541914.novalocal
Dec 02 07:22:54 np0005541914.novalocal ovs-ctl[22284]: Enabling remote OVSDB managers [  OK  ]
Dec 02 07:22:54 np0005541914.novalocal systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 02 07:22:54 np0005541914.novalocal systemd[1]: Starting Open vSwitch...
Dec 02 07:22:54 np0005541914.novalocal systemd[1]: Finished Open vSwitch.
Dec 02 07:22:54 np0005541914.novalocal sudo[22185]: pam_unix(sudo:session): session closed for user root
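Starting the openvswitch unit pulls in ovsdb-server (creating /etc/openvswitch/conf.db on first start, as the ovs-ctl warning shows) and ovs-vswitchd, which loads the openvswitch kernel module. A small post-start check, as a sketch:

    import subprocess

    # Confirm both daemons are active and the database answers queries.
    for unit in ("ovsdb-server", "ovs-vswitchd"):
        subprocess.run(["systemctl", "is-active", "--quiet", unit], check=True)

    # "ovs-vsctl show" prints the current bridge/port layout from ovsdb.
    print(subprocess.run(["ovs-vsctl", "show"],
                         capture_output=True, text=True, check=True).stdout)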
Dec 02 07:23:25 np0005541914.novalocal sudo[22349]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sonfgxdtdkfgtekxlhxobicdsckjetuj ; /usr/bin/python3
Dec 02 07:23:25 np0005541914.novalocal sudo[22349]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:23:25 np0005541914.novalocal python3[22351]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-00000000001a-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:23:26 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660206.6463] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22510 uid=0 result="success"
Dec 02 07:23:26 np0005541914.novalocal ifup[22511]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 02 07:23:26 np0005541914.novalocal ifup[22512]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:23:26 np0005541914.novalocal ifup[22513]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 02 07:23:26 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660206.6676] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22519 uid=0 result="success"
Dec 02 07:23:26 np0005541914.novalocal ovs-vsctl[22521]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:08:72:ba -- set bridge br-ex fail_mode=standalone -- del-controller br-ex
Dec 02 07:23:26 np0005541914.novalocal kernel: device ovs-system entered promiscuous mode
Dec 02 07:23:26 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660206.7213] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4)
Dec 02 07:23:26 np0005541914.novalocal systemd-udevd[22522]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 07:23:26 np0005541914.novalocal kernel: Timeout policy base is empty
Dec 02 07:23:26 np0005541914.novalocal kernel: Failed to associated timeout policy `ovs_test_tp'
Dec 02 07:23:26 np0005541914.novalocal kernel: device br-ex entered promiscuous mode
Dec 02 07:23:26 np0005541914.novalocal systemd-udevd[22533]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 07:23:26 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660206.7623] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Dec 02 07:23:26 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660206.7836] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22547 uid=0 result="success"
Dec 02 07:23:26 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660206.8025] device (br-ex): carrier: link connected
Dec 02 07:23:29 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660209.8580] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22576 uid=0 result="success"
Dec 02 07:23:29 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660209.9053] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22591 uid=0 result="success"
Dec 02 07:23:29 np0005541914.novalocal NET[22616]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Dec 02 07:23:30 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660210.0004] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed')
Dec 02 07:23:30 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660210.0106] dhcp4 (eth1): canceled DHCP transaction
Dec 02 07:23:30 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660210.0106] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 02 07:23:30 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660210.0106] dhcp4 (eth1): state changed no lease
Dec 02 07:23:30 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660210.0147] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22625 uid=0 result="success"
Dec 02 07:23:30 np0005541914.novalocal ifup[22626]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 02 07:23:30 np0005541914.novalocal ifup[22627]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:23:30 np0005541914.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 02 07:23:30 np0005541914.novalocal ifup[22629]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 02 07:23:30 np0005541914.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 02 07:23:30 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660210.0517] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22642 uid=0 result="success"
Dec 02 07:23:30 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660210.1018] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22653 uid=0 result="success"
Dec 02 07:23:30 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660210.1098] device (eth1): carrier: link connected
Dec 02 07:23:30 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660210.1288] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22662 uid=0 result="success"
Dec 02 07:23:30 np0005541914.novalocal ipv6_wait_tentative[22674]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Dec 02 07:23:31 np0005541914.novalocal ipv6_wait_tentative[22679]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Dec 02 07:23:32 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660212.2007] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22688 uid=0 result="success"
Dec 02 07:23:32 np0005541914.novalocal ovs-vsctl[22703]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1
Dec 02 07:23:32 np0005541914.novalocal kernel: device eth1 entered promiscuous mode
Dec 02 07:23:32 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660212.2654] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22711 uid=0 result="success"
Dec 02 07:23:32 np0005541914.novalocal ifup[22712]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 02 07:23:32 np0005541914.novalocal ifup[22713]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:23:32 np0005541914.novalocal ifup[22714]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 02 07:23:32 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660212.2908] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22720 uid=0 result="success"
Dec 02 07:23:32 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660212.3309] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22730 uid=0 result="success"
Dec 02 07:23:32 np0005541914.novalocal ifup[22731]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 02 07:23:32 np0005541914.novalocal ifup[22732]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:23:32 np0005541914.novalocal ifup[22733]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 02 07:23:32 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660212.3610] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22739 uid=0 result="success"
Dec 02 07:23:32 np0005541914.novalocal ovs-vsctl[22742]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Dec 02 07:23:32 np0005541914.novalocal kernel: device vlan20 entered promiscuous mode
Dec 02 07:23:32 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660212.3987] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/6)
Dec 02 07:23:32 np0005541914.novalocal systemd-udevd[22744]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 07:23:32 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660212.4222] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22753 uid=0 result="success"
Dec 02 07:23:32 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660212.4410] device (vlan20): carrier: link connected
Dec 02 07:23:35 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660215.4908] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22782 uid=0 result="success"
Dec 02 07:23:35 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660215.5369] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22797 uid=0 result="success"
Dec 02 07:23:35 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660215.5967] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22818 uid=0 result="success"
Dec 02 07:23:35 np0005541914.novalocal ifup[22819]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 02 07:23:35 np0005541914.novalocal ifup[22820]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:23:35 np0005541914.novalocal ifup[22821]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 02 07:23:35 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660215.6284] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22827 uid=0 result="success"
Dec 02 07:23:35 np0005541914.novalocal ovs-vsctl[22830]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Dec 02 07:23:35 np0005541914.novalocal kernel: device vlan22 entered promiscuous mode
Dec 02 07:23:35 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660215.7001] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/7)
Dec 02 07:23:35 np0005541914.novalocal systemd-udevd[22833]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 07:23:35 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660215.7261] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22842 uid=0 result="success"
Dec 02 07:23:35 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660215.7466] device (vlan22): carrier: link connected
Dec 02 07:23:38 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660218.8005] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22872 uid=0 result="success"
Dec 02 07:23:38 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660218.8498] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=22887 uid=0 result="success"
Dec 02 07:23:38 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660218.9192] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22908 uid=0 result="success"
Dec 02 07:23:38 np0005541914.novalocal ifup[22909]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 02 07:23:38 np0005541914.novalocal ifup[22910]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:23:38 np0005541914.novalocal ifup[22911]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 02 07:23:38 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660218.9504] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22917 uid=0 result="success"
Dec 02 07:23:38 np0005541914.novalocal ovs-vsctl[22920]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Dec 02 07:23:38 np0005541914.novalocal kernel: device vlan44 entered promiscuous mode
Dec 02 07:23:38 np0005541914.novalocal systemd-udevd[22922]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 07:23:38 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660218.9892] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/8)
Dec 02 07:23:39 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660219.0147] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22932 uid=0 result="success"
Dec 02 07:23:39 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660219.0374] device (vlan44): carrier: link connected
Dec 02 07:23:40 np0005541914.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 02 07:23:42 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660222.0903] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22962 uid=0 result="success"
Dec 02 07:23:42 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660222.1372] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22977 uid=0 result="success"
Dec 02 07:23:42 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660222.1962] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=22998 uid=0 result="success"
Dec 02 07:23:42 np0005541914.novalocal ifup[22999]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 02 07:23:42 np0005541914.novalocal ifup[23000]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:23:42 np0005541914.novalocal ifup[23001]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 02 07:23:42 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660222.2310] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23007 uid=0 result="success"
Dec 02 07:23:42 np0005541914.novalocal ovs-vsctl[23010]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Dec 02 07:23:42 np0005541914.novalocal kernel: device vlan23 entered promiscuous mode
Dec 02 07:23:42 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660222.2728] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/9)
Dec 02 07:23:42 np0005541914.novalocal systemd-udevd[23012]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 07:23:42 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660222.2999] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23022 uid=0 result="success"
Dec 02 07:23:42 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660222.3207] device (vlan23): carrier: link connected
Dec 02 07:23:45 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660225.3846] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23052 uid=0 result="success"
Dec 02 07:23:45 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660225.4283] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23067 uid=0 result="success"
Dec 02 07:23:45 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660225.4959] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23088 uid=0 result="success"
Dec 02 07:23:45 np0005541914.novalocal ifup[23089]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 02 07:23:45 np0005541914.novalocal ifup[23090]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:23:45 np0005541914.novalocal ifup[23091]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 02 07:23:45 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660225.5387] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23097 uid=0 result="success"
Dec 02 07:23:45 np0005541914.novalocal ovs-vsctl[23100]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Dec 02 07:23:45 np0005541914.novalocal kernel: device vlan21 entered promiscuous mode
Dec 02 07:23:45 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660225.5820] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/10)
Dec 02 07:23:45 np0005541914.novalocal systemd-udevd[23103]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 07:23:45 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660225.6116] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23112 uid=0 result="success"
Dec 02 07:23:45 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660225.6328] device (vlan21): carrier: link connected
Dec 02 07:23:48 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660228.7275] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23142 uid=0 result="success"
Dec 02 07:23:48 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660228.7766] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23157 uid=0 result="success"
Dec 02 07:23:48 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660228.8379] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23178 uid=0 result="success"
Dec 02 07:23:48 np0005541914.novalocal ifup[23179]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 02 07:23:48 np0005541914.novalocal ifup[23180]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:23:48 np0005541914.novalocal ifup[23181]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 02 07:23:48 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660228.8732] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23187 uid=0 result="success"
Dec 02 07:23:48 np0005541914.novalocal ovs-vsctl[23190]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Dec 02 07:23:48 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660228.9761] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23197 uid=0 result="success"
Dec 02 07:23:50 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660230.0380] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23224 uid=0 result="success"
Dec 02 07:23:50 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660230.0834] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23239 uid=0 result="success"
Dec 02 07:23:50 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660230.1464] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23260 uid=0 result="success"
Dec 02 07:23:50 np0005541914.novalocal ifup[23261]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 02 07:23:50 np0005541914.novalocal ifup[23262]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:23:50 np0005541914.novalocal ifup[23263]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 02 07:23:50 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660230.1763] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23269 uid=0 result="success"
Dec 02 07:23:50 np0005541914.novalocal ovs-vsctl[23272]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Dec 02 07:23:50 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660230.2343] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23279 uid=0 result="success"
Dec 02 07:23:51 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660231.3019] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23307 uid=0 result="success"
Dec 02 07:23:51 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660231.3462] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23322 uid=0 result="success"
Dec 02 07:23:51 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660231.4064] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23343 uid=0 result="success"
Dec 02 07:23:51 np0005541914.novalocal ifup[23344]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 02 07:23:51 np0005541914.novalocal ifup[23345]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:23:51 np0005541914.novalocal ifup[23346]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 02 07:23:51 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660231.4371] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23352 uid=0 result="success"
Dec 02 07:23:51 np0005541914.novalocal ovs-vsctl[23355]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Dec 02 07:23:51 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660231.5266] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23362 uid=0 result="success"
Dec 02 07:23:52 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660232.5928] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23390 uid=0 result="success"
Dec 02 07:23:52 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660232.6431] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23405 uid=0 result="success"
Dec 02 07:23:52 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660232.6991] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23426 uid=0 result="success"
Dec 02 07:23:52 np0005541914.novalocal ifup[23427]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 02 07:23:52 np0005541914.novalocal ifup[23428]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:23:52 np0005541914.novalocal ifup[23429]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 02 07:23:52 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660232.7289] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23435 uid=0 result="success"
Dec 02 07:23:52 np0005541914.novalocal ovs-vsctl[23438]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Dec 02 07:23:52 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660232.7834] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23445 uid=0 result="success"
Dec 02 07:23:53 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660233.8401] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23473 uid=0 result="success"
Dec 02 07:23:53 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660233.8874] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23488 uid=0 result="success"
Dec 02 07:23:53 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660233.9519] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23509 uid=0 result="success"
Dec 02 07:23:53 np0005541914.novalocal ifup[23510]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Dec 02 07:23:53 np0005541914.novalocal ifup[23511]: 'network-scripts' will be removed from distribution in near future.
Dec 02 07:23:53 np0005541914.novalocal ifup[23512]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Dec 02 07:23:53 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660233.9851] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23518 uid=0 result="success"
Dec 02 07:23:54 np0005541914.novalocal ovs-vsctl[23521]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Dec 02 07:23:54 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660234.0766] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23528 uid=0 result="success"
Dec 02 07:23:55 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660235.1348] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23556 uid=0 result="success"
Dec 02 07:23:55 np0005541914.novalocal NetworkManager[5967]: <info>  [1764660235.1819] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23571 uid=0 result="success"
Dec 02 07:23:55 np0005541914.novalocal sudo[22349]: pam_unix(sudo:session): session closed for user root
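The os-net-config run above translates tripleo_config.yaml into the ifcfg files and ovs-vsctl calls seen in the log: br-ex is created with eth1 as its uplink and vlan20/21/22/23/44 as tagged internal ports. A condensed replay of the logged ovs-vsctl operations, purely as a sketch of what the tool issued (tags taken from the log lines; the bridge MAC and other-config settings are omitted):

    import subprocess

    def vsctl(*args: str) -> None:
        # "-t 10" matches the ten-second timeout used in the logged calls.
        subprocess.run(["ovs-vsctl", "-t", "10", *args], check=True)

    vsctl("--", "--may-exist", "add-br", "br-ex",
          "--", "set", "bridge", "br-ex", "fail_mode=standalone")
    vsctl("--", "--if-exists", "del-port", "br-ex", "eth1",
          "--", "add-port", "br-ex", "eth1")
    for vlan, tag in [("vlan20", "20"), ("vlan21", "21"), ("vlan22", "22"),
                      ("vlan23", "23"), ("vlan44", "44")]:
        vsctl("--", "--if-exists", "del-port", "br-ex", vlan,
              "--", "add-port", "br-ex", vlan, f"tag={tag}",
              "--", "set", "Interface", vlan, "type=internal")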
Dec 02 07:24:18 np0005541914.novalocal sshd[23590]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:24:18 np0005541914.novalocal sshd[23590]: Invalid user ubuntu from 45.148.10.240 port 53026
Dec 02 07:24:18 np0005541914.novalocal sshd[23590]: Connection closed by invalid user ubuntu 45.148.10.240 port 53026 [preauth]
Dec 02 07:24:19 np0005541914.novalocal python3[23605]: ansible-ansible.legacy.command Invoked with _raw_params=ip a
                                                       ping -c 2 -W 2 192.168.122.10
                                                       ping -c 2 -W 2 192.168.122.11
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-00000000001b-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
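The multi-line command above is a connectivity check against the other overcloud nodes on 192.168.122.0/24 once the new bridges and VLAN interfaces are up. A minimal equivalent of the ping portion in Python, with the addresses copied from the task; a non-zero exit status means the peer did not answer within the 2-second deadline:

    import subprocess

    for addr in ("192.168.122.10", "192.168.122.11"):
        result = subprocess.run(
            ["ping", "-c", "2", "-W", "2", addr],
            capture_output=True, text=True,
        )
        status = "reachable" if result.returncode == 0 else "unreachable"
        print(f"{addr}: {status}")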
Dec 02 07:24:24 np0005541914.novalocal python3[23624]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 07:24:24 np0005541914.novalocal sudo[23638]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rdlqyeaquqpbfdilswtitkhdfpyaqkne ; /usr/bin/python3
Dec 02 07:24:24 np0005541914.novalocal sudo[23638]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:24:24 np0005541914.novalocal python3[23640]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 07:24:24 np0005541914.novalocal sudo[23638]: pam_unix(sudo:session): session closed for user root
Dec 02 07:24:26 np0005541914.novalocal python3[23654]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 07:24:26 np0005541914.novalocal sudo[23668]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aprlqnxopzwsqedoomayjbzbegunkjml ; /usr/bin/python3
Dec 02 07:24:26 np0005541914.novalocal sudo[23668]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:24:26 np0005541914.novalocal python3[23670]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 02 07:24:26 np0005541914.novalocal sudo[23668]: pam_unix(sudo:session): session closed for user root
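The four authorized_key invocations above (twice for zuul, twice for root) all install the same zuul-build-sshkey. Outside of Ansible, each is roughly equivalent to the following shell sketch; the $PUBKEY placeholder and the idempotency check are illustrative additions, not taken from the job:

    PUBKEY='ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB... zuul-build-sshkey'   # full key as logged above
    install -d -m 0700 ~/.ssh
    grep -qxF "$PUBKEY" ~/.ssh/authorized_keys 2>/dev/null || echo "$PUBKEY" >> ~/.ssh/authorized_keys
    chmod 0600 ~/.ssh/authorized_keys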
Dec 02 07:24:27 np0005541914.novalocal python3[23684]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname
Dec 02 07:24:27 np0005541914.novalocal python3[23699]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005541914.novalocal"
                                                       hostname_str_array=(${hostname//./ })
                                                       echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-000000000022-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:24:28 np0005541914.novalocal sudo[23717]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hiyginijsadahzwjjucrrsbmtcsxlvdw ; /usr/bin/python3
Dec 02 07:24:28 np0005541914.novalocal sudo[23717]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:24:28 np0005541914.novalocal python3[23719]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)
                                                       hostnamectl hostname "$hostname.localdomain"
                                                        _uses_shell=True zuul_log_id=fa163e3b-3c83-8c0a-0232-000000000023-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
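Reassembled as a standalone sketch, the two _raw_params blocks above (the split of /etc/hostname, then the hostnamectl call in the follow-up task) amount to the shell sequence below; the individual commands are verbatim from the log, only the combined form is added here:

    hostname="np0005541914.novalocal"              # contents of /etc/hostname as slurped above
    hostname_str_array=(${hostname//./ })          # split the FQDN on dots
    echo "${hostname_str_array[0]}" > /home/zuul/ansible_hostname
    hostname=$(cat /home/zuul/ansible_hostname)
    hostnamectl hostname "$hostname.localdomain"   # -> np0005541914.localdomain, see the systemd-hostnamed line below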
Dec 02 07:24:28 np0005541914.novalocal systemd[1]: Starting Hostname Service...
Dec 02 07:24:28 np0005541914.novalocal systemd[1]: Started Hostname Service.
Dec 02 07:24:28 np0005541914.localdomain systemd-hostnamed[23723]: Hostname set to <np0005541914.localdomain> (static)
Dec 02 07:24:28 np0005541914.localdomain NetworkManager[5967]: <info>  [1764660268.9065] hostname: static hostname changed from "np0005541914.novalocal" to "np0005541914.localdomain"
Dec 02 07:24:28 np0005541914.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 02 07:24:28 np0005541914.localdomain systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 02 07:24:28 np0005541914.localdomain sudo[23717]: pam_unix(sudo:session): session closed for user root
Dec 02 07:24:29 np0005541914.localdomain sshd[19034]: pam_unix(sshd:session): session closed for user zuul
Dec 02 07:24:29 np0005541914.localdomain systemd[1]: session-10.scope: Deactivated successfully.
Dec 02 07:24:29 np0005541914.localdomain systemd[1]: session-10.scope: Consumed 1min 47.687s CPU time.
Dec 02 07:24:29 np0005541914.localdomain systemd-logind[760]: Session 10 logged out. Waiting for processes to exit.
Dec 02 07:24:29 np0005541914.localdomain systemd-logind[760]: Removed session 10.
Dec 02 07:24:33 np0005541914.localdomain sshd[23734]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:24:33 np0005541914.localdomain sshd[23734]: Accepted publickey for zuul from 38.102.83.114 port 58540 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 07:24:33 np0005541914.localdomain systemd-logind[760]: New session 11 of user zuul.
Dec 02 07:24:33 np0005541914.localdomain systemd[1]: Started Session 11 of User zuul.
Dec 02 07:24:33 np0005541914.localdomain sshd[23734]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 07:24:33 np0005541914.localdomain python3[23751]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Dec 02 07:24:34 np0005541914.localdomain sshd[23734]: pam_unix(sshd:session): session closed for user zuul
Dec 02 07:24:34 np0005541914.localdomain systemd-logind[760]: Session 11 logged out. Waiting for processes to exit.
Dec 02 07:24:34 np0005541914.localdomain systemd[1]: session-11.scope: Deactivated successfully.
Dec 02 07:24:34 np0005541914.localdomain systemd-logind[760]: Removed session 11.
Dec 02 07:24:38 np0005541914.localdomain systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 02 07:24:58 np0005541914.localdomain systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 02 07:25:28 np0005541914.localdomain sshd[23755]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:25:28 np0005541914.localdomain sshd[23755]: Accepted publickey for zuul from 38.102.83.114 port 47396 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 07:25:28 np0005541914.localdomain systemd-logind[760]: New session 12 of user zuul.
Dec 02 07:25:28 np0005541914.localdomain systemd[1]: Started Session 12 of User zuul.
Dec 02 07:25:28 np0005541914.localdomain sshd[23755]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 07:25:28 np0005541914.localdomain sudo[23772]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yxgppjknyufyusomtvewigvzlbfdvqfz ; /usr/bin/python3
Dec 02 07:25:28 np0005541914.localdomain sudo[23772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:25:28 np0005541914.localdomain python3[23774]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:25:32 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:25:32 np0005541914.localdomain systemd-rc-local-generator[23819]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:25:32 np0005541914.localdomain systemd-sysv-generator[23822]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:25:32 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:25:32 np0005541914.localdomain systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 02 07:25:32 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:25:32 np0005541914.localdomain systemd-rc-local-generator[23855]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:25:32 np0005541914.localdomain systemd-sysv-generator[23858]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:25:32 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:25:33 np0005541914.localdomain systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 02 07:25:33 np0005541914.localdomain systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 02 07:25:33 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:25:33 np0005541914.localdomain systemd-sysv-generator[23899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:25:33 np0005541914.localdomain systemd-rc-local-generator[23896]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:25:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:25:33 np0005541914.localdomain systemd[1]: Listening on LVM2 poll daemon socket.
Dec 02 07:25:33 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 07:25:33 np0005541914.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 02 07:25:33 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:25:33 np0005541914.localdomain systemd-sysv-generator[23959]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:25:33 np0005541914.localdomain systemd-rc-local-generator[23956]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:25:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:25:33 np0005541914.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 02 07:25:33 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 07:25:34 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 02 07:25:34 np0005541914.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 02 07:25:34 np0005541914.localdomain systemd[1]: run-r35502efc4f0b46f59e35d5704b3f2b6e.service: Deactivated successfully.
Dec 02 07:25:34 np0005541914.localdomain systemd[1]: run-red15ff76a146406ba84677f30b2b9e6b.service: Deactivated successfully.
Dec 02 07:25:34 np0005541914.localdomain sudo[23772]: pam_unix(sudo:session): session closed for user root
Dec 02 07:26:11 np0005541914.localdomain sshd[24546]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:26:12 np0005541914.localdomain sshd[24546]: Invalid user sol from 45.148.10.240 port 58526
Dec 02 07:26:12 np0005541914.localdomain sshd[24546]: Connection closed by invalid user sol 45.148.10.240 port 58526 [preauth]
Dec 02 07:26:34 np0005541914.localdomain sshd[23758]: Received disconnect from 38.102.83.114 port 47396:11: disconnected by user
Dec 02 07:26:34 np0005541914.localdomain sshd[23758]: Disconnected from user zuul 38.102.83.114 port 47396
Dec 02 07:26:34 np0005541914.localdomain sshd[23755]: pam_unix(sshd:session): session closed for user zuul
Dec 02 07:26:34 np0005541914.localdomain systemd[1]: session-12.scope: Deactivated successfully.
Dec 02 07:26:34 np0005541914.localdomain systemd[1]: session-12.scope: Consumed 4.658s CPU time.
Dec 02 07:26:34 np0005541914.localdomain systemd-logind[760]: Session 12 logged out. Waiting for processes to exit.
Dec 02 07:26:34 np0005541914.localdomain systemd-logind[760]: Removed session 12.
Dec 02 07:27:35 np0005541914.localdomain rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 07:28:10 np0005541914.localdomain sshd[24727]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:28:11 np0005541914.localdomain sshd[24727]: Invalid user solana from 45.148.10.240 port 38872
Dec 02 07:28:11 np0005541914.localdomain sshd[24727]: Connection closed by invalid user solana 45.148.10.240 port 38872 [preauth]
Dec 02 07:30:03 np0005541914.localdomain sshd[24729]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:30:03 np0005541914.localdomain sshd[24729]: Invalid user solana from 45.148.10.240 port 32982
Dec 02 07:30:03 np0005541914.localdomain sshd[24729]: Connection closed by invalid user solana 45.148.10.240 port 32982 [preauth]
Dec 02 07:32:03 np0005541914.localdomain sshd[24732]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:32:03 np0005541914.localdomain sshd[24732]: Invalid user sol from 45.148.10.240 port 37106
Dec 02 07:32:03 np0005541914.localdomain sshd[24732]: Connection closed by invalid user sol 45.148.10.240 port 37106 [preauth]
Dec 02 07:34:11 np0005541914.localdomain sshd[24736]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:34:11 np0005541914.localdomain sshd[24736]: Invalid user sol from 45.148.10.240 port 55502
Dec 02 07:34:11 np0005541914.localdomain sshd[24736]: Connection closed by invalid user sol 45.148.10.240 port 55502 [preauth]
Dec 02 07:35:01 np0005541914.localdomain anacron[6721]: Job `cron.daily' started
Dec 02 07:35:01 np0005541914.localdomain anacron[6721]: Job `cron.daily' terminated
Dec 02 07:36:13 np0005541914.localdomain sshd[24740]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:36:13 np0005541914.localdomain sshd[24740]: Invalid user solana from 45.148.10.240 port 57726
Dec 02 07:36:13 np0005541914.localdomain sshd[24740]: Connection closed by invalid user solana 45.148.10.240 port 57726 [preauth]
Dec 02 07:38:13 np0005541914.localdomain sshd[24743]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:38:13 np0005541914.localdomain sshd[24743]: Invalid user solana from 45.148.10.240 port 44244
Dec 02 07:38:13 np0005541914.localdomain sshd[24743]: Connection closed by invalid user solana 45.148.10.240 port 44244 [preauth]
Dec 02 07:40:15 np0005541914.localdomain sshd[24747]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:40:15 np0005541914.localdomain sshd[24747]: Invalid user solana from 45.148.10.240 port 35348
Dec 02 07:40:15 np0005541914.localdomain sshd[24747]: Connection closed by invalid user solana 45.148.10.240 port 35348 [preauth]
Dec 02 07:40:45 np0005541914.localdomain sshd[24749]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:40:48 np0005541914.localdomain sshd[24749]: Invalid user ubuntu from 103.52.115.25 port 41728
Dec 02 07:40:48 np0005541914.localdomain sshd[24749]: Received disconnect from 103.52.115.25 port 41728:11: Bye Bye [preauth]
Dec 02 07:40:48 np0005541914.localdomain sshd[24749]: Disconnected from invalid user ubuntu 103.52.115.25 port 41728 [preauth]
Dec 02 07:41:46 np0005541914.localdomain sshd[24751]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:41:47 np0005541914.localdomain sshd[24751]: Invalid user develop from 182.253.156.173 port 48264
Dec 02 07:41:47 np0005541914.localdomain sshd[24751]: Received disconnect from 182.253.156.173 port 48264:11: Bye Bye [preauth]
Dec 02 07:41:47 np0005541914.localdomain sshd[24751]: Disconnected from invalid user develop 182.253.156.173 port 48264 [preauth]
Dec 02 07:41:57 np0005541914.localdomain sshd[24753]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:42:13 np0005541914.localdomain sshd[24754]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:42:13 np0005541914.localdomain sshd[24754]: Invalid user sol from 45.148.10.240 port 33396
Dec 02 07:42:13 np0005541914.localdomain sshd[24754]: Connection closed by invalid user sol 45.148.10.240 port 33396 [preauth]
Dec 02 07:42:20 np0005541914.localdomain sshd[24756]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:42:20 np0005541914.localdomain sshd[24756]: Accepted publickey for zuul from 192.168.122.100 port 55506 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 07:42:20 np0005541914.localdomain systemd-logind[760]: New session 13 of user zuul.
Dec 02 07:42:20 np0005541914.localdomain systemd[1]: Started Session 13 of User zuul.
Dec 02 07:42:20 np0005541914.localdomain sshd[24756]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 07:42:20 np0005541914.localdomain sudo[24802]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wohgieenayhdtrtcujalewdotsquasfa ; /usr/bin/python3
Dec 02 07:42:20 np0005541914.localdomain sudo[24802]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:21 np0005541914.localdomain python3[24804]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 07:42:21 np0005541914.localdomain sudo[24802]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:22 np0005541914.localdomain sudo[24889]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-syzawspuylenzziiqkgnqvxwxbvzmfzr ; /usr/bin/python3
Dec 02 07:42:22 np0005541914.localdomain sudo[24889]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:23 np0005541914.localdomain python3[24891]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:42:25 np0005541914.localdomain sudo[24889]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:26 np0005541914.localdomain sudo[24906]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vemcenfvuskwyvvckwqkqcksbqllhyor ; /usr/bin/python3
Dec 02 07:42:26 np0005541914.localdomain sudo[24906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:26 np0005541914.localdomain python3[24908]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:42:26 np0005541914.localdomain sudo[24906]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:26 np0005541914.localdomain sudo[24922]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bhunmtetzzrvqihyaohznopzndwfeieo ; /usr/bin/python3
Dec 02 07:42:26 np0005541914.localdomain sudo[24922]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:26 np0005541914.localdomain python3[24924]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                                         losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                                         lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:42:27 np0005541914.localdomain kernel: loop: module loaded
Dec 02 07:42:27 np0005541914.localdomain kernel: loop3: detected capacity change from 0 to 14680064
Dec 02 07:42:27 np0005541914.localdomain sudo[24922]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:27 np0005541914.localdomain sudo[24947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kibtukdtzoutwtfvryaivqqxtjkgkhum ; /usr/bin/python3
Dec 02 07:42:27 np0005541914.localdomain sudo[24947]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:27 np0005541914.localdomain python3[24949]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                                         vgcreate ceph_vg0 /dev/loop3
                                                         lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                                         lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
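Taken together, the two command blocks above build a file-backed block device for the first Ceph OSD: a sparse 7G image, a loop device on top of it, and an LVM stack that consumes it. As a single sketch, with the device and file names copied from the log:

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G   # sparse 7 GiB backing file
    losetup /dev/loop3 /var/lib/ceph-osd-0.img                        # expose it as /dev/loop3
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0                        # one LV spanning the whole VG
    lvs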
Dec 02 07:42:27 np0005541914.localdomain lvm[24952]: PV /dev/loop3 not used.
Dec 02 07:42:27 np0005541914.localdomain lvm[24954]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 02 07:42:27 np0005541914.localdomain systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 02 07:42:27 np0005541914.localdomain lvm[24963]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 02 07:42:27 np0005541914.localdomain systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 02 07:42:27 np0005541914.localdomain sudo[24947]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:28 np0005541914.localdomain sudo[25009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hjqwegoqghefkzfxwkrwyrmochlrjiur ; /usr/bin/python3
Dec 02 07:42:28 np0005541914.localdomain sudo[25009]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:28 np0005541914.localdomain python3[25011]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:42:28 np0005541914.localdomain sudo[25009]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:28 np0005541914.localdomain sudo[25052]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqpbcqnfmwitrlhlpesmmmgyoewhgqxk ; /usr/bin/python3
Dec 02 07:42:28 np0005541914.localdomain sudo[25052]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:28 np0005541914.localdomain python3[25054]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764661348.0403972-53936-81282343706678/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:42:28 np0005541914.localdomain sudo[25052]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:29 np0005541914.localdomain sudo[25082]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nkwhpopqzxuqnoyltrsopzzhbnsxglbu ; /usr/bin/python3
Dec 02 07:42:29 np0005541914.localdomain sudo[25082]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:29 np0005541914.localdomain python3[25084]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 07:42:30 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:42:30 np0005541914.localdomain systemd-rc-local-generator[25114]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:42:30 np0005541914.localdomain systemd-sysv-generator[25117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:42:30 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:42:30 np0005541914.localdomain systemd[1]: Starting Ceph OSD losetup...
Dec 02 07:42:30 np0005541914.localdomain bash[25125]: /dev/loop3: [64516]:8399529 (/var/lib/ceph-osd-0.img)
Dec 02 07:42:31 np0005541914.localdomain systemd[1]: Finished Ceph OSD losetup.
Dec 02 07:42:31 np0005541914.localdomain lvm[25127]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 02 07:42:31 np0005541914.localdomain lvm[25127]: VG ceph_vg0 finished
Dec 02 07:42:31 np0005541914.localdomain sudo[25082]: pam_unix(sudo:session): session closed for user root
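The "Ceph OSD losetup" unit started above was rendered from the ceph-osd-losetup.service.j2 template copied to /etc/systemd/system/ceph-osd-losetup-0.service a few lines earlier; the template itself is not captured in this log. Judging from the bash output it emits, it plausibly just re-attaches the backing file at boot, roughly:

    # hypothetical body of the unit's start action; not the actual template contents
    losetup /dev/loop3 /var/lib/ceph-osd-0.img || true   # attach if not already attached
    losetup -j /var/lib/ceph-osd-0.img                    # prints e.g. /dev/loop3: [...] (/var/lib/ceph-osd-0.img)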
Dec 02 07:42:31 np0005541914.localdomain sudo[25142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lhbwdfrpiveswxxmstnmsmqxpakmperi ; /usr/bin/python3
Dec 02 07:42:31 np0005541914.localdomain sudo[25142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:31 np0005541914.localdomain python3[25144]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:42:34 np0005541914.localdomain sudo[25142]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:34 np0005541914.localdomain sudo[25159]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fcfushlxlqnnjrqehibhhxugwubjrbzd ; /usr/bin/python3
Dec 02 07:42:34 np0005541914.localdomain sudo[25159]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:34 np0005541914.localdomain python3[25161]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:42:34 np0005541914.localdomain sudo[25159]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:35 np0005541914.localdomain sudo[25175]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kexayvgwvfvpzqxfuklasjfmoclfmssr ; /usr/bin/python3
Dec 02 07:42:35 np0005541914.localdomain sudo[25175]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:35 np0005541914.localdomain python3[25177]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G
                                                         losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                                         lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:42:35 np0005541914.localdomain kernel: loop4: detected capacity change from 0 to 14680064
Dec 02 07:42:35 np0005541914.localdomain sudo[25175]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:35 np0005541914.localdomain sudo[25197]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfneotrgqpndhylloxzasklfldnivkbg ; /usr/bin/python3
Dec 02 07:42:35 np0005541914.localdomain sudo[25197]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:36 np0005541914.localdomain python3[25199]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                                         vgcreate ceph_vg1 /dev/loop4
                                                         lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                                         lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:42:36 np0005541914.localdomain lvm[25202]: PV /dev/loop4 not used.
Dec 02 07:42:36 np0005541914.localdomain lvm[25204]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 02 07:42:36 np0005541914.localdomain systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 02 07:42:36 np0005541914.localdomain lvm[25213]:   1 logical volume(s) in volume group "ceph_vg1" now active
Dec 02 07:42:36 np0005541914.localdomain lvm[25215]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 02 07:42:36 np0005541914.localdomain lvm[25215]: VG ceph_vg1 finished
Dec 02 07:42:36 np0005541914.localdomain systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 02 07:42:36 np0005541914.localdomain sudo[25197]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:36 np0005541914.localdomain sudo[25262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-estypolugxtcygoqjjakpduitdmtlayr ; /usr/bin/python3
Dec 02 07:42:36 np0005541914.localdomain sudo[25262]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:36 np0005541914.localdomain python3[25264]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:42:36 np0005541914.localdomain sudo[25262]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:37 np0005541914.localdomain sudo[25305]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qiyweqzufjlqqrwyqkdigllwpwlxsayb ; /usr/bin/python3
Dec 02 07:42:37 np0005541914.localdomain sudo[25305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:37 np0005541914.localdomain python3[25307]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764661356.5437486-54106-3564664512598/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:42:37 np0005541914.localdomain sudo[25305]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:37 np0005541914.localdomain sudo[25335]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xdkqbwmododlcidclbmacslcsqnmslfy ; /usr/bin/python3
Dec 02 07:42:37 np0005541914.localdomain sudo[25335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:37 np0005541914.localdomain python3[25337]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 07:42:37 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:42:38 np0005541914.localdomain systemd-rc-local-generator[25364]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:42:38 np0005541914.localdomain systemd-sysv-generator[25371]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:42:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:42:38 np0005541914.localdomain systemd[1]: Starting Ceph OSD losetup...
Dec 02 07:42:38 np0005541914.localdomain bash[25379]: /dev/loop4: [64516]:8606979 (/var/lib/ceph-osd-1.img)
Dec 02 07:42:38 np0005541914.localdomain systemd[1]: Finished Ceph OSD losetup.
Dec 02 07:42:38 np0005541914.localdomain lvm[25380]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 02 07:42:38 np0005541914.localdomain lvm[25380]: VG ceph_vg1 finished
Dec 02 07:42:38 np0005541914.localdomain sudo[25335]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:45 np0005541914.localdomain sshd[25381]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:42:46 np0005541914.localdomain sshd[25412]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:42:46 np0005541914.localdomain sudo[25426]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmcdtafigpmjyryeppdbhashlfduzhdl ; /usr/bin/python3
Dec 02 07:42:46 np0005541914.localdomain sudo[25426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:46 np0005541914.localdomain sshd[25412]: error: kex_exchange_identification: Connection closed by remote host
Dec 02 07:42:46 np0005541914.localdomain sshd[25412]: Connection closed by 43.225.159.111 port 49846
Dec 02 07:42:46 np0005541914.localdomain python3[25428]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 07:42:46 np0005541914.localdomain sudo[25426]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:47 np0005541914.localdomain sudo[25446]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ctahkcbmsemhgjelxspjpkzmhoywhckr ; /usr/bin/python3
Dec 02 07:42:47 np0005541914.localdomain sudo[25446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:48 np0005541914.localdomain python3[25448]: ansible-hostname Invoked with name=np0005541914.localdomain use=None
Dec 02 07:42:48 np0005541914.localdomain systemd[1]: Starting Hostname Service...
Dec 02 07:42:48 np0005541914.localdomain systemd[1]: Started Hostname Service.
Dec 02 07:42:48 np0005541914.localdomain sudo[25446]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:55 np0005541914.localdomain sudo[25469]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nmevzncsprqqrujgvbvifiipyfgjwraa ; /usr/bin/python3
Dec 02 07:42:55 np0005541914.localdomain sudo[25469]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:55 np0005541914.localdomain python3[25471]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Dec 02 07:42:55 np0005541914.localdomain sudo[25469]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:55 np0005541914.localdomain sudo[25517]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bkckobeazcchmfjfyizjixdtesqbckqt ; /usr/bin/python3
Dec 02 07:42:55 np0005541914.localdomain sudo[25517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:56 np0005541914.localdomain python3[25519]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.1a22a8zctmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:42:56 np0005541914.localdomain sudo[25517]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:56 np0005541914.localdomain sudo[25547]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hpdvopqwbnxowguauxvvacratenketse ; /usr/bin/python3
Dec 02 07:42:56 np0005541914.localdomain sudo[25547]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:56 np0005541914.localdomain python3[25549]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.1a22a8zctmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:42:56 np0005541914.localdomain sudo[25547]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:57 np0005541914.localdomain sudo[25563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-upqjutnsliezkpworfuqgkvwzqzcjkgh ; /usr/bin/python3
Dec 02 07:42:57 np0005541914.localdomain sudo[25563]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:57 np0005541914.localdomain python3[25565]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.1a22a8zctmphosts insertbefore=BOF block=192.168.122.106 np0005541912.localdomain np0005541912
                                                         192.168.122.106 np0005541912.ctlplane.localdomain np0005541912.ctlplane
                                                         192.168.122.107 np0005541913.localdomain np0005541913
                                                         192.168.122.107 np0005541913.ctlplane.localdomain np0005541913.ctlplane
                                                         192.168.122.108 np0005541914.localdomain np0005541914
                                                         192.168.122.108 np0005541914.ctlplane.localdomain np0005541914.ctlplane
                                                         192.168.122.103 np0005541909.localdomain np0005541909
                                                         192.168.122.103 np0005541909.ctlplane.localdomain np0005541909.ctlplane
                                                         192.168.122.104 np0005541910.localdomain np0005541910
                                                         192.168.122.104 np0005541910.ctlplane.localdomain np0005541910.ctlplane
                                                         192.168.122.105 np0005541911.localdomain np0005541911
                                                         192.168.122.105 np0005541911.ctlplane.localdomain np0005541911.ctlplane
                                                         
                                                         192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
                                                          marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:42:57 np0005541914.localdomain sudo[25563]: pam_unix(sudo:session): session closed for user root
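After the blockinfile task above, the temporary hosts file (and, once copied to /etc/hosts below, the real one) should begin with a marker-delimited block of the following shape; the marker lines are derived from the marker/marker_begin/marker_end parameters, and the entries are those listed verbatim in the invocation:

    # START_HOST_ENTRIES_FOR_STACK: overcloud
    192.168.122.106 np0005541912.localdomain np0005541912
    192.168.122.106 np0005541912.ctlplane.localdomain np0005541912.ctlplane
    ... (remaining entries exactly as listed in the invocation above) ...
    192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
    # END_HOST_ENTRIES_FOR_STACK: overcloud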
Dec 02 07:42:57 np0005541914.localdomain sudo[25579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-maizilcbnuseldzawachadnjejxpzmcu ; /usr/bin/python3
Dec 02 07:42:57 np0005541914.localdomain sudo[25579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:57 np0005541914.localdomain python3[25581]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.1a22a8zctmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:42:57 np0005541914.localdomain sudo[25579]: pam_unix(sudo:session): session closed for user root
Dec 02 07:42:58 np0005541914.localdomain sudo[25596]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ywzkilcwxubjflyylhdifvefpewncpjm ; /usr/bin/python3
Dec 02 07:42:58 np0005541914.localdomain sudo[25596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:42:58 np0005541914.localdomain python3[25598]: ansible-file Invoked with path=/tmp/ansible.1a22a8zctmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:42:58 np0005541914.localdomain sudo[25596]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:00 np0005541914.localdomain sudo[25612]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-usoypmfyrzcethajisgzbncniucdqflb ; /usr/bin/python3
Dec 02 07:43:00 np0005541914.localdomain sudo[25612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:00 np0005541914.localdomain python3[25614]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:43:00 np0005541914.localdomain sudo[25612]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:00 np0005541914.localdomain sudo[25630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mfryfwzuudtcjmojqjiqmcditwcrtwcc ; /usr/bin/python3
Dec 02 07:43:00 np0005541914.localdomain sudo[25630]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:01 np0005541914.localdomain python3[25632]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:43:03 np0005541914.localdomain sudo[25630]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:05 np0005541914.localdomain sudo[25679]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfzkcgjtroxqqxspvpkqqfdqklguvxse ; /usr/bin/python3
Dec 02 07:43:05 np0005541914.localdomain sudo[25679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:05 np0005541914.localdomain python3[25681]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:43:05 np0005541914.localdomain sudo[25679]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:05 np0005541914.localdomain sudo[25724]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zixewocwyakqteomsukhgszgkpopkgnc ; /usr/bin/python3
Dec 02 07:43:05 np0005541914.localdomain sudo[25724]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:06 np0005541914.localdomain python3[25726]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764661385.0833302-55052-279488438052816/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:43:06 np0005541914.localdomain sudo[25724]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:07 np0005541914.localdomain sudo[25754]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vnkpeozmgjbozvrsabmfwvnnnbqyuxcp ; /usr/bin/python3
Dec 02 07:43:07 np0005541914.localdomain sudo[25754]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:07 np0005541914.localdomain python3[25756]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 07:43:07 np0005541914.localdomain sudo[25754]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:07 np0005541914.localdomain sudo[25772]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahfcljpqtsqmcfzsgukaiomgmqetoirp ; /usr/bin/python3
Dec 02 07:43:07 np0005541914.localdomain sudo[25772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:08 np0005541914.localdomain python3[25774]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 07:43:08 np0005541914.localdomain chronyd[766]: chronyd exiting
Dec 02 07:43:08 np0005541914.localdomain systemd[1]: Stopping NTP client/server...
Dec 02 07:43:08 np0005541914.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Dec 02 07:43:08 np0005541914.localdomain systemd[1]: Stopped NTP client/server.
Dec 02 07:43:08 np0005541914.localdomain systemd[1]: chronyd.service: Consumed 129ms CPU time, read 1.9M from disk, written 0B to disk.
Dec 02 07:43:08 np0005541914.localdomain systemd[1]: Starting NTP client/server...
Dec 02 07:43:08 np0005541914.localdomain chronyd[25782]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 02 07:43:08 np0005541914.localdomain chronyd[25782]: Frequency -30.274 +/- 0.056 ppm read from /var/lib/chrony/drift
Dec 02 07:43:08 np0005541914.localdomain chronyd[25782]: Loaded seccomp filter (level 2)
Dec 02 07:43:08 np0005541914.localdomain systemd[1]: Started NTP client/server.
Dec 02 07:43:08 np0005541914.localdomain sudo[25772]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:08 np0005541914.localdomain sudo[25829]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxwfokphkxqlddeupgixqkfiavpdpnye ; /usr/bin/python3
Dec 02 07:43:08 np0005541914.localdomain sudo[25829]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:08 np0005541914.localdomain python3[25831]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:43:08 np0005541914.localdomain sudo[25829]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:09 np0005541914.localdomain sudo[25872]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zpzrrzqyqhpzwagrdhqdvgajjwsaivse ; /usr/bin/python3
Dec 02 07:43:09 np0005541914.localdomain sudo[25872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:09 np0005541914.localdomain python3[25874]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764661388.6881104-55196-171937319089839/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:43:09 np0005541914.localdomain sudo[25872]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:09 np0005541914.localdomain sudo[25902]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ozlicypzyjidrnuwvufpyhskvqesaxyc ; /usr/bin/python3
Dec 02 07:43:09 np0005541914.localdomain sudo[25902]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:09 np0005541914.localdomain python3[25904]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 07:43:09 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:43:10 np0005541914.localdomain systemd-rc-local-generator[25930]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:43:10 np0005541914.localdomain systemd-sysv-generator[25933]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:43:10 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:43:10 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:43:10 np0005541914.localdomain systemd-rc-local-generator[25969]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:43:10 np0005541914.localdomain systemd-sysv-generator[25974]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:43:10 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:43:10 np0005541914.localdomain systemd[1]: Starting chronyd online sources service...
Dec 02 07:43:10 np0005541914.localdomain chronyc[25980]: 200 OK
Dec 02 07:43:10 np0005541914.localdomain systemd[1]: chrony-online.service: Deactivated successfully.
Dec 02 07:43:10 np0005541914.localdomain systemd[1]: Finished chronyd online sources service.
Dec 02 07:43:10 np0005541914.localdomain sudo[25902]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:11 np0005541914.localdomain sudo[25994]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kntjnatazyzqszwyzbmdwzxbucqqtlxy ; /usr/bin/python3
Dec 02 07:43:11 np0005541914.localdomain sudo[25994]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:11 np0005541914.localdomain python3[25996]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:43:11 np0005541914.localdomain chronyd[25782]: System clock was stepped by -0.000000 seconds
Dec 02 07:43:11 np0005541914.localdomain sudo[25994]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:11 np0005541914.localdomain sudo[26011]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ypztvonymsblpmdpakrslnatuzwhmnnb ; /usr/bin/python3
Dec 02 07:43:11 np0005541914.localdomain sudo[26011]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:11 np0005541914.localdomain python3[26013]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:43:12 np0005541914.localdomain chronyd[25782]: Selected source 51.222.12.92 (pool.ntp.org)
Dec 02 07:43:18 np0005541914.localdomain systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 02 07:43:21 np0005541914.localdomain sudo[26011]: pam_unix(sudo:session): session closed for user root
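The makestep/waitsync pair above is the usual way to force and then confirm synchronisation after rewriting /etc/chrony.conf and restarting chronyd; as a standalone sketch (commands verbatim from the log, the exit-status echo added here for illustration):

    chronyc makestep          # step the clock immediately instead of slewing
    chronyc waitsync 30       # poll up to 30 times (default 10 s apart) until synchronised
    echo "chrony sync status: $?"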
Dec 02 07:43:21 np0005541914.localdomain sudo[26030]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-erkxrcopftmgyddejdmzgwyvsxfibrsb ; /usr/bin/python3
Dec 02 07:43:21 np0005541914.localdomain sudo[26030]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:22 np0005541914.localdomain python3[26032]: ansible-timezone Invoked with name=UTC hwclock=None
Dec 02 07:43:22 np0005541914.localdomain systemd[1]: Starting Time & Date Service...
Dec 02 07:43:22 np0005541914.localdomain systemd[1]: Started Time & Date Service.
Dec 02 07:43:22 np0005541914.localdomain sudo[26030]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:22 np0005541914.localdomain sudo[26050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tuqbizwpmrfdabdrymyjrahcsxoaetpb ; /usr/bin/python3
Dec 02 07:43:22 np0005541914.localdomain sudo[26050]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:22 np0005541914.localdomain sshd[26053]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:43:23 np0005541914.localdomain python3[26052]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 07:43:23 np0005541914.localdomain chronyd[25782]: chronyd exiting
Dec 02 07:43:23 np0005541914.localdomain systemd[1]: Stopping NTP client/server...
Dec 02 07:43:23 np0005541914.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Dec 02 07:43:23 np0005541914.localdomain systemd[1]: Stopped NTP client/server.
Dec 02 07:43:23 np0005541914.localdomain systemd[1]: Starting NTP client/server...
Dec 02 07:43:23 np0005541914.localdomain chronyd[26062]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 02 07:43:23 np0005541914.localdomain chronyd[26062]: Frequency -30.274 +/- 0.061 ppm read from /var/lib/chrony/drift
Dec 02 07:43:23 np0005541914.localdomain chronyd[26062]: Loaded seccomp filter (level 2)
Dec 02 07:43:23 np0005541914.localdomain systemd[1]: Started NTP client/server.
Dec 02 07:43:23 np0005541914.localdomain sudo[26050]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:24 np0005541914.localdomain sshd[26053]: Received disconnect from 103.52.115.25 port 39600:11: Bye Bye [preauth]
Dec 02 07:43:24 np0005541914.localdomain sshd[26053]: Disconnected from authenticating user root 103.52.115.25 port 39600 [preauth]
Dec 02 07:43:27 np0005541914.localdomain chronyd[26062]: Selected source 51.222.12.92 (pool.ntp.org)
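The block ending here records the Ansible time-sync play: a chronyc waitsync check, the ansible-timezone task setting UTC, and a chronyd restart, after which chronyd selects 51.222.12.92 again. A minimal shell sketch of the equivalent manual steps, assuming the Ansible modules map to these commands (the modules' exact implementation is not visible in the journal):
    chronyc waitsync 30            # wait until chronyd reports it is synchronised, as in the ansible.legacy.command task
    timedatectl set-timezone UTC   # assumed equivalent of the ansible-timezone name=UTC task
    systemctl restart chronyd      # the ansible.legacy.systemd state=restarted step
    chronyc sources -v             # confirm a source such as 51.222.12.92 is selected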
Dec 02 07:43:36 np0005541914.localdomain sshd[26064]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:43:37 np0005541914.localdomain sshd[26064]: Invalid user ben from 182.253.156.173 port 51774
Dec 02 07:43:37 np0005541914.localdomain sshd[26064]: Received disconnect from 182.253.156.173 port 51774:11: Bye Bye [preauth]
Dec 02 07:43:37 np0005541914.localdomain sshd[26064]: Disconnected from invalid user ben 182.253.156.173 port 51774 [preauth]
Dec 02 07:43:39 np0005541914.localdomain sudo[26079]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nfvjuiajxgqwbegfxvapogyhizhxyixn ; /usr/bin/python3
Dec 02 07:43:39 np0005541914.localdomain sudo[26079]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:39 np0005541914.localdomain useradd[26083]: new group: name=ceph-admin, GID=1002
Dec 02 07:43:39 np0005541914.localdomain useradd[26083]: new user: name=ceph-admin, UID=1002, GID=1002, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 02 07:43:39 np0005541914.localdomain sudo[26079]: pam_unix(sudo:session): session closed for user root
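The useradd entries above show the ceph-admin bootstrap account being created with UID/GID 1002, home /home/ceph-admin and shell /bin/bash. A minimal sketch of the equivalent commands, assuming the Ansible user module behaves like plain groupadd/useradd here:
    groupadd ceph-admin
    useradd -m -g ceph-admin -s /bin/bash ceph-admin   # UID/GID 1002 in this run, as logged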
Dec 02 07:43:39 np0005541914.localdomain sudo[26135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ihxjkiavbpamltuiwgkrfepdtgkknvml ; /usr/bin/python3
Dec 02 07:43:39 np0005541914.localdomain sudo[26135]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:39 np0005541914.localdomain sudo[26135]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:40 np0005541914.localdomain sudo[26178]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-srpsczosbgkohwxcjwgvkxrwsvtylnxr ; /usr/bin/python3
Dec 02 07:43:40 np0005541914.localdomain sudo[26178]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:40 np0005541914.localdomain sudo[26178]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:40 np0005541914.localdomain sudo[26208]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-icshnssyifceeohlyygkmozyuplwzcdn ; /usr/bin/python3
Dec 02 07:43:40 np0005541914.localdomain sudo[26208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:40 np0005541914.localdomain sudo[26208]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:41 np0005541914.localdomain sudo[26224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ohqavzarfowsodsrrtsqzktkjrycilwp ; /usr/bin/python3
Dec 02 07:43:41 np0005541914.localdomain sudo[26224]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:41 np0005541914.localdomain sudo[26224]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:41 np0005541914.localdomain sudo[26240]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-caamfrtxgnwndmhdysfpqhmlhitkyfbw ; /usr/bin/python3
Dec 02 07:43:41 np0005541914.localdomain sudo[26240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:41 np0005541914.localdomain sudo[26240]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:42 np0005541914.localdomain sudo[26256]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btbigfktxmzylyieuksqowbjsfwswdek ; /usr/bin/python3
Dec 02 07:43:42 np0005541914.localdomain sudo[26256]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:43:42 np0005541914.localdomain sudo[26256]: pam_unix(sudo:session): session closed for user root
Dec 02 07:43:52 np0005541914.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 02 07:43:57 np0005541914.localdomain sshd[24753]: fatal: Timeout before authentication for 14.103.168.81 port 43240
Dec 02 07:44:09 np0005541914.localdomain sshd[26261]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:44:09 np0005541914.localdomain sshd[26261]: Invalid user sol from 45.148.10.240 port 53746
Dec 02 07:44:09 np0005541914.localdomain sshd[26261]: Connection closed by invalid user sol 45.148.10.240 port 53746 [preauth]
Dec 02 07:44:43 np0005541914.localdomain sshd[26263]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:44:45 np0005541914.localdomain sshd[26263]: Invalid user cloudera from 103.52.115.25 port 34530
Dec 02 07:44:45 np0005541914.localdomain sshd[26263]: Received disconnect from 103.52.115.25 port 34530:11: Bye Bye [preauth]
Dec 02 07:44:45 np0005541914.localdomain sshd[26263]: Disconnected from invalid user cloudera 103.52.115.25 port 34530 [preauth]
Dec 02 07:44:45 np0005541914.localdomain sshd[25381]: fatal: Timeout before authentication for 122.114.69.235 port 43136
Dec 02 07:44:54 np0005541914.localdomain sshd[26265]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:44:55 np0005541914.localdomain sshd[26265]: Invalid user tmp from 182.253.156.173 port 39386
Dec 02 07:44:55 np0005541914.localdomain sshd[26265]: Received disconnect from 182.253.156.173 port 39386:11: Bye Bye [preauth]
Dec 02 07:44:55 np0005541914.localdomain sshd[26265]: Disconnected from invalid user tmp 182.253.156.173 port 39386 [preauth]
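The recurring "main: sshd: ssh-rsa algorithm is disabled" lines, together with the Invalid user / Disconnected entries from addresses such as 182.253.156.173 and 103.52.115.25, are external password-guessing attempts hitting the node's public address; the algorithm message appears because the RHEL 9 system-wide crypto policy typically excludes SHA-1 ssh-rsa signatures. A hedged way to inspect that policy (commands assumed, not taken from this log):
    update-crypto-policies --show
    sshd -T | grep -iE 'pubkeyaccepted|hostkeyalgorithms'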
Dec 02 07:45:20 np0005541914.localdomain sshd[26267]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:45:29 np0005541914.localdomain sshd[26268]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:45:29 np0005541914.localdomain sshd[26268]: Accepted publickey for ceph-admin from 192.168.122.103 port 49076 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 07:45:29 np0005541914.localdomain systemd-logind[760]: New session 14 of user ceph-admin.
Dec 02 07:45:29 np0005541914.localdomain systemd[1]: Created slice User Slice of UID 1002.
Dec 02 07:45:29 np0005541914.localdomain systemd[1]: Starting User Runtime Directory /run/user/1002...
Dec 02 07:45:29 np0005541914.localdomain systemd[1]: Finished User Runtime Directory /run/user/1002.
Dec 02 07:45:29 np0005541914.localdomain systemd[1]: Starting User Manager for UID 1002...
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 07:45:29 np0005541914.localdomain sshd[26286]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Queued start job for default target Main User Target.
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Created slice User Application Slice.
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Started Daily Cleanup of User's Temporary Directories.
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Reached target Paths.
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Reached target Timers.
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Starting D-Bus User Message Bus Socket...
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Starting Create User's Volatile Files and Directories...
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Listening on D-Bus User Message Bus Socket.
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Reached target Sockets.
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Finished Create User's Volatile Files and Directories.
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Reached target Basic System.
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Reached target Main User Target.
Dec 02 07:45:29 np0005541914.localdomain systemd[26272]: Startup finished in 122ms.
Dec 02 07:45:29 np0005541914.localdomain systemd[1]: Started User Manager for UID 1002.
Dec 02 07:45:29 np0005541914.localdomain systemd[1]: Started Session 14 of User ceph-admin.
Dec 02 07:45:29 np0005541914.localdomain sshd[26286]: Accepted publickey for ceph-admin from 192.168.122.103 port 49090 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 07:45:29 np0005541914.localdomain sshd[26268]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 07:45:29 np0005541914.localdomain systemd-logind[760]: New session 16 of user ceph-admin.
Dec 02 07:45:29 np0005541914.localdomain systemd[1]: Started Session 16 of User ceph-admin.
Dec 02 07:45:29 np0005541914.localdomain sshd[26286]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 07:45:29 np0005541914.localdomain sudo[26293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:45:29 np0005541914.localdomain sudo[26293]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:29 np0005541914.localdomain sudo[26293]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:29 np0005541914.localdomain sshd[26308]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:45:30 np0005541914.localdomain sshd[26308]: Accepted publickey for ceph-admin from 192.168.122.103 port 49102 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 07:45:30 np0005541914.localdomain systemd-logind[760]: New session 17 of user ceph-admin.
Dec 02 07:45:30 np0005541914.localdomain systemd[1]: Started Session 17 of User ceph-admin.
Dec 02 07:45:30 np0005541914.localdomain sshd[26308]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 07:45:30 np0005541914.localdomain sudo[26312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host --expect-hostname np0005541914.localdomain
Dec 02 07:45:30 np0005541914.localdomain sudo[26312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:30 np0005541914.localdomain sudo[26312]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:30 np0005541914.localdomain sshd[26327]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:45:30 np0005541914.localdomain sshd[26327]: Accepted publickey for ceph-admin from 192.168.122.103 port 49106 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 07:45:30 np0005541914.localdomain systemd-logind[760]: New session 18 of user ceph-admin.
Dec 02 07:45:30 np0005541914.localdomain systemd[1]: Started Session 18 of User ceph-admin.
Dec 02 07:45:30 np0005541914.localdomain sshd[26327]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 07:45:30 np0005541914.localdomain sudo[26331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3
Dec 02 07:45:30 np0005541914.localdomain sudo[26331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:30 np0005541914.localdomain sudo[26331]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:30 np0005541914.localdomain sshd[26346]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:45:30 np0005541914.localdomain sshd[26346]: Accepted publickey for ceph-admin from 192.168.122.103 port 49110 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 07:45:30 np0005541914.localdomain systemd-logind[760]: New session 19 of user ceph-admin.
Dec 02 07:45:30 np0005541914.localdomain systemd[1]: Started Session 19 of User ceph-admin.
Dec 02 07:45:30 np0005541914.localdomain sshd[26346]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 07:45:30 np0005541914.localdomain sudo[26350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 07:45:30 np0005541914.localdomain sudo[26350]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:30 np0005541914.localdomain sudo[26350]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:31 np0005541914.localdomain sshd[26365]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:45:31 np0005541914.localdomain sshd[26365]: Accepted publickey for ceph-admin from 192.168.122.103 port 49118 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 07:45:31 np0005541914.localdomain systemd-logind[760]: New session 20 of user ceph-admin.
Dec 02 07:45:31 np0005541914.localdomain systemd[1]: Started Session 20 of User ceph-admin.
Dec 02 07:45:31 np0005541914.localdomain sshd[26365]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 07:45:31 np0005541914.localdomain sudo[26369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 07:45:31 np0005541914.localdomain sudo[26369]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:31 np0005541914.localdomain sudo[26369]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:31 np0005541914.localdomain sshd[26384]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:45:31 np0005541914.localdomain sshd[26384]: Accepted publickey for ceph-admin from 192.168.122.103 port 49132 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 07:45:31 np0005541914.localdomain systemd-logind[760]: New session 21 of user ceph-admin.
Dec 02 07:45:31 np0005541914.localdomain systemd[1]: Started Session 21 of User ceph-admin.
Dec 02 07:45:31 np0005541914.localdomain sshd[26384]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 07:45:31 np0005541914.localdomain sudo[26388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3.new
Dec 02 07:45:31 np0005541914.localdomain sudo[26388]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:31 np0005541914.localdomain sudo[26388]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:31 np0005541914.localdomain sshd[26403]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:45:31 np0005541914.localdomain sshd[26403]: Accepted publickey for ceph-admin from 192.168.122.103 port 45108 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 07:45:31 np0005541914.localdomain systemd-logind[760]: New session 22 of user ceph-admin.
Dec 02 07:45:31 np0005541914.localdomain systemd[1]: Started Session 22 of User ceph-admin.
Dec 02 07:45:31 np0005541914.localdomain sshd[26403]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 07:45:31 np0005541914.localdomain sudo[26407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 07:45:31 np0005541914.localdomain sudo[26407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:31 np0005541914.localdomain sudo[26407]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:32 np0005541914.localdomain sshd[26422]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:45:32 np0005541914.localdomain sshd[26422]: Accepted publickey for ceph-admin from 192.168.122.103 port 45120 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 07:45:32 np0005541914.localdomain systemd-logind[760]: New session 23 of user ceph-admin.
Dec 02 07:45:32 np0005541914.localdomain systemd[1]: Started Session 23 of User ceph-admin.
Dec 02 07:45:32 np0005541914.localdomain sshd[26422]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 07:45:32 np0005541914.localdomain sudo[26426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3.new
Dec 02 07:45:32 np0005541914.localdomain sudo[26426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:32 np0005541914.localdomain sudo[26426]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:32 np0005541914.localdomain sshd[26441]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:45:32 np0005541914.localdomain sshd[26441]: Accepted publickey for ceph-admin from 192.168.122.103 port 45132 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 07:45:32 np0005541914.localdomain systemd-logind[760]: New session 24 of user ceph-admin.
Dec 02 07:45:32 np0005541914.localdomain systemd[1]: Started Session 24 of User ceph-admin.
Dec 02 07:45:32 np0005541914.localdomain sshd[26441]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 07:45:32 np0005541914.localdomain sshd[26458]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:45:33 np0005541914.localdomain sshd[26458]: Accepted publickey for ceph-admin from 192.168.122.103 port 45142 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 07:45:33 np0005541914.localdomain systemd-logind[760]: New session 25 of user ceph-admin.
Dec 02 07:45:33 np0005541914.localdomain systemd[1]: Started Session 25 of User ceph-admin.
Dec 02 07:45:33 np0005541914.localdomain sshd[26458]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 07:45:33 np0005541914.localdomain sudo[26462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3
Dec 02 07:45:33 np0005541914.localdomain sudo[26462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:33 np0005541914.localdomain sudo[26462]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:33 np0005541914.localdomain sshd[26477]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:45:33 np0005541914.localdomain sshd[26477]: Accepted publickey for ceph-admin from 192.168.122.103 port 45146 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 07:45:33 np0005541914.localdomain systemd-logind[760]: New session 26 of user ceph-admin.
Dec 02 07:45:33 np0005541914.localdomain systemd[1]: Started Session 26 of User ceph-admin.
Dec 02 07:45:33 np0005541914.localdomain sshd[26477]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 07:45:33 np0005541914.localdomain sudo[26481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host --expect-hostname np0005541914.localdomain
Dec 02 07:45:33 np0005541914.localdomain sudo[26481]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:45:33 np0005541914.localdomain sudo[26481]: pam_unix(sudo:session): session closed for user root
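The sudo sequence above (mkdir, touch, chown, chmod, mv, then check-host) is the Ceph orchestrator staging its cephadm script onto the host and re-verifying the host afterwards; the file contents themselves are presumably transferred over the ceph-admin ssh session and do not appear in the journal. Condensed, with <fsid> standing in for c7c8e171-a193-56fb-95fa-8879fcfa7074 and <digest> for the long cephadm.<sha256> name:
    mkdir -p /var/lib/ceph/<fsid>
    mkdir -p /tmp/cephadm-<fsid>/var/lib/ceph/<fsid>
    touch /tmp/cephadm-<fsid>/var/lib/ceph/<fsid>/cephadm.<digest>.new
    chown -R ceph-admin /tmp/cephadm-<fsid>
    chmod 644 /tmp/cephadm-<fsid>/var/lib/ceph/<fsid>/cephadm.<digest>.new
    mv /tmp/cephadm-<fsid>/var/lib/ceph/<fsid>/cephadm.<digest>.new /var/lib/ceph/<fsid>/cephadm.<digest>
    python3 /var/lib/ceph/<fsid>/cephadm.<digest> --timeout 895 check-host --expect-hostname np0005541914.localdomain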
Dec 02 07:45:48 np0005541914.localdomain sudo[26517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:45:48 np0005541914.localdomain sudo[26517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:48 np0005541914.localdomain sudo[26517]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:48 np0005541914.localdomain sudo[26532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:45:48 np0005541914.localdomain sudo[26532]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:48 np0005541914.localdomain sudo[26532]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:48 np0005541914.localdomain sudo[26547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 07:45:48 np0005541914.localdomain sudo[26547]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:45:48 np0005541914.localdomain sudo[26547]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:49 np0005541914.localdomain sudo[26584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:45:49 np0005541914.localdomain sudo[26584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:49 np0005541914.localdomain sudo[26584]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:49 np0005541914.localdomain sudo[26599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 07:45:49 np0005541914.localdomain sudo[26599]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:45:49 np0005541914.localdomain sudo[26599]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:49 np0005541914.localdomain sudo[26653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:45:49 np0005541914.localdomain sudo[26653]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:49 np0005541914.localdomain sudo[26653]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:49 np0005541914.localdomain sudo[26668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 07:45:49 np0005541914.localdomain sudo[26668]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:45:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:45:50 np0005541914.localdomain systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 26696 (sysctl)
Dec 02 07:45:50 np0005541914.localdomain systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 02 07:45:50 np0005541914.localdomain systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 02 07:45:50 np0005541914.localdomain sudo[26668]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:50 np0005541914.localdomain sudo[26718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:45:50 np0005541914.localdomain sudo[26718]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:50 np0005541914.localdomain sudo[26718]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:50 np0005541914.localdomain sudo[26733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 02 07:45:50 np0005541914.localdomain sudo[26733]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:45:51 np0005541914.localdomain sudo[26733]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:51 np0005541914.localdomain sudo[26766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:45:51 np0005541914.localdomain sudo[26766]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:51 np0005541914.localdomain sudo[26766]: pam_unix(sudo:session): session closed for user root
Dec 02 07:45:51 np0005541914.localdomain sudo[26781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 -- inventory --format=json-pretty --filter-for-batch
Dec 02 07:45:51 np0005541914.localdomain sudo[26781]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:45:51 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:45:55 np0005541914.localdomain kernel: VFS: idmapped mount is not enabled.
Dec 02 07:46:11 np0005541914.localdomain sshd[26932]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:46:11 np0005541914.localdomain sshd[26932]: Invalid user sol from 45.148.10.240 port 54740
Dec 02 07:46:11 np0005541914.localdomain sshd[26932]: Connection closed by invalid user sol 45.148.10.240 port 54740 [preauth]
Dec 02 07:46:15 np0005541914.localdomain podman[26837]: 
Dec 02 07:46:15 np0005541914.localdomain podman[26837]: 2025-12-02 07:46:15.70425502 +0000 UTC m=+24.066758952 container create 4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_mendeleev, GIT_BRANCH=main, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, build-date=2025-11-26T19:44:28Z, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, release=1763362218, io.openshift.tags=rhceph ceph)
Dec 02 07:46:15 np0005541914.localdomain systemd[1]: Created slice Slice /machine.
Dec 02 07:46:15 np0005541914.localdomain systemd[1]: Started libpod-conmon-4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24.scope.
Dec 02 07:46:15 np0005541914.localdomain podman[26837]: 2025-12-02 07:45:51.68218483 +0000 UTC m=+0.044688782 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:46:15 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:46:15 np0005541914.localdomain podman[26837]: 2025-12-02 07:46:15.836357446 +0000 UTC m=+24.198861408 container init 4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_mendeleev, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.buildah.version=1.41.4, name=rhceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 02 07:46:15 np0005541914.localdomain podman[26837]: 2025-12-02 07:46:15.846225969 +0000 UTC m=+24.208729931 container start 4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_mendeleev, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, ceph=True, name=rhceph, RELEASE=main, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 07:46:15 np0005541914.localdomain podman[26837]: 2025-12-02 07:46:15.846864748 +0000 UTC m=+24.209368750 container attach 4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_mendeleev, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, version=7, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True)
Dec 02 07:46:15 np0005541914.localdomain eager_mendeleev[26940]: 167 167
Dec 02 07:46:15 np0005541914.localdomain systemd[1]: libpod-4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24.scope: Deactivated successfully.
Dec 02 07:46:15 np0005541914.localdomain podman[26837]: 2025-12-02 07:46:15.850174399 +0000 UTC m=+24.212678401 container died 4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_mendeleev, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, name=rhceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 07:46:15 np0005541914.localdomain podman[26945]: 2025-12-02 07:46:15.940818196 +0000 UTC m=+0.078235097 container remove 4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_mendeleev, name=rhceph, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, version=7, RELEASE=main, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 07:46:15 np0005541914.localdomain systemd[1]: libpod-conmon-4a9a86550e72004a2943217285cff6d2ebe46cd9f32d2ef144fd548ca306fc24.scope: Deactivated successfully.
Dec 02 07:46:16 np0005541914.localdomain podman[27026]: 
Dec 02 07:46:16 np0005541914.localdomain podman[27026]: 2025-12-02 07:46:16.143849605 +0000 UTC m=+0.044504705 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:46:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5552358c14b5098670e608db9adf2e42b0f1c9fab4f8eb8fb57ce98f077e09c9-merged.mount: Deactivated successfully.
Dec 02 07:46:17 np0005541914.localdomain sshd[27041]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:46:18 np0005541914.localdomain sshd[27133]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:46:19 np0005541914.localdomain podman[27026]: 2025-12-02 07:46:19.685965999 +0000 UTC m=+3.586621079 container create 6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_cartwright, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, RELEASE=main, version=7, distribution-scope=public)
Dec 02 07:46:19 np0005541914.localdomain sshd[27133]: Received disconnect from 182.253.156.173 port 53428:11: Bye Bye [preauth]
Dec 02 07:46:19 np0005541914.localdomain sshd[27133]: Disconnected from authenticating user root 182.253.156.173 port 53428 [preauth]
Dec 02 07:46:19 np0005541914.localdomain systemd[1]: Started libpod-conmon-6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71.scope.
Dec 02 07:46:19 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:46:20 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f786a354fdf3b857b21088ea03c97b77e80eba87315cef388051689a36f867/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 07:46:20 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2f786a354fdf3b857b21088ea03c97b77e80eba87315cef388051689a36f867/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 07:46:20 np0005541914.localdomain podman[27026]: 2025-12-02 07:46:20.022610311 +0000 UTC m=+3.923265411 container init 6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_cartwright, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, name=rhceph, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, architecture=x86_64, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4)
Dec 02 07:46:20 np0005541914.localdomain podman[27026]: 2025-12-02 07:46:20.033615158 +0000 UTC m=+3.934270258 container start 6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_cartwright, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, distribution-scope=public, name=rhceph, ceph=True, release=1763362218, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_BRANCH=main, RELEASE=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7)
Dec 02 07:46:20 np0005541914.localdomain podman[27026]: 2025-12-02 07:46:20.033892487 +0000 UTC m=+3.934547607 container attach 6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_cartwright, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 07:46:20 np0005541914.localdomain sshd[27041]: Invalid user gits from 103.52.115.25 port 43646
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]: [
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:     {
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:         "available": false,
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:         "ceph_device": false,
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:         "lsm_data": {},
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:         "lvs": [],
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:         "path": "/dev/sr0",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:         "rejected_reasons": [
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "Has a FileSystem",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "Insufficient space (<5GB)"
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:         ],
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:         "sys_api": {
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "actuators": null,
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "device_nodes": "sr0",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "human_readable_size": "482.00 KB",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "id_bus": "ata",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "model": "QEMU DVD-ROM",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "nr_requests": "2",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "partitions": {},
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "path": "/dev/sr0",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "removable": "1",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "rev": "2.5+",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "ro": "0",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "rotational": "1",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "sas_address": "",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "sas_device_handle": "",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "scheduler_mode": "mq-deadline",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "sectors": 0,
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "sectorsize": "2048",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "size": 493568.0,
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "support_discard": "0",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "type": "disk",
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:             "vendor": "QEMU"
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:         }
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]:     }
Dec 02 07:46:20 np0005541914.localdomain admiring_cartwright[27138]: ]
Dec 02 07:46:20 np0005541914.localdomain systemd[1]: libpod-6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71.scope: Deactivated successfully.
Dec 02 07:46:20 np0005541914.localdomain podman[27026]: 2025-12-02 07:46:20.83843907 +0000 UTC m=+4.739094220 container died 6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_cartwright, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, release=1763362218, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public)
Dec 02 07:46:20 np0005541914.localdomain systemd[1]: tmp-crun.6pY4V0.mount: Deactivated successfully.
Dec 02 07:46:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e2f786a354fdf3b857b21088ea03c97b77e80eba87315cef388051689a36f867-merged.mount: Deactivated successfully.
Dec 02 07:46:20 np0005541914.localdomain podman[28523]: 2025-12-02 07:46:20.915303804 +0000 UTC m=+0.066409985 container remove 6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_cartwright, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, name=rhceph, release=1763362218, ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public)
Dec 02 07:46:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:46:20 np0005541914.localdomain systemd[1]: libpod-conmon-6604d46701b9cf5ec9afe73a895835530a5f2bee19ef376f05f1113bfe811c71.scope: Deactivated successfully.
Dec 02 07:46:20 np0005541914.localdomain sudo[26781]: pam_unix(sudo:session): session closed for user root
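The admiring_cartwright output above is the ceph-volume inventory run through cephadm: the only device reported, /dev/sr0 (the QEMU DVD-ROM), is rejected with "Has a FileSystem" and "Insufficient space (<5GB)", so this node offers no OSD-eligible disks in this pass. The invocation, abbreviated with the same <fsid>/<digest> placeholders as above:
    python3 /var/lib/ceph/<fsid>/cephadm.<digest> --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest \
        --timeout 895 ceph-volume --fsid <fsid> -- inventory --format=json-pretty --filter-for-batch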
Dec 02 07:46:21 np0005541914.localdomain sudo[28536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:46:21 np0005541914.localdomain sudo[28536]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:46:21 np0005541914.localdomain sudo[28536]: pam_unix(sudo:session): session closed for user root
Dec 02 07:46:21 np0005541914.localdomain sudo[28551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 _orch set-coredump-overrides --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 --coredump-max-size=32G
Dec 02 07:46:21 np0005541914.localdomain sudo[28551]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:46:21 np0005541914.localdomain systemd[1]: systemd-coredump.socket: Deactivated successfully.
Dec 02 07:46:21 np0005541914.localdomain systemd[1]: Closed Process Core Dump Socket.
Dec 02 07:46:21 np0005541914.localdomain systemd[1]: Stopping Process Core Dump Socket...
Dec 02 07:46:21 np0005541914.localdomain systemd[1]: Listening on Process Core Dump Socket.
Dec 02 07:46:21 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:46:21 np0005541914.localdomain systemd-sysv-generator[28610]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:46:21 np0005541914.localdomain systemd-rc-local-generator[28607]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:46:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:46:21 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:46:21 np0005541914.localdomain sshd[27041]: Received disconnect from 103.52.115.25 port 43646:11: Bye Bye [preauth]
Dec 02 07:46:21 np0005541914.localdomain sshd[27041]: Disconnected from invalid user gits 103.52.115.25 port 43646 [preauth]
Dec 02 07:46:21 np0005541914.localdomain systemd-rc-local-generator[28643]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:46:21 np0005541914.localdomain systemd-sysv-generator[28649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:46:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:46:21 np0005541914.localdomain sudo[28551]: pam_unix(sudo:session): session closed for user root
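The _orch set-coredump-overrides --coredump-max-size=32G call is followed by the Process Core Dump Socket being cycled and two systemd reloads. The journal does not show which files cephadm writes; a plausible manual equivalent, offered purely as an assumption, is a systemd-coredump drop-in plus a reload:
    mkdir -p /etc/systemd/coredump.conf.d
    printf '[Coredump]\nProcessSizeMax=32G\nExternalSizeMax=32G\n' > /etc/systemd/coredump.conf.d/90-cephadm.conf   # assumed file name and keys
    systemctl daemon-reload
    systemctl restart systemd-coredump.socket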
Dec 02 07:46:48 np0005541914.localdomain sudo[28654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:46:48 np0005541914.localdomain sudo[28654]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:46:48 np0005541914.localdomain sudo[28654]: pam_unix(sudo:session): session closed for user root
Dec 02 07:46:48 np0005541914.localdomain sudo[28669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 07:46:48 np0005541914.localdomain sudo[28669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:46:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:46:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:46:49 np0005541914.localdomain podman[28726]: 
Dec 02 07:46:49 np0005541914.localdomain podman[28726]: 2025-12-02 07:46:49.434098717 +0000 UTC m=+0.064215993 container create 8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_chaplygin, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, RELEASE=main)
Dec 02 07:46:49 np0005541914.localdomain systemd[1]: Started libpod-conmon-8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d.scope.
Dec 02 07:46:49 np0005541914.localdomain podman[28726]: 2025-12-02 07:46:49.402854499 +0000 UTC m=+0.032971775 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:46:49 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:46:49 np0005541914.localdomain podman[28726]: 2025-12-02 07:46:49.518629024 +0000 UTC m=+0.148746300 container init 8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_chaplygin, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, architecture=x86_64, release=1763362218, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 07:46:49 np0005541914.localdomain podman[28726]: 2025-12-02 07:46:49.529091651 +0000 UTC m=+0.159208917 container start 8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_chaplygin, GIT_CLEAN=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-type=git, distribution-scope=public, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc.)
Dec 02 07:46:49 np0005541914.localdomain podman[28726]: 2025-12-02 07:46:49.529378522 +0000 UTC m=+0.159495828 container attach 8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_chaplygin, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, release=1763362218)
Dec 02 07:46:49 np0005541914.localdomain modest_chaplygin[28742]: 167 167
Dec 02 07:46:49 np0005541914.localdomain systemd[1]: libpod-8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d.scope: Deactivated successfully.
Dec 02 07:46:49 np0005541914.localdomain podman[28726]: 2025-12-02 07:46:49.535348896 +0000 UTC m=+0.165466152 container died 8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_chaplygin, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-type=git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, distribution-scope=public, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True)
Dec 02 07:46:49 np0005541914.localdomain podman[28747]: 2025-12-02 07:46:49.622648869 +0000 UTC m=+0.075483814 container remove 8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_chaplygin, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.buildah.version=1.41.4, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.tags=rhceph ceph)
Dec 02 07:46:49 np0005541914.localdomain systemd[1]: libpod-conmon-8f361238b069243c719de0076dbc7e14aa1c768958e05f6845d6ff6fc549700d.scope: Deactivated successfully.
Dec 02 07:46:49 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:46:49 np0005541914.localdomain systemd-sysv-generator[28793]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:46:49 np0005541914.localdomain systemd-rc-local-generator[28789]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:46:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:46:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:46:49 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:46:50 np0005541914.localdomain systemd-rc-local-generator[28826]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:46:50 np0005541914.localdomain systemd-sysv-generator[28829]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: Reached target All Ceph clusters and services.
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:46:50 np0005541914.localdomain systemd-rc-local-generator[28865]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:46:50 np0005541914.localdomain systemd-sysv-generator[28870]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: Reached target Ceph cluster c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:46:50 np0005541914.localdomain systemd-rc-local-generator[28904]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:46:50 np0005541914.localdomain systemd-sysv-generator[28910]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:46:50 np0005541914.localdomain systemd-rc-local-generator[28945]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:46:50 np0005541914.localdomain systemd-sysv-generator[28950]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: Created slice Slice /system/ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: Reached target System Time Set.
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: Reached target System Time Synchronized.
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: Starting Ceph crash.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074...
Dec 02 07:46:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:46:51 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 02 07:46:51 np0005541914.localdomain podman[29009]: 
Dec 02 07:46:51 np0005541914.localdomain podman[29009]: 2025-12-02 07:46:51.23193845 +0000 UTC m=+0.079346294 container create 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, RELEASE=main, architecture=x86_64, version=7, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218)
Dec 02 07:46:51 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e6bb70e7fa91d1c0ac445661cb691ca616e5dd0f18f242c2c9791f48b514dd/merged/etc/ceph/ceph.client.crash.np0005541914.keyring supports timestamps until 2038 (0x7fffffff)
Dec 02 07:46:51 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e6bb70e7fa91d1c0ac445661cb691ca616e5dd0f18f242c2c9791f48b514dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 07:46:51 np0005541914.localdomain podman[29009]: 2025-12-02 07:46:51.202846016 +0000 UTC m=+0.050253890 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:46:51 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e6bb70e7fa91d1c0ac445661cb691ca616e5dd0f18f242c2c9791f48b514dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 07:46:51 np0005541914.localdomain podman[29009]: 2025-12-02 07:46:51.326136613 +0000 UTC m=+0.173544447 container init 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vendor=Red Hat, Inc., release=1763362218, com.redhat.component=rhceph-container, RELEASE=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 07:46:51 np0005541914.localdomain podman[29009]: 2025-12-02 07:46:51.337818628 +0000 UTC m=+0.185226472 container start 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, release=1763362218, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z)
Dec 02 07:46:51 np0005541914.localdomain bash[29009]: 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c
Dec 02 07:46:51 np0005541914.localdomain systemd[1]: Started Ceph crash.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 02 07:46:51 np0005541914.localdomain sudo[28669]: pam_unix(sudo:session): session closed for user root
Dec 02 07:46:51 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 02 07:46:51 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.534+0000 7f2c7fba3640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 02 07:46:51 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.534+0000 7f2c7fba3640 -1 AuthRegistry(0x7f2c780680d0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 02 07:46:51 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.535+0000 7f2c7fba3640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 02 07:46:51 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.535+0000 7f2c7fba3640 -1 AuthRegistry(0x7f2c7fba2000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 02 07:46:51 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.543+0000 7f2c7d918640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 02 07:46:51 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.544+0000 7f2c7d117640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 02 07:46:51 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.545+0000 7f2c7e119640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 02 07:46:51 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: 2025-12-02T07:46:51.545+0000 7f2c7fba3640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 02 07:46:51 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 02 07:46:51 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914[29023]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 02 07:46:52 np0005541914.localdomain systemd[1]: tmp-crun.e5LWQ3.mount: Deactivated successfully.
Dec 02 07:46:54 np0005541914.localdomain sudo[29040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:46:54 np0005541914.localdomain sudo[29040]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:46:54 np0005541914.localdomain sudo[29040]: pam_unix(sudo:session): session closed for user root
Dec 02 07:46:54 np0005541914.localdomain sudo[29055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 --yes --no-systemd
Dec 02 07:46:54 np0005541914.localdomain sudo[29055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:46:54 np0005541914.localdomain podman[29108]: 
Dec 02 07:46:54 np0005541914.localdomain podman[29108]: 2025-12-02 07:46:54.92827539 +0000 UTC m=+0.078469281 container create eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, GIT_CLEAN=True, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, version=7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 02 07:46:54 np0005541914.localdomain systemd[1]: Started libpod-conmon-eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d.scope.
Dec 02 07:46:54 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:46:54 np0005541914.localdomain podman[29108]: 2025-12-02 07:46:54.896100015 +0000 UTC m=+0.046293926 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:46:55 np0005541914.localdomain podman[29108]: 2025-12-02 07:46:55.008222806 +0000 UTC m=+0.158416677 container init eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 07:46:55 np0005541914.localdomain systemd[1]: tmp-crun.jEh5WT.mount: Deactivated successfully.
Dec 02 07:46:55 np0005541914.localdomain podman[29108]: 2025-12-02 07:46:55.02113899 +0000 UTC m=+0.171332871 container start eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.buildah.version=1.41.4, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=1763362218, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, name=rhceph, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 07:46:55 np0005541914.localdomain elastic_grothendieck[29123]: 167 167
Dec 02 07:46:55 np0005541914.localdomain systemd[1]: libpod-eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d.scope: Deactivated successfully.
Dec 02 07:46:55 np0005541914.localdomain podman[29108]: 2025-12-02 07:46:55.02165352 +0000 UTC m=+0.171847391 container attach eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, version=7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, release=1763362218, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 07:46:55 np0005541914.localdomain podman[29108]: 2025-12-02 07:46:55.034861305 +0000 UTC m=+0.185055196 container died eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, architecture=x86_64, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph)
Dec 02 07:46:55 np0005541914.localdomain podman[29128]: 2025-12-02 07:46:55.119288256 +0000 UTC m=+0.082302629 container remove eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_grothendieck, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 07:46:55 np0005541914.localdomain systemd[1]: libpod-conmon-eac3b088d0df393e51ab09798a396d0a37d3183f237973ebdfc6587ac1de631d.scope: Deactivated successfully.
Dec 02 07:46:55 np0005541914.localdomain podman[29148]: 
Dec 02 07:46:55 np0005541914.localdomain podman[29148]: 2025-12-02 07:46:55.354487926 +0000 UTC m=+0.075634270 container create 1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_poitras, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., GIT_CLEAN=True, distribution-scope=public, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 07:46:55 np0005541914.localdomain systemd[1]: Started libpod-conmon-1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09.scope.
Dec 02 07:46:55 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:46:55 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef207e59e192e6959d3447acef19d3d8276fb37b06a73030ebcf12024972096c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 07:46:55 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef207e59e192e6959d3447acef19d3d8276fb37b06a73030ebcf12024972096c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 07:46:55 np0005541914.localdomain podman[29148]: 2025-12-02 07:46:55.330333884 +0000 UTC m=+0.051480248 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:46:55 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef207e59e192e6959d3447acef19d3d8276fb37b06a73030ebcf12024972096c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 07:46:55 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef207e59e192e6959d3447acef19d3d8276fb37b06a73030ebcf12024972096c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 07:46:55 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef207e59e192e6959d3447acef19d3d8276fb37b06a73030ebcf12024972096c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 02 07:46:55 np0005541914.localdomain podman[29148]: 2025-12-02 07:46:55.477496702 +0000 UTC m=+0.198643056 container init 1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_poitras, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, CEPH_POINT_RELEASE=, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, distribution-scope=public, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, name=rhceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 07:46:55 np0005541914.localdomain podman[29148]: 2025-12-02 07:46:55.488183899 +0000 UTC m=+0.209330223 container start 1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_poitras, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, architecture=x86_64, distribution-scope=public, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True)
Dec 02 07:46:55 np0005541914.localdomain podman[29148]: 2025-12-02 07:46:55.488402457 +0000 UTC m=+0.209548821 container attach 1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_poitras, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., architecture=x86_64, release=1763362218, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 07:46:55 np0005541914.localdomain systemd[1]: tmp-crun.7J9zDH.mount: Deactivated successfully.
Dec 02 07:46:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f206391c52d49a4b86af2ec2006fd5c884ec2aecfa3ca60806c0f52919ef512c-merged.mount: Deactivated successfully.
Dec 02 07:46:55 np0005541914.localdomain funny_poitras[29164]: --> passed data devices: 0 physical, 2 LVM
Dec 02 07:46:55 np0005541914.localdomain funny_poitras[29164]: --> relative data size: 1.0
Dec 02 07:46:56 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 02 07:46:56 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 27399dc0-3412-47da-81e0-87f9f4a96daf
Dec 02 07:46:56 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 02 07:46:56 np0005541914.localdomain lvm[29218]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 02 07:46:56 np0005541914.localdomain lvm[29218]: VG ceph_vg0 finished
Dec 02 07:46:56 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 02 07:46:56 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 02 07:46:56 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 02 07:46:56 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 02 07:46:56 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 02 07:46:57 np0005541914.localdomain funny_poitras[29164]:  stderr: got monmap epoch 3
Dec 02 07:46:57 np0005541914.localdomain funny_poitras[29164]: --> Creating keyring file for osd.1
Dec 02 07:46:57 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 02 07:46:57 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 02 07:46:57 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 27399dc0-3412-47da-81e0-87f9f4a96daf --setuser ceph --setgroup ceph
Dec 02 07:46:59 np0005541914.localdomain funny_poitras[29164]:  stderr: 2025-12-02T07:46:57.201+0000 7f3895057a80 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 02 07:46:59 np0005541914.localdomain funny_poitras[29164]:  stderr: 2025-12-02T07:46:57.201+0000 7f3895057a80 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 02 07:46:59 np0005541914.localdomain funny_poitras[29164]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 02 07:46:59 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 02 07:46:59 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 02 07:46:59 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 02 07:46:59 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 02 07:46:59 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 02 07:46:59 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 02 07:46:59 np0005541914.localdomain funny_poitras[29164]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 02 07:46:59 np0005541914.localdomain funny_poitras[29164]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 02 07:46:59 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 02 07:46:59 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e70bab01-7143-4db1-8b99-c97ca4b22476
Dec 02 07:47:00 np0005541914.localdomain lvm[30163]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 02 07:47:00 np0005541914.localdomain lvm[30163]: VG ceph_vg1 finished
Dec 02 07:47:00 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 02 07:47:00 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
Dec 02 07:47:00 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 02 07:47:00 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 02 07:47:00 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Dec 02 07:47:00 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap
Dec 02 07:47:00 np0005541914.localdomain funny_poitras[29164]:  stderr: got monmap epoch 3
Dec 02 07:47:00 np0005541914.localdomain funny_poitras[29164]: --> Creating keyring file for osd.4
Dec 02 07:47:00 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
Dec 02 07:47:00 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
Dec 02 07:47:00 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid e70bab01-7143-4db1-8b99-c97ca4b22476 --setuser ceph --setgroup ceph
Dec 02 07:47:03 np0005541914.localdomain funny_poitras[29164]:  stderr: 2025-12-02T07:47:00.987+0000 7f606666ba80 -1 bluestore(/var/lib/ceph/osd/ceph-4//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Dec 02 07:47:03 np0005541914.localdomain funny_poitras[29164]:  stderr: 2025-12-02T07:47:00.987+0000 7f606666ba80 -1 bluestore(/var/lib/ceph/osd/ceph-4/) _read_fsid unparsable uuid
Dec 02 07:47:03 np0005541914.localdomain funny_poitras[29164]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 02 07:47:03 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 02 07:47:03 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-4 --no-mon-config
Dec 02 07:47:03 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Dec 02 07:47:03 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
Dec 02 07:47:03 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 02 07:47:03 np0005541914.localdomain funny_poitras[29164]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 02 07:47:03 np0005541914.localdomain funny_poitras[29164]: --> ceph-volume lvm activate successful for osd ID: 4
Dec 02 07:47:03 np0005541914.localdomain funny_poitras[29164]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 02 07:47:03 np0005541914.localdomain systemd[1]: libpod-1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09.scope: Deactivated successfully.
Dec 02 07:47:03 np0005541914.localdomain systemd[1]: libpod-1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09.scope: Consumed 3.766s CPU time.
Dec 02 07:47:03 np0005541914.localdomain podman[31077]: 2025-12-02 07:47:03.632537383 +0000 UTC m=+0.036706172 container died 1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_poitras, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 07:47:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ef207e59e192e6959d3447acef19d3d8276fb37b06a73030ebcf12024972096c-merged.mount: Deactivated successfully.
Dec 02 07:47:03 np0005541914.localdomain podman[31077]: 2025-12-02 07:47:03.669898799 +0000 UTC m=+0.074067568 container remove 1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_poitras, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, description=Red Hat Ceph Storage 7)
Dec 02 07:47:03 np0005541914.localdomain systemd[1]: libpod-conmon-1bd3f6dd87a46a4dfba14de163fae8aa1833b080a50f194766a9615e16f5cb09.scope: Deactivated successfully.
Dec 02 07:47:03 np0005541914.localdomain sudo[29055]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:03 np0005541914.localdomain sudo[31092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:47:03 np0005541914.localdomain sudo[31092]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:03 np0005541914.localdomain sudo[31092]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:03 np0005541914.localdomain sudo[31107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 -- lvm list --format json
Dec 02 07:47:03 np0005541914.localdomain sudo[31107]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:04 np0005541914.localdomain podman[31163]: 
Dec 02 07:47:04 np0005541914.localdomain podman[31163]: 2025-12-02 07:47:04.36271235 +0000 UTC m=+0.062436685 container create f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, ceph=True, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_CLEAN=True, release=1763362218, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-type=git)
Dec 02 07:47:04 np0005541914.localdomain systemd[1]: Started libpod-conmon-f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b.scope.
Dec 02 07:47:04 np0005541914.localdomain podman[31163]: 2025-12-02 07:47:04.334622094 +0000 UTC m=+0.034346429 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:04 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:47:04 np0005541914.localdomain podman[31163]: 2025-12-02 07:47:04.472957738 +0000 UTC m=+0.172682073 container init f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64)
Dec 02 07:47:04 np0005541914.localdomain podman[31163]: 2025-12-02 07:47:04.482138096 +0000 UTC m=+0.181862431 container start f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, RELEASE=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 07:47:04 np0005541914.localdomain podman[31163]: 2025-12-02 07:47:04.482715138 +0000 UTC m=+0.182439523 container attach f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1763362218, name=rhceph, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, RELEASE=main)
Dec 02 07:47:04 np0005541914.localdomain practical_hypatia[31178]: 167 167
Dec 02 07:47:04 np0005541914.localdomain systemd[1]: libpod-f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b.scope: Deactivated successfully.
Dec 02 07:47:04 np0005541914.localdomain podman[31163]: 2025-12-02 07:47:04.486128251 +0000 UTC m=+0.185852596 container died f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 07:47:04 np0005541914.localdomain podman[31183]: 2025-12-02 07:47:04.58048728 +0000 UTC m=+0.081111223 container remove f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_hypatia, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.tags=rhceph ceph, version=7, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4)
Dec 02 07:47:04 np0005541914.localdomain systemd[1]: libpod-conmon-f11fbf21e62bf5bfdfdd740d714b3e6559bd3e91b646d68e33d09c5b1fd99d6b.scope: Deactivated successfully.
Dec 02 07:47:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3254d7836790191d0343d08b2b1e49e4862bbe42c4dbd69830baa428da8f1264-merged.mount: Deactivated successfully.
Dec 02 07:47:04 np0005541914.localdomain podman[31202]: 
Dec 02 07:47:04 np0005541914.localdomain podman[31202]: 2025-12-02 07:47:04.777195319 +0000 UTC m=+0.078321695 container create a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_cohen, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, name=rhceph, GIT_BRANCH=main, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, RELEASE=main)
Dec 02 07:47:04 np0005541914.localdomain systemd[1]: Started libpod-conmon-a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8.scope.
Dec 02 07:47:04 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:47:04 np0005541914.localdomain podman[31202]: 2025-12-02 07:47:04.746116567 +0000 UTC m=+0.047242913 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:04 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2ab4ecc6ff019beb442334811f3160869f27cc786f5c706351cf560b46594c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:04 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2ab4ecc6ff019beb442334811f3160869f27cc786f5c706351cf560b46594c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:04 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d2ab4ecc6ff019beb442334811f3160869f27cc786f5c706351cf560b46594c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:04 np0005541914.localdomain podman[31202]: 2025-12-02 07:47:04.891403432 +0000 UTC m=+0.192529778 container init a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_cohen, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=rhceph-container, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=)
Dec 02 07:47:04 np0005541914.localdomain podman[31202]: 2025-12-02 07:47:04.902308827 +0000 UTC m=+0.203435173 container start a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_cohen, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.buildah.version=1.41.4)
Dec 02 07:47:04 np0005541914.localdomain podman[31202]: 2025-12-02 07:47:04.902961062 +0000 UTC m=+0.204087468 container attach a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_cohen, distribution-scope=public, version=7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-type=git, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph)
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]: {
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:     "1": [
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:         {
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "devices": [
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "/dev/loop3"
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             ],
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "lv_name": "ceph_lv0",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "lv_size": "7511998464",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=NWFjF3-2e4a-cjYp-Y2T1-lfu3-Zb16-4w56yf,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c7c8e171-a193-56fb-95fa-8879fcfa7074,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=27399dc0-3412-47da-81e0-87f9f4a96daf,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "lv_uuid": "NWFjF3-2e4a-cjYp-Y2T1-lfu3-Zb16-4w56yf",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "name": "ceph_lv0",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "tags": {
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.block_uuid": "NWFjF3-2e4a-cjYp-Y2T1-lfu3-Zb16-4w56yf",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.cephx_lockbox_secret": "",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.cluster_fsid": "c7c8e171-a193-56fb-95fa-8879fcfa7074",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.cluster_name": "ceph",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.crush_device_class": "",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.encrypted": "0",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.osd_fsid": "27399dc0-3412-47da-81e0-87f9f4a96daf",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.osd_id": "1",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.type": "block",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.vdo": "0"
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             },
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "type": "block",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "vg_name": "ceph_vg0"
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:         }
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:     ],
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:     "4": [
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:         {
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "devices": [
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "/dev/loop4"
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             ],
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "lv_name": "ceph_lv1",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "lv_size": "7511998464",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=Sb75eh-ZyAN-pVXU-lBgz-dsu2-qa6Y-DlAUu4,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=c7c8e171-a193-56fb-95fa-8879fcfa7074,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e70bab01-7143-4db1-8b99-c97ca4b22476,ceph.osd_id=4,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "lv_uuid": "Sb75eh-ZyAN-pVXU-lBgz-dsu2-qa6Y-DlAUu4",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "name": "ceph_lv1",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "tags": {
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.block_uuid": "Sb75eh-ZyAN-pVXU-lBgz-dsu2-qa6Y-DlAUu4",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.cephx_lockbox_secret": "",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.cluster_fsid": "c7c8e171-a193-56fb-95fa-8879fcfa7074",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.cluster_name": "ceph",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.crush_device_class": "",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.encrypted": "0",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.osd_fsid": "e70bab01-7143-4db1-8b99-c97ca4b22476",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.osd_id": "4",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.type": "block",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:                 "ceph.vdo": "0"
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             },
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "type": "block",
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:             "vg_name": "ceph_vg1"
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:         }
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]:     ]
Dec 02 07:47:05 np0005541914.localdomain nice_cohen[31218]: }
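[editor's note] The JSON printed by the nice_cohen container above is keyed by OSD id and matches the shape of `ceph-volume lvm list --format json` output. For reference, a minimal Python sketch of how such output could be reduced to an osd_id -> device summary; the file name, field selection, and the assumption that the JSON was saved to disk are illustrative only, not part of the deployment.

#!/usr/bin/env python3
# Sketch: summarize ceph-volume "lvm list"-style JSON (as printed above) into an
# osd_id -> device mapping. Assumes the JSON was captured to lvm_list.json;
# field names follow the log output.
import json

def summarize(path="lvm_list.json"):
    with open(path) as f:
        data = json.load(f)          # top-level keys are OSD ids ("1", "4", ...)
    summary = {}
    for osd_id, lvs in data.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            summary[osd_id] = {
                "lv_path": lv.get("lv_path"),
                "devices": lv.get("devices", []),
                "osd_fsid": tags.get("ceph.osd_fsid"),
                "cluster_fsid": tags.get("ceph.cluster_fsid"),
            }
    return summary

if __name__ == "__main__":
    for osd_id, info in sorted(summarize().items()):
        print(osd_id, info["lv_path"], info["osd_fsid"])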
Dec 02 07:47:05 np0005541914.localdomain systemd[1]: libpod-a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8.scope: Deactivated successfully.
Dec 02 07:47:05 np0005541914.localdomain podman[31202]: 2025-12-02 07:47:05.305945644 +0000 UTC m=+0.607072040 container died a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_cohen, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, ceph=True, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=)
Dec 02 07:47:05 np0005541914.localdomain podman[31227]: 2025-12-02 07:47:05.392092012 +0000 UTC m=+0.077464741 container remove a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_cohen, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, release=1763362218, name=rhceph, vcs-type=git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, version=7, io.buildah.version=1.41.4, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True)
Dec 02 07:47:05 np0005541914.localdomain systemd[1]: libpod-conmon-a8326a40a29f022cf796a9eb0dfbf08f9ebad09dd52a6eacd1f65340104661f8.scope: Deactivated successfully.
Dec 02 07:47:05 np0005541914.localdomain sudo[31107]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:05 np0005541914.localdomain sudo[31241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:47:05 np0005541914.localdomain sudo[31241]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:05 np0005541914.localdomain sudo[31241]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:05 np0005541914.localdomain sudo[31256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 07:47:05 np0005541914.localdomain sudo[31256]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-7d2ab4ecc6ff019beb442334811f3160869f27cc786f5c706351cf560b46594c-merged.mount: Deactivated successfully.
Dec 02 07:47:06 np0005541914.localdomain podman[31313]: 
Dec 02 07:47:06 np0005541914.localdomain podman[31313]: 2025-12-02 07:47:06.108843757 +0000 UTC m=+0.057676130 container create 04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_jackson, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, name=rhceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 07:47:06 np0005541914.localdomain systemd[1]: Started libpod-conmon-04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec.scope.
Dec 02 07:47:06 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:47:06 np0005541914.localdomain podman[31313]: 2025-12-02 07:47:06.174368581 +0000 UTC m=+0.123200924 container init 04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_jackson, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, RELEASE=main, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, release=1763362218)
Dec 02 07:47:06 np0005541914.localdomain podman[31313]: 2025-12-02 07:47:06.080664368 +0000 UTC m=+0.029496771 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:06 np0005541914.localdomain podman[31313]: 2025-12-02 07:47:06.184185524 +0000 UTC m=+0.133017927 container start 04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_jackson, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, distribution-scope=public, ceph=True, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, RELEASE=main)
Dec 02 07:47:06 np0005541914.localdomain podman[31313]: 2025-12-02 07:47:06.184432734 +0000 UTC m=+0.133265077 container attach 04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_jackson, name=rhceph, architecture=x86_64, ceph=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, distribution-scope=public, com.redhat.component=rhceph-container, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 07:47:06 np0005541914.localdomain romantic_jackson[31328]: 167 167
Dec 02 07:47:06 np0005541914.localdomain systemd[1]: libpod-04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec.scope: Deactivated successfully.
Dec 02 07:47:06 np0005541914.localdomain podman[31313]: 2025-12-02 07:47:06.187096448 +0000 UTC m=+0.135928831 container died 04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_jackson, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, RELEASE=main, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True)
Dec 02 07:47:06 np0005541914.localdomain podman[31333]: 2025-12-02 07:47:06.275175141 +0000 UTC m=+0.078194570 container remove 04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_jackson, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, ceph=True, release=1763362218)
Dec 02 07:47:06 np0005541914.localdomain systemd[1]: libpod-conmon-04a0a9f430a73cb3f84aa5c91c6e0fe74dbe600cbdab49bf78c1b28a3a6353ec.scope: Deactivated successfully.
Dec 02 07:47:06 np0005541914.localdomain podman[31361]: 2025-12-02 07:47:06.563341376 +0000 UTC m=+0.042680605 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:06 np0005541914.localdomain podman[31361]: 
Dec 02 07:47:06 np0005541914.localdomain podman[31361]: 2025-12-02 07:47:06.59113859 +0000 UTC m=+0.070477809 container create f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test, build-date=2025-11-26T19:44:28Z, vcs-type=git, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, release=1763362218, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 07:47:06 np0005541914.localdomain systemd[1]: Started libpod-conmon-f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949.scope.
Dec 02 07:47:06 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:47:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-1f5f03fd0defdf4be6f50a6bd2280c9157dd5cb009a445f1ea0fe590b44c5439-merged.mount: Deactivated successfully.
Dec 02 07:47:06 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b186727602b9ca9eee2285af88533dcdfac5b9e766d66dbbcf1a5912fdeeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:06 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b186727602b9ca9eee2285af88533dcdfac5b9e766d66dbbcf1a5912fdeeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:06 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b186727602b9ca9eee2285af88533dcdfac5b9e766d66dbbcf1a5912fdeeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:06 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b186727602b9ca9eee2285af88533dcdfac5b9e766d66dbbcf1a5912fdeeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:06 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b186727602b9ca9eee2285af88533dcdfac5b9e766d66dbbcf1a5912fdeeb/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:06 np0005541914.localdomain podman[31361]: 2025-12-02 07:47:06.712317224 +0000 UTC m=+0.191656443 container init f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, release=1763362218, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=)
Dec 02 07:47:06 np0005541914.localdomain systemd[1]: tmp-crun.utJxGp.mount: Deactivated successfully.
Dec 02 07:47:06 np0005541914.localdomain podman[31361]: 2025-12-02 07:47:06.724917326 +0000 UTC m=+0.204256555 container start f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc.)
Dec 02 07:47:06 np0005541914.localdomain podman[31361]: 2025-12-02 07:47:06.725224327 +0000 UTC m=+0.204563586 container attach f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_BRANCH=main, release=1763362218, com.redhat.component=rhceph-container, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4)
Dec 02 07:47:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test[31376]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 02 07:47:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test[31376]:                             [--no-systemd] [--no-tmpfs]
Dec 02 07:47:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test[31376]: ceph-volume activate: error: unrecognized arguments: --bad-option
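[editor's note] The "-activate-test" container above exits immediately after printing the `ceph-volume activate` usage text: passing a deliberately invalid option such as `--bad-option` is a way to capture that usage and check whether a flag (here `--no-tmpfs`, visible in the usage line) is advertised before the real activation runs. A minimal sketch of such a probe; only the command line and flag name are taken from the log, the rest (and whether cephadm probes exactly this way) is assumed.

#!/usr/bin/env python3
# Sketch: probe whether "ceph-volume activate" advertises a given flag by running it
# with an invalid option and scanning the usage text it prints. Illustrative only;
# the exact check cephadm performs may differ from this.
import subprocess

def activate_supports(flag="--no-tmpfs"):
    proc = subprocess.run(
        ["ceph-volume", "activate", "--bad-option"],
        capture_output=True, text=True,
    )
    usage = proc.stdout + proc.stderr   # argparse prints usage and the error on stderr
    return flag in usage

if __name__ == "__main__":
    print("activate supports --no-tmpfs:", activate_supports())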
Dec 02 07:47:06 np0005541914.localdomain systemd[1]: libpod-f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949.scope: Deactivated successfully.
Dec 02 07:47:06 np0005541914.localdomain podman[31361]: 2025-12-02 07:47:06.945939873 +0000 UTC m=+0.425279152 container died f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, name=rhceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 07:47:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f54b186727602b9ca9eee2285af88533dcdfac5b9e766d66dbbcf1a5912fdeeb-merged.mount: Deactivated successfully.
Dec 02 07:47:07 np0005541914.localdomain systemd-journald[619]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation.
Dec 02 07:47:07 np0005541914.localdomain systemd-journald[619]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 02 07:47:07 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 07:47:07 np0005541914.localdomain podman[31381]: 2025-12-02 07:47:07.04233564 +0000 UTC m=+0.086825726 container remove f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate-test, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, release=1763362218, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_CLEAN=True)
Dec 02 07:47:07 np0005541914.localdomain systemd[1]: libpod-conmon-f228c3deeb20edd8aba62b7cf3f1885b151f57fdcb6681269de0c7578b65c949.scope: Deactivated successfully.
Dec 02 07:47:07 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 07:47:07 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:47:07 np0005541914.localdomain systemd-sysv-generator[31444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:47:07 np0005541914.localdomain systemd-rc-local-generator[31440]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:47:07 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:47:07 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:47:07 np0005541914.localdomain systemd-rc-local-generator[31479]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:47:07 np0005541914.localdomain systemd-sysv-generator[31484]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:47:07 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:47:07 np0005541914.localdomain systemd[1]: Starting Ceph osd.1 for c7c8e171-a193-56fb-95fa-8879fcfa7074...
Dec 02 07:47:08 np0005541914.localdomain podman[31545]: 
Dec 02 07:47:08 np0005541914.localdomain podman[31545]: 2025-12-02 07:47:08.249178112 +0000 UTC m=+0.088086376 container create 2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.expose-services=, GIT_CLEAN=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vcs-type=git, architecture=x86_64, name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.tags=rhceph ceph)
Dec 02 07:47:08 np0005541914.localdomain systemd[1]: tmp-crun.CkIe7G.mount: Deactivated successfully.
Dec 02 07:47:08 np0005541914.localdomain podman[31545]: 2025-12-02 07:47:08.207672694 +0000 UTC m=+0.046580988 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:08 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:47:08 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583ca6775c994fa7ed8a77ec4e549f4e241ffad956cfbe9948cd544cbd459f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:08 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583ca6775c994fa7ed8a77ec4e549f4e241ffad956cfbe9948cd544cbd459f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:08 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583ca6775c994fa7ed8a77ec4e549f4e241ffad956cfbe9948cd544cbd459f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:08 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583ca6775c994fa7ed8a77ec4e549f4e241ffad956cfbe9948cd544cbd459f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:08 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0583ca6775c994fa7ed8a77ec4e549f4e241ffad956cfbe9948cd544cbd459f7/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:08 np0005541914.localdomain podman[31545]: 2025-12-02 07:47:08.380111916 +0000 UTC m=+0.219020190 container init 2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate, ceph=True, release=1763362218, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 07:47:08 np0005541914.localdomain podman[31545]: 2025-12-02 07:47:08.390776302 +0000 UTC m=+0.229684566 container start 2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, version=7, RELEASE=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 07:47:08 np0005541914.localdomain podman[31545]: 2025-12-02 07:47:08.391069583 +0000 UTC m=+0.229977927 container attach 2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, distribution-scope=public)
Dec 02 07:47:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 02 07:47:09 np0005541914.localdomain bash[31545]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 02 07:47:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 02 07:47:09 np0005541914.localdomain bash[31545]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Dec 02 07:47:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 02 07:47:09 np0005541914.localdomain bash[31545]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Dec 02 07:47:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 02 07:47:09 np0005541914.localdomain bash[31545]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 02 07:47:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 02 07:47:09 np0005541914.localdomain bash[31545]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec 02 07:47:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 02 07:47:09 np0005541914.localdomain bash[31545]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 02 07:47:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate[31560]: --> ceph-volume raw activate successful for osd ID: 1
Dec 02 07:47:09 np0005541914.localdomain bash[31545]: --> ceph-volume raw activate successful for osd ID: 1
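[editor's note] The "Running command:" lines above spell out what `ceph-volume raw activate` did for osd.1: fix ownership of the OSD directory, prime it from the BlueStore device with ceph-bluestore-tool, fix ownership of the device nodes, and symlink the block device into place. A minimal Python sketch that replays the same sequence, with the device and OSD paths copied from the log; it is an illustration of the logged steps, not a substitute for ceph-volume, and only makes sense on a host with this exact layout.

#!/usr/bin/env python3
# Sketch: replay the activation steps logged above for osd.1.
# Paths are taken verbatim from the log lines; run order matches the log.
import subprocess

OSD_DIR = "/var/lib/ceph/osd/ceph-1"
BLOCK_DEV = "/dev/mapper/ceph_vg0-ceph_lv0"

steps = [
    ["chown", "-R", "ceph:ceph", OSD_DIR],
    ["ceph-bluestore-tool", "prime-osd-dir", "--path", OSD_DIR,
     "--no-mon-config", "--dev", BLOCK_DEV],
    ["chown", "-h", "ceph:ceph", BLOCK_DEV],
    ["chown", "-R", "ceph:ceph", "/dev/dm-0"],
    ["ln", "-s", BLOCK_DEV, OSD_DIR + "/block"],
    ["chown", "-R", "ceph:ceph", OSD_DIR],
]

if __name__ == "__main__":
    for cmd in steps:
        print("Running command:", " ".join(cmd))
        subprocess.run(cmd, check=True)   # stop at the first failing step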
Dec 02 07:47:09 np0005541914.localdomain systemd[1]: libpod-2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc.scope: Deactivated successfully.
Dec 02 07:47:09 np0005541914.localdomain podman[31545]: 2025-12-02 07:47:09.172410065 +0000 UTC m=+1.011318339 container died 2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate, ceph=True, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 07:47:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-0583ca6775c994fa7ed8a77ec4e549f4e241ffad956cfbe9948cd544cbd459f7-merged.mount: Deactivated successfully.
Dec 02 07:47:09 np0005541914.localdomain podman[31691]: 2025-12-02 07:47:09.271592123 +0000 UTC m=+0.087496253 container remove 2a6b46d3cff9fa20dc013978ce8dc0be0b1f7f1ba9e57b439649287a6dfde9fc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1-activate, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, version=7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public)
Dec 02 07:47:09 np0005541914.localdomain podman[31752]: 
Dec 02 07:47:09 np0005541914.localdomain podman[31752]: 2025-12-02 07:47:09.631864068 +0000 UTC m=+0.071376033 container create 3d64e5e3c63fd4353268c2b77cd98845bd8df4357249a1c3c35d00ad296d91be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, ceph=True, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Dec 02 07:47:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eda494c8fcfc88d0d18776bddf0fd1b75b39c1a1b38c1572e6fd850e5a6fe5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:09 np0005541914.localdomain podman[31752]: 2025-12-02 07:47:09.604812184 +0000 UTC m=+0.044324149 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eda494c8fcfc88d0d18776bddf0fd1b75b39c1a1b38c1572e6fd850e5a6fe5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eda494c8fcfc88d0d18776bddf0fd1b75b39c1a1b38c1572e6fd850e5a6fe5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eda494c8fcfc88d0d18776bddf0fd1b75b39c1a1b38c1572e6fd850e5a6fe5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eda494c8fcfc88d0d18776bddf0fd1b75b39c1a1b38c1572e6fd850e5a6fe5/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:09 np0005541914.localdomain podman[31752]: 2025-12-02 07:47:09.754850883 +0000 UTC m=+0.194362858 container init 3d64e5e3c63fd4353268c2b77cd98845bd8df4357249a1c3c35d00ad296d91be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1, ceph=True, RELEASE=main, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., release=1763362218, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 02 07:47:09 np0005541914.localdomain podman[31752]: 2025-12-02 07:47:09.7647591 +0000 UTC m=+0.204271065 container start 3d64e5e3c63fd4353268c2b77cd98845bd8df4357249a1c3c35d00ad296d91be (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.expose-services=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 07:47:09 np0005541914.localdomain bash[31752]: 3d64e5e3c63fd4353268c2b77cd98845bd8df4357249a1c3c35d00ad296d91be
Dec 02 07:47:09 np0005541914.localdomain systemd[1]: Started Ceph osd.1 for c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 02 07:47:09 np0005541914.localdomain ceph-osd[31770]: set uid:gid to 167:167 (ceph:ceph)
Dec 02 07:47:09 np0005541914.localdomain ceph-osd[31770]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Dec 02 07:47:09 np0005541914.localdomain ceph-osd[31770]: pidfile_write: ignore empty --pid-file
Dec 02 07:47:09 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 02 07:47:09 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 02 07:47:09 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 02 07:47:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 02 07:47:09 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 02 07:47:09 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 02 07:47:09 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 02 07:47:09 np0005541914.localdomain ceph-osd[31770]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Dec 02 07:47:09 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) close
Dec 02 07:47:09 np0005541914.localdomain sudo[31256]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:09 np0005541914.localdomain sudo[31783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:47:09 np0005541914.localdomain sudo[31783]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:09 np0005541914.localdomain sudo[31783]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:10 np0005541914.localdomain sudo[31798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 07:47:10 np0005541914.localdomain sudo[31798]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) close
Dec 02 07:47:10 np0005541914.localdomain systemd[1]: tmp-crun.AEtDqd.mount: Deactivated successfully.
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: load: jerasure load: lrc 
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) close
Dec 02 07:47:10 np0005541914.localdomain podman[31861]: 
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) close
Dec 02 07:47:10 np0005541914.localdomain podman[31861]: 2025-12-02 07:47:10.628483884 +0000 UTC m=+0.071623303 container create f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_jepsen, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, GIT_BRANCH=main, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 07:47:10 np0005541914.localdomain systemd[1]: Started libpod-conmon-f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026.scope.
Dec 02 07:47:10 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:47:10 np0005541914.localdomain podman[31861]: 2025-12-02 07:47:10.598010575 +0000 UTC m=+0.041149974 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:10 np0005541914.localdomain podman[31861]: 2025-12-02 07:47:10.702458147 +0000 UTC m=+0.145597546 container init f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_jepsen, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, architecture=x86_64, com.redhat.component=rhceph-container, release=1763362218, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True)
Dec 02 07:47:10 np0005541914.localdomain funny_jepsen[31880]: 167 167
Dec 02 07:47:10 np0005541914.localdomain systemd[1]: libpod-f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026.scope: Deactivated successfully.
Dec 02 07:47:10 np0005541914.localdomain podman[31861]: 2025-12-02 07:47:10.715108491 +0000 UTC m=+0.158247920 container start f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_jepsen, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, version=7, vcs-type=git)
Dec 02 07:47:10 np0005541914.localdomain podman[31861]: 2025-12-02 07:47:10.715420203 +0000 UTC m=+0.158559652 container attach f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_jepsen, distribution-scope=public, GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_BRANCH=main, io.openshift.expose-services=)
Dec 02 07:47:10 np0005541914.localdomain podman[31861]: 2025-12-02 07:47:10.718601167 +0000 UTC m=+0.161740626 container died f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_jepsen, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-type=git, ceph=True, GIT_BRANCH=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 07:47:10 np0005541914.localdomain podman[31885]: 2025-12-02 07:47:10.809906717 +0000 UTC m=+0.081005099 container remove f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_jepsen, GIT_CLEAN=True, ceph=True, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, release=1763362218, vcs-type=git, version=7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, RELEASE=main)
Dec 02 07:47:10 np0005541914.localdomain systemd[1]: libpod-conmon-f878db47a90dab16f6f42cde5c789346b3844459ce2464244b3fde469898b026.scope: Deactivated successfully.
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7ee00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluefs mount
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluefs mount shared_bdev_used = 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: RocksDB version: 7.9.2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Git sha 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: DB SUMMARY
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: DB Session ID:  LVMBL2HNWNG9X0KVIREE
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: CURRENT file:  CURRENT
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: IDENTITY file:  IDENTITY
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                         Options.error_if_exists: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.create_if_missing: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                         Options.paranoid_checks: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                                     Options.env: 0x56102cd99c70
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                                Options.info_log: 0x56102cf1cbe0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_file_opening_threads: 16
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                              Options.statistics: (nil)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.use_fsync: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.max_log_file_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                         Options.allow_fallocate: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.use_direct_reads: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.create_missing_column_families: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                              Options.db_log_dir: 
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                                 Options.wal_dir: db.wal
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.advise_random_on_open: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.write_buffer_manager: 0x56102bf68140
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                            Options.rate_limiter: (nil)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.unordered_write: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.row_cache: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                              Options.wal_filter: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.allow_ingest_behind: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.two_write_queues: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.manual_wal_flush: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.wal_compression: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.atomic_flush: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.log_readahead_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.allow_data_in_errors: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.db_host_id: __hostname__
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.max_background_jobs: 4
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.max_background_compactions: -1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.max_subcompactions: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.max_open_files: -1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.bytes_per_sync: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.max_background_flushes: -1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Compression algorithms supported:
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kZSTD supported: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kXpressCompression supported: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kBZip2Compression supported: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kLZ4Compression supported: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kZlibCompression supported: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kLZ4HCCompression supported: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kSnappyCompression supported: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf56850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: 
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf56850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf56850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf56850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf56850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf56850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cda0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf56850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cfc0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf562d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cfc0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf562d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1cfc0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf562d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a5e44943-061a-4b6e-9a59-1f91106222d9
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661630928004, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661630928256, "job": 1, "event": "recovery_finished"}
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: freelist init
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: freelist _read_cfg
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bluefs umount
Dec 02 07:47:10 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) close
Dec 02 07:47:11 np0005541914.localdomain podman[32106]: 2025-12-02 07:47:11.149420433 +0000 UTC m=+0.069975189 container create 24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main, version=7, ceph=True, com.redhat.component=rhceph-container, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: bdev(0x56102bf7f180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: bluefs mount
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: bluefs mount shared_bdev_used = 4718592
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: RocksDB version: 7.9.2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Git sha 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: DB SUMMARY
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: DB Session ID:  LVMBL2HNWNG9X0KVIREF
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: CURRENT file:  CURRENT
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: IDENTITY file:  IDENTITY
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                         Options.error_if_exists: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.create_if_missing: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                         Options.paranoid_checks: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                                     Options.env: 0x56102c00a700
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                                Options.info_log: 0x56102cf488e0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_file_opening_threads: 16
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                              Options.statistics: (nil)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.use_fsync: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.max_log_file_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                         Options.allow_fallocate: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.use_direct_reads: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.create_missing_column_families: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                              Options.db_log_dir: 
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                                 Options.wal_dir: db.wal
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.advise_random_on_open: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.write_buffer_manager: 0x56102bf695e0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                            Options.rate_limiter: (nil)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.unordered_write: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.row_cache: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                              Options.wal_filter: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.allow_ingest_behind: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.two_write_queues: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.manual_wal_flush: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.wal_compression: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.atomic_flush: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.log_readahead_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.allow_data_in_errors: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.db_host_id: __hostname__
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.max_background_jobs: 4
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.max_background_compactions: -1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.max_subcompactions: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.max_open_files: -1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.bytes_per_sync: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.max_background_flushes: -1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Compression algorithms supported:
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kZSTD supported: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kXpressCompression supported: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kBZip2Compression supported: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kLZ4Compression supported: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kZlibCompression supported: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kLZ4HCCompression supported: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         kSnappyCompression supported: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf562d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:11 np0005541914.localdomain systemd[1]: Started libpod-conmon-24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57.scope.
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: 
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf562d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf562d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf562d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf562d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf562d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf1d180)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf562d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:11 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf48a60)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf57610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf48a60)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf57610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56102cf48a60)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x56102bf57610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a5e44943-061a-4b6e-9a59-1f91106222d9
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661631209801, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661631215833, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661631, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5e44943-061a-4b6e-9a59-1f91106222d9", "db_session_id": "LVMBL2HNWNG9X0KVIREF", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661631220609, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661631, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5e44943-061a-4b6e-9a59-1f91106222d9", "db_session_id": "LVMBL2HNWNG9X0KVIREF", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 02 07:47:11 np0005541914.localdomain podman[32106]: 2025-12-02 07:47:11.121961403 +0000 UTC m=+0.042516149 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661631230161, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661631, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a5e44943-061a-4b6e-9a59-1f91106222d9", "db_session_id": "LVMBL2HNWNG9X0KVIREF", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 02 07:47:11 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857771b9da63fd80334a4d85644724539b2942fbe5375bd4441f8fb15562019f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661631235584, "job": 1, "event": "recovery_finished"}
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 02 07:47:11 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857771b9da63fd80334a4d85644724539b2942fbe5375bd4441f8fb15562019f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:11 np0005541914.localdomain systemd[1]: tmp-crun.hvM3ZP.mount: Deactivated successfully.
Dec 02 07:47:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-24d99e87a9caef3e8a709065b423d27bec43ce946290715f03773f8cd58f18f2-merged.mount: Deactivated successfully.
Dec 02 07:47:11 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857771b9da63fd80334a4d85644724539b2942fbe5375bd4441f8fb15562019f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:11 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857771b9da63fd80334a4d85644724539b2942fbe5375bd4441f8fb15562019f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:11 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/857771b9da63fd80334a4d85644724539b2942fbe5375bd4441f8fb15562019f/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56102bfbe700
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: DB pointer 0x56102ce79a00
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                          Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf57610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf57610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf57610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
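Each of the per-column-family dumps above ends with a one-line "Block cache entry stats(count,size,portion): ..." summary. As a minimal sketch (assuming Python 3 and the exact formatting shown in those lines), the per-entry-type counts and sizes can be pulled out with a small regex:

    import re

    # Example line copied from the [O-2] dump above.
    line = ("Block cache entry stats(count,size,portion): "
            "FilterBlock(1,0.11 KB,2.08616e-05%) "
            "IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)")

    # Each entry looks like Name(count,size KB,portion%).
    pattern = re.compile(r"(\w+)\((\d+),([\d.]+) KB,([\d.eE+-]+)%\)")

    for name, count, size_kb, portion in pattern.findall(line):
        print(f"{name}: count={count}, size={size_kb} KB, portion={portion}%")

Run against the line above it prints one row each for FilterBlock, IndexBlock and Misc.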
Dec 02 07:47:11 np0005541914.localdomain podman[32106]: 2025-12-02 07:47:11.294620594 +0000 UTC m=+0.215175320 container init 24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.expose-services=, GIT_CLEAN=True, name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: _get_class not permitted to load lua
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: _get_class not permitted to load sdk
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: _get_class not permitted to load test_remote_reads
Dec 02 07:47:11 np0005541914.localdomain podman[32106]: 2025-12-02 07:47:11.304287091 +0000 UTC m=+0.224841837 container start 24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test, distribution-scope=public, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, release=1763362218, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, architecture=x86_64, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 02 07:47:11 np0005541914.localdomain podman[32106]: 2025-12-02 07:47:11.30452686 +0000 UTC m=+0.225081606 container attach 24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, ceph=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, release=1763362218)
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: osd.1 0 load_pgs
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: osd.1 0 load_pgs opened 0 pgs
Dec 02 07:47:11 np0005541914.localdomain ceph-osd[31770]: osd.1 0 log_to_monitors true
Dec 02 07:47:11 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1[31766]: 2025-12-02T07:47:11.306+0000 7f831d222a80 -1 osd.1 0 log_to_monitors true
Dec 02 07:47:11 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test[32133]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Dec 02 07:47:11 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test[32133]:                             [--no-systemd] [--no-tmpfs]
Dec 02 07:47:11 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test[32133]: ceph-volume activate: error: unrecognized arguments: --bad-option
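The osd-4-activate-test container above passes a deliberately invalid --bad-option flag and gets back the ceph-volume activate usage text, which looks like a capability probe: the argparse error still proves that the plain activate subcommand (and its --osd-id/--osd-uuid/--no-systemd/--no-tmpfs flags) exists even though the call itself fails. A minimal sketch of such a probe, assuming ceph-volume is on PATH:

    import subprocess

    # Probe whether `ceph-volume activate` exists by passing a bogus flag;
    # argparse prints the usage text seen in the log and exits non-zero,
    # but only after the subcommand itself has been resolved.
    result = subprocess.run(
        ["ceph-volume", "activate", "--bad-option"],
        capture_output=True, text=True,
    )
    has_plain_activate = "unrecognized arguments" in result.stderr
    print("plain `activate` subcommand available:", has_plain_activate)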
Dec 02 07:47:11 np0005541914.localdomain systemd[1]: libpod-24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57.scope: Deactivated successfully.
Dec 02 07:47:11 np0005541914.localdomain podman[32106]: 2025-12-02 07:47:11.527111299 +0000 UTC m=+0.447666085 container died 24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-type=git, release=1763362218, description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, RELEASE=main)
Dec 02 07:47:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-857771b9da63fd80334a4d85644724539b2942fbe5375bd4441f8fb15562019f-merged.mount: Deactivated successfully.
Dec 02 07:47:11 np0005541914.localdomain podman[32341]: 2025-12-02 07:47:11.618049994 +0000 UTC m=+0.078954759 container remove 24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate-test, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., release=1763362218, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 02 07:47:11 np0005541914.localdomain systemd[1]: libpod-conmon-24bbd25405b94bd93c620e13171c24eb0dac242bb2a7fd4e5b49fa8d8bac2e57.scope: Deactivated successfully.
Dec 02 07:47:11 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:47:11 np0005541914.localdomain systemd-sysv-generator[32399]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:47:11 np0005541914.localdomain systemd-rc-local-generator[32394]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:47:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:47:12 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:47:12 np0005541914.localdomain systemd-rc-local-generator[32437]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:47:12 np0005541914.localdomain systemd-sysv-generator[32442]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:47:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
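Both reloads above warn about line 24 of insights-client-boot.service still using the deprecated MemoryLimit= directive; the warning itself names the replacement, MemoryMax=. A minimal read-only sketch (plain Python 3, directory taken from the warning) for listing every installed unit that would trip the same warning:

    from pathlib import Path

    # Report unit files still using the deprecated MemoryLimit= directive
    # (systemd's warning above says to use MemoryMax= instead).
    for unit in Path("/usr/lib/systemd/system").glob("*.service"):
        for lineno, text in enumerate(unit.read_text(errors="replace").splitlines(), 1):
            if text.strip().startswith("MemoryLimit="):
                value = text.split("=", 1)[1]
                print(f"{unit}:{lineno}: MemoryLimit={value} -> use MemoryMax={value}")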
Dec 02 07:47:12 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 02 07:47:12 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 02 07:47:12 np0005541914.localdomain systemd[1]: Starting Ceph osd.4 for c7c8e171-a193-56fb-95fa-8879fcfa7074...
Dec 02 07:47:12 np0005541914.localdomain podman[32499]: 
Dec 02 07:47:12 np0005541914.localdomain podman[32499]: 2025-12-02 07:47:12.764833904 +0000 UTC m=+0.075782406 container create a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, RELEASE=main, release=1763362218, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 07:47:12 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:47:12 np0005541914.localdomain podman[32499]: 2025-12-02 07:47:12.733819375 +0000 UTC m=+0.044767897 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:12 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21c6f473da69728c129edfbf629f19d9847d1c96f622ccd62f7909840db2079/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:12 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21c6f473da69728c129edfbf629f19d9847d1c96f622ccd62f7909840db2079/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:12 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21c6f473da69728c129edfbf629f19d9847d1c96f622ccd62f7909840db2079/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:12 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21c6f473da69728c129edfbf629f19d9847d1c96f622ccd62f7909840db2079/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:12 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d21c6f473da69728c129edfbf629f19d9847d1c96f622ccd62f7909840db2079/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:12 np0005541914.localdomain podman[32499]: 2025-12-02 07:47:12.893888325 +0000 UTC m=+0.204836827 container init a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, name=rhceph, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph)
Dec 02 07:47:12 np0005541914.localdomain podman[32499]: 2025-12-02 07:47:12.90324465 +0000 UTC m=+0.214193152 container start a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True)
Dec 02 07:47:12 np0005541914.localdomain podman[32499]: 2025-12-02 07:47:12.903474259 +0000 UTC m=+0.214422761 container attach a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=1763362218, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container)
Dec 02 07:47:13 np0005541914.localdomain ceph-osd[31770]: osd.1 0 done with init, starting boot process
Dec 02 07:47:13 np0005541914.localdomain ceph-osd[31770]: osd.1 0 start_boot
Dec 02 07:47:13 np0005541914.localdomain ceph-osd[31770]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 02 07:47:13 np0005541914.localdomain ceph-osd[31770]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 02 07:47:13 np0005541914.localdomain ceph-osd[31770]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 02 07:47:13 np0005541914.localdomain ceph-osd[31770]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 02 07:47:13 np0005541914.localdomain ceph-osd[31770]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 02 07:47:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 02 07:47:13 np0005541914.localdomain bash[32499]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 02 07:47:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec 02 07:47:13 np0005541914.localdomain bash[32499]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Dec 02 07:47:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec 02 07:47:13 np0005541914.localdomain bash[32499]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Dec 02 07:47:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 02 07:47:13 np0005541914.localdomain bash[32499]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 02 07:47:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Dec 02 07:47:13 np0005541914.localdomain bash[32499]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Dec 02 07:47:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 02 07:47:13 np0005541914.localdomain bash[32499]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Dec 02 07:47:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate[32513]: --> ceph-volume raw activate successful for osd ID: 4
Dec 02 07:47:13 np0005541914.localdomain bash[32499]: --> ceph-volume raw activate successful for osd ID: 4
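The osd-4-activate container logs the exact command sequence ceph-volume raw activate ran for osd.4, ending in the success line above. A minimal sketch that replays the same six steps (paths and the ceph_vg1-ceph_lv1 device copied verbatim from those lines; this only illustrates the sequence and is not a substitute for running ceph-volume itself):

    import subprocess

    osd_dir = "/var/lib/ceph/osd/ceph-4"
    device = "/dev/mapper/ceph_vg1-ceph_lv1"

    # The same steps ceph-volume raw activate logged above, in order.
    steps = [
        ["/usr/bin/chown", "-R", "ceph:ceph", osd_dir],
        ["/usr/bin/ceph-bluestore-tool", "prime-osd-dir",
         "--path", osd_dir, "--no-mon-config", "--dev", device],
        ["/usr/bin/chown", "-h", "ceph:ceph", device],
        ["/usr/bin/chown", "-R", "ceph:ceph", "/dev/dm-1"],
        ["/usr/bin/ln", "-s", device, f"{osd_dir}/block"],
        ["/usr/bin/chown", "-R", "ceph:ceph", osd_dir],
    ]

    for cmd in steps:
        print("Running command:", " ".join(cmd))
        subprocess.run(cmd, check=True)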
Dec 02 07:47:13 np0005541914.localdomain systemd[1]: libpod-a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789.scope: Deactivated successfully.
Dec 02 07:47:13 np0005541914.localdomain podman[32499]: 2025-12-02 07:47:13.576072781 +0000 UTC m=+0.887021283 container died a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_CLEAN=True, release=1763362218, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, version=7, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 07:47:13 np0005541914.localdomain systemd[1]: tmp-crun.GIZjtC.mount: Deactivated successfully.
Dec 02 07:47:13 np0005541914.localdomain podman[32628]: 2025-12-02 07:47:13.69249282 +0000 UTC m=+0.105910850 container remove a13d48fc38ccb1cdd3a3ab2c2dd62eca088408b0a5363814517e93ae7f73c789 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4-activate, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, release=1763362218, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, architecture=x86_64, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 02 07:47:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d21c6f473da69728c129edfbf629f19d9847d1c96f622ccd62f7909840db2079-merged.mount: Deactivated successfully.
Dec 02 07:47:14 np0005541914.localdomain podman[32689]: 
Dec 02 07:47:14 np0005541914.localdomain podman[32689]: 2025-12-02 07:47:14.048377875 +0000 UTC m=+0.079477810 container create 83594567313a2481b3b1ef77fd5820fe0afdf611352f9cd399dd2edd595553b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4, RELEASE=main, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, release=1763362218, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_CLEAN=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, name=rhceph)
Dec 02 07:47:14 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99d296e38dcf84f39ca144658150c830740a09750e397b5878edd0161b2f87a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:14 np0005541914.localdomain podman[32689]: 2025-12-02 07:47:14.019164296 +0000 UTC m=+0.050264301 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:14 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99d296e38dcf84f39ca144658150c830740a09750e397b5878edd0161b2f87a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:14 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99d296e38dcf84f39ca144658150c830740a09750e397b5878edd0161b2f87a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:14 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99d296e38dcf84f39ca144658150c830740a09750e397b5878edd0161b2f87a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:14 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99d296e38dcf84f39ca144658150c830740a09750e397b5878edd0161b2f87a/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:14 np0005541914.localdomain podman[32689]: 2025-12-02 07:47:14.17109636 +0000 UTC m=+0.202196295 container init 83594567313a2481b3b1ef77fd5820fe0afdf611352f9cd399dd2edd595553b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4, architecture=x86_64, release=1763362218, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc.)
Dec 02 07:47:14 np0005541914.localdomain podman[32689]: 2025-12-02 07:47:14.199910243 +0000 UTC m=+0.231010178 container start 83594567313a2481b3b1ef77fd5820fe0afdf611352f9cd399dd2edd595553b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 07:47:14 np0005541914.localdomain bash[32689]: 83594567313a2481b3b1ef77fd5820fe0afdf611352f9cd399dd2edd595553b5
Dec 02 07:47:14 np0005541914.localdomain systemd[1]: Started Ceph osd.4 for c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: set uid:gid to 167:167 (ceph:ceph)
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-osd, pid 2
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: pidfile_write: ignore empty --pid-file
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
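The bdev open line above reports the same size three ways (decimal bytes, hex, rounded GiB) plus the 4 KiB block size; a one-liner confirming they agree, with the values copied from that line:

    size_bytes = 7511998464           # from the bdev open line above
    assert size_bytes == 0x1bfc00000  # hex form printed in the same line
    block_size = 4096

    print(f"{size_bytes / 2**30:.2f} GiB "
          f"({size_bytes // block_size} blocks of {block_size} bytes)")
    # -> 7.00 GiB (1833984 blocks of 4096 bytes)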
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) close
Dec 02 07:47:14 np0005541914.localdomain sudo[31798]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:14 np0005541914.localdomain sudo[32720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:47:14 np0005541914.localdomain sudo[32720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:14 np0005541914.localdomain sudo[32720]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:14 np0005541914.localdomain sudo[32735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 -- raw list --format json
Dec 02 07:47:14 np0005541914.localdomain sudo[32735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) close
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: starting osd.4 osd_data /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-4/journal
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: load: jerasure load: lrc 
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 02 07:47:14 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) close
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) close
Dec 02 07:47:15 np0005541914.localdomain podman[32798]: 
Dec 02 07:47:15 np0005541914.localdomain podman[32798]: 2025-12-02 07:47:15.127796508 +0000 UTC m=+0.072795839 container create 9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_snyder, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, ceph=True, architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=)
Dec 02 07:47:15 np0005541914.localdomain systemd[1]: Started libpod-conmon-9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710.scope.
Dec 02 07:47:15 np0005541914.localdomain podman[32798]: 2025-12-02 07:47:15.097251257 +0000 UTC m=+0.042250588 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:15 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:47:15 np0005541914.localdomain podman[32798]: 2025-12-02 07:47:15.227518216 +0000 UTC m=+0.172517547 container init 9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_snyder, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_BRANCH=main, vcs-type=git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, build-date=2025-11-26T19:44:28Z, distribution-scope=public, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 07:47:15 np0005541914.localdomain systemd[1]: tmp-crun.WJ4iBY.mount: Deactivated successfully.
Dec 02 07:47:15 np0005541914.localdomain hungry_snyder[32813]: 167 167
Dec 02 07:47:15 np0005541914.localdomain systemd[1]: libpod-9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710.scope: Deactivated successfully.
Dec 02 07:47:15 np0005541914.localdomain podman[32798]: 2025-12-02 07:47:15.253394685 +0000 UTC m=+0.198394016 container start 9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_snyder, vendor=Red Hat, Inc., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, release=1763362218, build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True)
Dec 02 07:47:15 np0005541914.localdomain podman[32798]: 2025-12-02 07:47:15.254383903 +0000 UTC m=+0.199383274 container attach 9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_snyder, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, release=1763362218, RELEASE=main, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, ceph=True)
Dec 02 07:47:15 np0005541914.localdomain podman[32798]: 2025-12-02 07:47:15.258008005 +0000 UTC m=+0.203007366 container died 9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_snyder, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
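The five podman messages above trace a single short-lived rhceph container through create, init, start, attach and died in roughly 0.13 s (m=+0.073 to m=+0.203), consistent with a cephadm-style helper invocation rather than a long-running daemon. Purely as a reading aid, the hypothetical Python sketch below pulls the event verb and podman's monotonic m=+ offset out of lines in this exact format and prints the spacing between events; the regular expression and script are assumptions tailored to these lines, not part of podman or cephadm.

    import re
    import sys

    # Matches podman journal lines of the form shown above, e.g.
    #   "... podman[32798]: 2025-12-02 07:47:15.127796508 +0000 UTC m=+0.072795839 container create 9ccdb68a... (image=...)"
    EVENT_RE = re.compile(
        r"podman\[\d+\]: .* m=\+(?P<offset>[0-9.]+) container (?P<verb>\w+) (?P<cid>[0-9a-f]{12,64})"
    )

    def container_events(lines):
        """Yield (offset_seconds, verb, container_id) for each container event line."""
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield float(m.group("offset")), m.group("verb"), m.group("cid")

    if __name__ == "__main__":
        events = sorted(container_events(sys.stdin), key=lambda e: e[0])
        for off, verb, cid in events:
            print(f"{off:10.6f}s  {verb:<7} {cid[:12]}")
        if len(events) >= 2:
            print(f"lifetime: {events[-1][0] - events[0][0]:.3f}s ({events[0][1]} -> {events[-1][1]})")

Fed the five container lines above on stdin, this would report create at about 0.073 s and died at about 0.203 s, a lifetime of roughly 0.130 s.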
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: osd.4:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
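The two mClock lines record the capacity parameters the scheduler derived for this OSD: a per-shard bandwidth of 157286400 bytes/s (150 MiB/s) and a per-IO cost of about 499322 bytes. Dividing the first by the second gives the IOPS the scheduler is effectively assuming for this rotational device, about 315, which lines up with Ceph's stock HDD IOPS assumption (osd_mclock_max_capacity_iops_hdd). The tiny Python check below only reproduces that arithmetic from the logged numbers; it is an illustration, not Ceph code.

    # Back-of-the-envelope check of the mClock figures logged above; the assumed
    # relationship is per-IO cost ~= per-shard bandwidth / per-shard IOPS capacity.
    bandwidth_per_shard = 157_286_400.0   # bytes/second, from the log (150 MiB/s)
    cost_per_io = 499_321.90              # bytes/io, from the log

    print(f"bandwidth per shard: {bandwidth_per_shard / 2**20:.0f} MiB/s")
    print(f"implied IOPS per shard: {bandwidth_per_shard / cost_per_io:.1f}")   # ~315.0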
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050348e00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluefs mount
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluefs mount shared_bdev_used = 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: RocksDB version: 7.9.2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Git sha 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: DB SUMMARY
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: DB Session ID:  WSVYTTET1R9PTPBU71LV
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: CURRENT file:  CURRENT
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: IDENTITY file:  IDENTITY
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                         Options.error_if_exists: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.create_if_missing: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                         Options.paranoid_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                                     Options.env: 0x5620505dcc40
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                                Options.info_log: 0x5620512e8740
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_file_opening_threads: 16
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                              Options.statistics: (nil)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.use_fsync: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.max_log_file_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                         Options.allow_fallocate: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.use_direct_reads: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.create_missing_column_families: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                              Options.db_log_dir: 
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                                 Options.wal_dir: db.wal
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.advise_random_on_open: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.write_buffer_manager: 0x562050332140
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                            Options.rate_limiter: (nil)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.unordered_write: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.row_cache: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                              Options.wal_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.allow_ingest_behind: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.two_write_queues: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.manual_wal_flush: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.wal_compression: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.atomic_flush: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.log_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.allow_data_in_errors: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.db_host_id: __hostname__
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.max_background_jobs: 4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.max_background_compactions: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.max_subcompactions: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.max_open_files: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.bytes_per_sync: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.max_background_flushes: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Compression algorithms supported:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kZSTD supported: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kXpressCompression supported: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kBZip2Compression supported: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kLZ4Compression supported: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kZlibCompression supported: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kLZ4HCCompression supported: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kSnappyCompression supported: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x562050320850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: 
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
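One number in the [default] column-family dump above ties back to an earlier line: the block_cache capacity of 483183820 bytes is exactly the kv share of the BlueStore cache reported at bluestore _set_cache_sizes (cache_size 1073741824 with kv 0.45), which suggests BlueStore hands its kv fraction of the cache budget to RocksDB as the block cache. The short Python sketch below simply replays that arithmetic; the variable names are illustrative, not Ceph option names.

    # Numbers taken straight from this log: BlueStore's cache budget and kv ratio,
    # and the RocksDB block_cache capacity printed in the column-family options.
    cache_size = 1_073_741_824   # bytes, bluestore _set_cache_sizes cache_size
    kv_ratio = 0.45              # bluestore kv share

    print(int(cache_size * kv_ratio))   # 483183820, matching block_cache capacity above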
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x562050320850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x562050320850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x562050320850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x562050320850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x562050320850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8900)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x562050320850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8b20)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5620503202d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8b20)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5620503202d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620512e8b20)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5620503202d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 565f4603-89cf-4617-b1e1-97bdb3afd91c
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635342679, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635342971, "job": 1, "event": "recovery_finished"}
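Aside: the EVENT_LOG_v1 entries above carry a plain JSON payload after the marker, so they can be checked mechanically. A minimal Python sketch, using the recovery_finished payload copied verbatim from the line above (everything else is illustrative), shows the embedded time_micros agrees with the journal timestamp of 07:47:15 UTC:

    import json
    from datetime import datetime, timezone

    # Payload copied from the EVENT_LOG_v1 "recovery_finished" line above.
    payload = '{"time_micros": 1764661635342971, "job": 1, "event": "recovery_finished"}'
    event = json.loads(payload)

    # time_micros is a Unix epoch in microseconds; convert to UTC.
    ts = datetime.fromtimestamp(event["time_micros"] / 1e6, tz=timezone.utc)
    print(event["event"], ts.isoformat())   # recovery_finished 2025-12-02T07:47:15.342971+00:00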
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
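Aside: the option string logged by _open_db is the comma-separated set BlueStore handed to RocksDB; on a stock deployment it normally mirrors the bluestore_rocksdb_options setting, though that origin is an assumption here. A small sketch that splits the string (copied from the line above) into key/value pairs, handy for comparing against the per-column-family Options dumps that follow:

    # Parse the comma-separated RocksDB option string logged by _open_db.
    # The string is copied verbatim from the log line above.
    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    options = dict(kv.split("=", 1) for kv in opts_str.split(","))
    for key, value in sorted(options.items()):
        print(f"{key:45s} {value}")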
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old nid_max 1025
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old blobid_max 10240
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta min_alloc_size 0x1000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: freelist init
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: freelist _read_cfg
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
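Aside: the allocator line reports its sizes in hex; a quick sanity check using only the values printed above shows the capacity matches the 7511998464-byte block device opened later in the log, and that only three 4 KiB blocks are currently allocated:

    # Values copied from the _init_alloc line above; nothing measured independently.
    capacity = 0x1bfc00000          # 7511998464 bytes
    free     = 0x1bfbfd000
    block    = 0x1000               # 4 KiB allocator block size

    print(capacity, capacity / 2**30)       # 7511998464  ~7.00 GiB
    print(capacity - free)                  # 12288 bytes allocated
    print((capacity - free) // block)       # 3 blocks of 4 KiB in use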
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluefs umount
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) close
Dec 02 07:47:15 np0005541914.localdomain podman[32818]: 2025-12-02 07:47:15.393473636 +0000 UTC m=+0.133828158 container remove 9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_snyder, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 07:47:15 np0005541914.localdomain systemd[1]: libpod-conmon-9ccdb68a2592a25b39210cd511c8bf0fbde1ea8308cd3888dec57d622a234710.scope: Deactivated successfully.
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bdev(0x562050349180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluefs mount
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluefs mount shared_bdev_used = 4718592
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: RocksDB version: 7.9.2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Git sha 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: DB SUMMARY
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: DB Session ID:  WSVYTTET1R9PTPBU71LU
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: CURRENT file:  CURRENT
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: IDENTITY file:  IDENTITY
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                         Options.error_if_exists: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.create_if_missing: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                         Options.paranoid_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                                     Options.env: 0x56205046e310
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                                Options.info_log: 0x5620512e9240
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_file_opening_threads: 16
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                              Options.statistics: (nil)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.use_fsync: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.max_log_file_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                         Options.allow_fallocate: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.use_direct_reads: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.create_missing_column_families: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                              Options.db_log_dir: 
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                                 Options.wal_dir: db.wal
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.advise_random_on_open: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.write_buffer_manager: 0x562050332140
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                            Options.rate_limiter: (nil)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.unordered_write: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.row_cache: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                              Options.wal_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.allow_ingest_behind: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.two_write_queues: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.manual_wal_flush: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.wal_compression: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.atomic_flush: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.log_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.allow_data_in_errors: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.db_host_id: __hostname__
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.max_background_jobs: 4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.max_background_compactions: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.max_subcompactions: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.max_open_files: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.bytes_per_sync: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.max_background_flushes: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Compression algorithms supported:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kZSTD supported: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kXpressCompression supported: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kBZip2Compression supported: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kLZ4Compression supported: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kZlibCompression supported: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kLZ4HCCompression supported: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         kSnappyCompression supported: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5620503202d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: 
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
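Aside: each column-family dump in this startup repeats the same table_factory settings and points at one shared block cache (block_cache: 0x5620503202d0) with a capacity of 483183820 bytes. Reading that figure as 45% of a 1 GiB cache budget is an inference from the arithmetic alone, not something the log states:

    # Shared block cache capacity copied from the table_factory dumps above.
    capacity = 483183820
    print(capacity / 2**20)          # ~460.8 MiB
    print(capacity / 2**30)          # ~0.45 of 1 GiB
    print(int(1 * 2**30 * 0.45))     # 483183820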
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5620503202d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5620503202d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5620503202d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5620503202d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5620503202d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain podman[33033]: 2025-12-02 07:47:15.620359472 +0000 UTC m=+0.093759346 container create 8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_khayyam, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e91e0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x5620503202d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e9240)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x562050321610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e9240)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x562050321610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:           Options.merge_operator: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5620503e9240)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x562050321610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.write_buffer_size: 16777216
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.max_write_buffer_number: 64
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.compression: LZ4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.num_levels: 7
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.bloom_locality: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                               Options.ttl: 2592000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                       Options.enable_blob_files: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                           Options.min_blob_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 565f4603-89cf-4617-b1e1-97bdb3afd91c
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635616423, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635622408, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661635, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "565f4603-89cf-4617-b1e1-97bdb3afd91c", "db_session_id": "WSVYTTET1R9PTPBU71LU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635645251, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661635, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "565f4603-89cf-4617-b1e1-97bdb3afd91c", "db_session_id": "WSVYTTET1R9PTPBU71LU", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 02 07:47:15 np0005541914.localdomain systemd[1]: Started libpod-conmon-8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125.scope.
Dec 02 07:47:15 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:47:15 np0005541914.localdomain podman[33033]: 2025-12-02 07:47:15.574272406 +0000 UTC m=+0.047672300 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635678773, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764661635, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "565f4603-89cf-4617-b1e1-97bdb3afd91c", "db_session_id": "WSVYTTET1R9PTPBU71LU", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 02 07:47:15 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5810abde662f821ee66832af087dbd454962fcba27c58cdd5ff6343d4ccdda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:15 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5810abde662f821ee66832af087dbd454962fcba27c58cdd5ff6343d4ccdda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:15 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f5810abde662f821ee66832af087dbd454962fcba27c58cdd5ff6343d4ccdda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:15 np0005541914.localdomain podman[33033]: 2025-12-02 07:47:15.713760533 +0000 UTC m=+0.187160377 container init 8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_khayyam, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764661635716303, "job": 1, "event": "recovery_finished"}
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 02 07:47:15 np0005541914.localdomain podman[33033]: 2025-12-02 07:47:15.741080478 +0000 UTC m=+0.214480342 container start 8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_khayyam, GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=)
Dec 02 07:47:15 np0005541914.localdomain podman[33033]: 2025-12-02 07:47:15.74139093 +0000 UTC m=+0.214790764 container attach 8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_khayyam, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, version=7, GIT_BRANCH=main, GIT_CLEAN=True, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4)
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562050388700
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: DB pointer 0x56205123fa00
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super from 4, latest 4
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super done
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                          Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x562050321610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x562050321610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x562050321610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.2 total, 0.2 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
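The indented block above is the RocksDB statistics dump that BlueStore's kv backend writes through the OSD log during startup. The same counters can be pulled on demand from a running OSD over its admin socket; a minimal sketch for osd.4 in this cephadm deployment, assuming jq is available inside the shell container (drop the pipe otherwise):

    # open a shell wired to osd.4's config and admin socket, dump perf counters, keep the rocksdb section
    sudo cephadm shell --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 --name osd.4 -- \
        ceph daemon osd.4 perf dump | jq '.rocksdb'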
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: _get_class not permitted to load lua
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: _get_class not permitted to load sdk
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: _get_class not permitted to load test_remote_reads
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: osd.4 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: osd.4 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: osd.4 0 load_pgs
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: osd.4 0 load_pgs opened 0 pgs
Dec 02 07:47:15 np0005541914.localdomain ceph-osd[32707]: osd.4 0 log_to_monitors true
Dec 02 07:47:15 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4[32703]: 2025-12-02T07:47:15.846+0000 7f36cff76a80 -1 osd.4 0 log_to_monitors true
Dec 02 07:47:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-452369b48fca2f2ac87b01de59b0b8d8712884bae06e23116539c12224dd8f96-merged.mount: Deactivated successfully.
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]: {
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:     "27399dc0-3412-47da-81e0-87f9f4a96daf": {
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:         "ceph_fsid": "c7c8e171-a193-56fb-95fa-8879fcfa7074",
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:         "osd_id": 1,
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:         "osd_uuid": "27399dc0-3412-47da-81e0-87f9f4a96daf",
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:         "type": "bluestore"
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:     },
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:     "e70bab01-7143-4db1-8b99-c97ca4b22476": {
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:         "ceph_fsid": "c7c8e171-a193-56fb-95fa-8879fcfa7074",
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:         "osd_id": 4,
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:         "osd_uuid": "e70bab01-7143-4db1-8b99-c97ca4b22476",
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:         "type": "bluestore"
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]:     }
Dec 02 07:47:16 np0005541914.localdomain sleepy_khayyam[33230]: }
Dec 02 07:47:16 np0005541914.localdomain systemd[1]: libpod-8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125.scope: Deactivated successfully.
Dec 02 07:47:16 np0005541914.localdomain podman[33033]: 2025-12-02 07:47:16.276881987 +0000 UTC m=+0.750281931 container died 8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_khayyam, version=7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, distribution-scope=public)
Dec 02 07:47:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-7f5810abde662f821ee66832af087dbd454962fcba27c58cdd5ff6343d4ccdda-merged.mount: Deactivated successfully.
Dec 02 07:47:16 np0005541914.localdomain podman[33300]: 2025-12-02 07:47:16.418066822 +0000 UTC m=+0.132669534 container remove 8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_khayyam, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 07:47:16 np0005541914.localdomain systemd[1]: libpod-conmon-8fe6ce595256c406e84c62dfba6a65b82019cdff56650d85f36c19a600280125.scope: Deactivated successfully.
Dec 02 07:47:16 np0005541914.localdomain sudo[32735]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:16 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 02 07:47:16 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[32707]: osd.4 0 done with init, starting boot process
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[32707]: osd.4 0 start_boot
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[32707]: osd.4 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[32707]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[32707]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[32707]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[32707]: osd.4 0  bench count 12288000 bsize 4 KiB
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[31770]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 15.718 iops: 4023.857 elapsed_sec: 0.746
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [WRN] : OSD bench result of 4023.857035 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
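The warning above is mclock's capacity probe rejecting an implausibly high bench result (the OSDs here sit on loop-backed LVs) and keeping the 315 IOPS default, which is the hdd-class default. The message itself recommends measuring the device externally (e.g. with fio, against hardware that is not serving live OSD data) and then overriding osd_mclock_max_capacity_iops_[hdd|ssd]. A minimal sketch of that override, where 4000 is a placeholder rather than a measured figure:

    # override the IOPS capacity mclock assumes for osd.1 (use the _ssd variant for ssd-class devices)
    sudo cephadm shell -- ceph config set osd.1 osd_mclock_max_capacity_iops_hdd 4000
    # confirm the effective value
    sudo cephadm shell -- ceph config get osd.1 osd_mclock_max_capacity_iops_hdd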
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[31770]: osd.1 0 waiting for initial osdmap
Dec 02 07:47:17 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1[31766]: 2025-12-02T07:47:17.446+0000 7f83199b6640 -1 osd.1 0 waiting for initial osdmap
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[31770]: osd.1 13 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[31770]: osd.1 13 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[31770]: osd.1 13 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[31770]: osd.1 13 check_osdmap_features require_osd_release unknown -> reef
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[31770]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[31770]: osd.1 13 set_numa_affinity not setting numa affinity
Dec 02 07:47:17 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-1[31766]: 2025-12-02T07:47:17.486+0000 7f83147cb640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 02 07:47:17 np0005541914.localdomain ceph-osd[31770]: osd.1 13 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Dec 02 07:47:17 np0005541914.localdomain sudo[33316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:47:17 np0005541914.localdomain sudo[33316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:17 np0005541914.localdomain sudo[33316]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:17 np0005541914.localdomain sudo[33331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:47:17 np0005541914.localdomain sudo[33331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:17 np0005541914.localdomain sudo[33331]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:17 np0005541914.localdomain sudo[33346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 07:47:17 np0005541914.localdomain sudo[33346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:18 np0005541914.localdomain ceph-osd[31770]: osd.1 14 state: booting -> active
Dec 02 07:47:18 np0005541914.localdomain podman[33429]: 2025-12-02 07:47:18.677268061 +0000 UTC m=+0.108972069 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, build-date=2025-11-26T19:44:28Z, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, architecture=x86_64)
Dec 02 07:47:18 np0005541914.localdomain podman[33429]: 2025-12-02 07:47:18.782765075 +0000 UTC m=+0.214469083 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 02 07:47:19 np0005541914.localdomain sudo[33346]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:19 np0005541914.localdomain sudo[33494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:47:19 np0005541914.localdomain sudo[33494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:19 np0005541914.localdomain sudo[33494]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:19 np0005541914.localdomain sudo[33509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 07:47:19 np0005541914.localdomain sudo[33509]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:19 np0005541914.localdomain sudo[33509]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:20 np0005541914.localdomain sudo[33555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:47:20 np0005541914.localdomain sudo[33555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:20 np0005541914.localdomain sudo[33555]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:20 np0005541914.localdomain sshd[26267]: fatal: Timeout before authentication for 220.190.3.100 port 20784
Dec 02 07:47:20 np0005541914.localdomain sudo[33570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 -- inventory --format=json-pretty --filter-for-batch
Dec 02 07:47:20 np0005541914.localdomain sudo[33570]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:20 np0005541914.localdomain ceph-osd[31770]: osd.1 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 02 07:47:20 np0005541914.localdomain ceph-osd[31770]: osd.1 16 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 02 07:47:20 np0005541914.localdomain ceph-osd[31770]: osd.1 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 02 07:47:20 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 07:47:20 np0005541914.localdomain podman[33626]: 
Dec 02 07:47:20 np0005541914.localdomain podman[33626]: 2025-12-02 07:47:20.660909277 +0000 UTC m=+0.049935108 container create 52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_brahmagupta, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, name=rhceph)
Dec 02 07:47:20 np0005541914.localdomain systemd[1]: Started libpod-conmon-52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb.scope.
Dec 02 07:47:20 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:47:20 np0005541914.localdomain podman[33626]: 2025-12-02 07:47:20.638565376 +0000 UTC m=+0.027591207 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:20 np0005541914.localdomain podman[33626]: 2025-12-02 07:47:20.75818068 +0000 UTC m=+0.147206531 container init 52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_brahmagupta, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=1763362218, description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 07:47:20 np0005541914.localdomain wonderful_brahmagupta[33641]: 167 167
Dec 02 07:47:20 np0005541914.localdomain systemd[1]: libpod-52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb.scope: Deactivated successfully.
Dec 02 07:47:20 np0005541914.localdomain podman[33626]: 2025-12-02 07:47:20.777838226 +0000 UTC m=+0.166864087 container start 52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_brahmagupta, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, version=7, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 07:47:20 np0005541914.localdomain podman[33626]: 2025-12-02 07:47:20.778351596 +0000 UTC m=+0.167377447 container attach 52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_brahmagupta, distribution-scope=public, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, architecture=x86_64, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, ceph=True, RELEASE=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 07:47:20 np0005541914.localdomain podman[33626]: 2025-12-02 07:47:20.780348844 +0000 UTC m=+0.169374685 container died 52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_brahmagupta, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 07:47:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-82a02638e2b1e03fa57eaeda5b3473b19a5a5e24ad16cfd971266b3602f13e9e-merged.mount: Deactivated successfully.
Dec 02 07:47:20 np0005541914.localdomain podman[33646]: 2025-12-02 07:47:20.894149011 +0000 UTC m=+0.104503516 container remove 52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_brahmagupta, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vendor=Red Hat, Inc., ceph=True, vcs-type=git, GIT_CLEAN=True, release=1763362218)
Dec 02 07:47:20 np0005541914.localdomain systemd[1]: libpod-conmon-52b761a5e9884a2012318a2f9ce185adc5f511014bb01e3b7c70bd5548d74ecb.scope: Deactivated successfully.
Dec 02 07:47:21 np0005541914.localdomain ceph-osd[32707]: osd.4 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.144 iops: 5924.796 elapsed_sec: 0.506
Dec 02 07:47:21 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [WRN] : OSD bench result of 5924.795610 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.4. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 02 07:47:21 np0005541914.localdomain ceph-osd[32707]: osd.4 0 waiting for initial osdmap
Dec 02 07:47:21 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4[32703]: 2025-12-02T07:47:21.042+0000 7f36cc70a640 -1 osd.4 0 waiting for initial osdmap
Dec 02 07:47:21 np0005541914.localdomain ceph-osd[32707]: osd.4 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 02 07:47:21 np0005541914.localdomain ceph-osd[32707]: osd.4 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 02 07:47:21 np0005541914.localdomain ceph-osd[32707]: osd.4 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 02 07:47:21 np0005541914.localdomain ceph-osd[32707]: osd.4 16 check_osdmap_features require_osd_release unknown -> reef
Dec 02 07:47:21 np0005541914.localdomain ceph-osd[32707]: osd.4 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 02 07:47:21 np0005541914.localdomain ceph-osd[32707]: osd.4 16 set_numa_affinity not setting numa affinity
Dec 02 07:47:21 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-osd-4[32703]: 2025-12-02T07:47:21.062+0000 7f36c751f640 -1 osd.4 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 02 07:47:21 np0005541914.localdomain ceph-osd[32707]: osd.4 16 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Dec 02 07:47:21 np0005541914.localdomain podman[33665]: 
Dec 02 07:47:21 np0005541914.localdomain podman[33665]: 2025-12-02 07:47:21.12370174 +0000 UTC m=+0.082770337 container create b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_proskuriakova, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, version=7, GIT_BRANCH=main, architecture=x86_64, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 07:47:21 np0005541914.localdomain systemd[1]: Started libpod-conmon-b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb.scope.
Dec 02 07:47:21 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 07:47:21 np0005541914.localdomain podman[33665]: 2025-12-02 07:47:21.095930198 +0000 UTC m=+0.054998805 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 07:47:21 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e1b53ebe299577311c4fbd12fbe8d40f40acbf6b438cafbac5e8299f13e27c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:21 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 17 pg[1.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=17) [1,3] r=0 lpr=17 pi=[16,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [1] -> [1,3], acting [1] -> [1,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 07:47:21 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 17 pg[1.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=17) [1,3] r=0 lpr=17 pi=[16,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 07:47:21 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e1b53ebe299577311c4fbd12fbe8d40f40acbf6b438cafbac5e8299f13e27c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:21 np0005541914.localdomain ceph-osd[32707]: osd.4 17 state: booting -> active
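At this point both OSDs started in this section have gone booting -> active (osd.1 at 07:47:18, osd.4 here). A quick cluster-side confirmation, assuming admin keyring access via cephadm shell on this host:

    # verify both OSDs are up/in and see where CRUSH places them
    sudo cephadm shell -- ceph osd tree
    sudo cephadm shell -- ceph -s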
Dec 02 07:47:21 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3e1b53ebe299577311c4fbd12fbe8d40f40acbf6b438cafbac5e8299f13e27c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 07:47:21 np0005541914.localdomain podman[33665]: 2025-12-02 07:47:21.237938044 +0000 UTC m=+0.197006651 container init b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_proskuriakova, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, RELEASE=main, version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 07:47:21 np0005541914.localdomain podman[33665]: 2025-12-02 07:47:21.24887589 +0000 UTC m=+0.207944487 container start b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_proskuriakova, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_BRANCH=main, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 07:47:21 np0005541914.localdomain podman[33665]: 2025-12-02 07:47:21.249154361 +0000 UTC m=+0.208222978 container attach b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_proskuriakova, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]: [
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:     {
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:         "available": false,
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:         "ceph_device": false,
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:         "lsm_data": {},
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:         "lvs": [],
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:         "path": "/dev/sr0",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:         "rejected_reasons": [
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "Has a FileSystem",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "Insufficient space (<5GB)"
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:         ],
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:         "sys_api": {
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "actuators": null,
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "device_nodes": "sr0",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "human_readable_size": "482.00 KB",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "id_bus": "ata",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "model": "QEMU DVD-ROM",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "nr_requests": "2",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "partitions": {},
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "path": "/dev/sr0",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "removable": "1",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "rev": "2.5+",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "ro": "0",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "rotational": "1",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "sas_address": "",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "sas_device_handle": "",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "scheduler_mode": "mq-deadline",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "sectors": 0,
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "sectorsize": "2048",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "size": 493568.0,
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "support_discard": "0",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "type": "disk",
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:             "vendor": "QEMU"
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:         }
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]:     }
Dec 02 07:47:22 np0005541914.localdomain distracted_proskuriakova[33682]: ]
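The JSON above is the device inventory produced by the ceph-volume inventory call logged at 07:47:20; on this host only /dev/sr0 passes the batch filter and it is rejected. The report can be regenerated and narrowed to usable devices; a sketch assuming the cephadm package is installed on the host (the orchestrator here invokes a copied cephadm.<hash> script instead) and jq is available:

    # re-run the inventory the way cephadm does, then keep only devices reported as available
    sudo cephadm ceph-volume --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 -- inventory --format=json-pretty \
        | jq '[.[] | select(.available == true) | .path]'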
Dec 02 07:47:22 np0005541914.localdomain systemd[1]: libpod-b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb.scope: Deactivated successfully.
Dec 02 07:47:22 np0005541914.localdomain podman[33665]: 2025-12-02 07:47:22.084055492 +0000 UTC m=+1.043124059 container died b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_proskuriakova, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, version=7, GIT_CLEAN=True)
Dec 02 07:47:22 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a3e1b53ebe299577311c4fbd12fbe8d40f40acbf6b438cafbac5e8299f13e27c-merged.mount: Deactivated successfully.
Dec 02 07:47:22 np0005541914.localdomain podman[34995]: 2025-12-02 07:47:22.147410622 +0000 UTC m=+0.056769434 container remove b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=distracted_proskuriakova, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, release=1763362218, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7)
Dec 02 07:47:22 np0005541914.localdomain systemd[1]: libpod-conmon-b4d0c4ab04985b50a04b0cfb92b19fc9d02fe214dbdd0680c8263e85c3a1a1eb.scope: Deactivated successfully.
Dec 02 07:47:22 np0005541914.localdomain sudo[33570]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:22 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 18 pg[1.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=18) [1,5,3] r=0 lpr=18 pi=[16,18)/0 crt=0'0 mlcod 0'0 unknown mbc={}] start_peering_interval up [1,3] -> [1,5,3], acting [1,3] -> [1,5,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 07:47:22 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 18 pg[1.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=18) [1,5,3] r=0 lpr=18 pi=[16,18)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 07:47:23 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 19 pg[1.0( empty local-lis/les=18/19 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=18) [1,5,3] r=0 lpr=18 pi=[16,18)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
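pg 1.0 has finished peering across [1,5,3] and activated. The per-PG view of that state can be queried directly; a sketch assuming the same cephadm shell access as above:

    # list the placement groups of pool 1, then query pg 1.0 in detail
    sudo cephadm shell -- ceph pg ls 1
    sudo cephadm shell -- ceph pg 1.0 query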
Dec 02 07:47:24 np0005541914.localdomain sudo[35009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:47:24 np0005541914.localdomain sudo[35009]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:24 np0005541914.localdomain sudo[35009]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:30 np0005541914.localdomain sudo[35024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:47:30 np0005541914.localdomain sudo[35024]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:30 np0005541914.localdomain sudo[35024]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:30 np0005541914.localdomain sudo[35039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 07:47:30 np0005541914.localdomain sudo[35039]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:31 np0005541914.localdomain systemd[26272]: Starting Mark boot as successful...
Dec 02 07:47:31 np0005541914.localdomain podman[35122]: 2025-12-02 07:47:31.506152792 +0000 UTC m=+0.105314878 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 07:47:31 np0005541914.localdomain systemd[26272]: Finished Mark boot as successful.
Dec 02 07:47:31 np0005541914.localdomain podman[35122]: 2025-12-02 07:47:31.610365814 +0000 UTC m=+0.209527930 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, version=7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public)
Dec 02 07:47:31 np0005541914.localdomain sudo[35039]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:32 np0005541914.localdomain sudo[35192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:47:32 np0005541914.localdomain sudo[35192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:47:32 np0005541914.localdomain sudo[35192]: pam_unix(sudo:session): session closed for user root
Dec 02 07:47:36 np0005541914.localdomain sshd[35207]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:47:37 np0005541914.localdomain sshd[35209]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:47:37 np0005541914.localdomain sshd[35207]: Received disconnect from 182.253.156.173 port 42612:11: Bye Bye [preauth]
Dec 02 07:47:37 np0005541914.localdomain sshd[35207]: Disconnected from authenticating user root 182.253.156.173 port 42612 [preauth]
Dec 02 07:47:38 np0005541914.localdomain sshd[35209]: Invalid user zookeeper from 103.52.115.25 port 37052
Dec 02 07:47:39 np0005541914.localdomain sshd[35209]: Received disconnect from 103.52.115.25 port 37052:11: Bye Bye [preauth]
Dec 02 07:47:39 np0005541914.localdomain sshd[35209]: Disconnected from invalid user zookeeper 103.52.115.25 port 37052 [preauth]
Dec 02 07:48:15 np0005541914.localdomain sshd[35211]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:48:15 np0005541914.localdomain sshd[35211]: Invalid user sol from 45.148.10.240 port 47260
Dec 02 07:48:16 np0005541914.localdomain sshd[35211]: Connection closed by invalid user sol 45.148.10.240 port 47260 [preauth]
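The sshd lines from 220.190.3.100, 182.253.156.173, 103.52.115.25 and 45.148.10.240 are failed pre-auth probes against the node's public SSH port, unrelated to the Ceph deployment. A quick triage of how many such attempts landed today and from where, assuming journald is the only sshd log target as on this host:

    # count today's pre-auth failures and summarize the source addresses seen
    sudo journalctl -u sshd --since today | grep -cE 'Invalid user|Timeout before authentication'
    sudo journalctl -u sshd --since today | grep -oE 'from ([0-9]{1,3}\.){3}[0-9]{1,3}' | sort | uniq -c | sort -rn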
Dec 02 07:48:32 np0005541914.localdomain sudo[35214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:48:32 np0005541914.localdomain sudo[35214]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:48:32 np0005541914.localdomain sudo[35214]: pam_unix(sudo:session): session closed for user root
Dec 02 07:48:32 np0005541914.localdomain sudo[35229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 07:48:32 np0005541914.localdomain sudo[35229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:48:33 np0005541914.localdomain podman[35314]: 2025-12-02 07:48:33.51165103 +0000 UTC m=+0.091257036 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, architecture=x86_64, RELEASE=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.buildah.version=1.41.4, name=rhceph, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 07:48:33 np0005541914.localdomain podman[35314]: 2025-12-02 07:48:33.616325567 +0000 UTC m=+0.195931573 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-type=git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 07:48:33 np0005541914.localdomain sudo[35229]: pam_unix(sudo:session): session closed for user root
Dec 02 07:48:34 np0005541914.localdomain sudo[35382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:48:34 np0005541914.localdomain sudo[35382]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:48:34 np0005541914.localdomain sudo[35382]: pam_unix(sudo:session): session closed for user root
Dec 02 07:48:34 np0005541914.localdomain sudo[35397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 07:48:34 np0005541914.localdomain sudo[35397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:48:34 np0005541914.localdomain sudo[35397]: pam_unix(sudo:session): session closed for user root
Dec 02 07:48:35 np0005541914.localdomain sudo[35444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:48:35 np0005541914.localdomain sudo[35444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:48:35 np0005541914.localdomain sudo[35444]: pam_unix(sudo:session): session closed for user root
Dec 02 07:48:42 np0005541914.localdomain sshd[24759]: Received disconnect from 192.168.122.100 port 55506:11: disconnected by user
Dec 02 07:48:42 np0005541914.localdomain sshd[24759]: Disconnected from user zuul 192.168.122.100 port 55506
Dec 02 07:48:42 np0005541914.localdomain sshd[24756]: pam_unix(sshd:session): session closed for user zuul
Dec 02 07:48:42 np0005541914.localdomain systemd-logind[760]: Session 13 logged out. Waiting for processes to exit.
Dec 02 07:48:42 np0005541914.localdomain systemd[1]: session-13.scope: Deactivated successfully.
Dec 02 07:48:42 np0005541914.localdomain systemd[1]: session-13.scope: Consumed 22.531s CPU time.
Dec 02 07:48:42 np0005541914.localdomain systemd-logind[760]: Removed session 13.
Dec 02 07:48:50 np0005541914.localdomain sshd[35459]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:48:51 np0005541914.localdomain sshd[35459]: Invalid user debian from 182.253.156.173 port 59226
Dec 02 07:48:51 np0005541914.localdomain sshd[35459]: Received disconnect from 182.253.156.173 port 59226:11: Bye Bye [preauth]
Dec 02 07:48:51 np0005541914.localdomain sshd[35459]: Disconnected from invalid user debian 182.253.156.173 port 59226 [preauth]
Dec 02 07:48:56 np0005541914.localdomain sshd[35461]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:48:57 np0005541914.localdomain sshd[35461]: Invalid user ivan from 103.52.115.25 port 37292
Dec 02 07:48:58 np0005541914.localdomain sshd[35461]: Received disconnect from 103.52.115.25 port 37292:11: Bye Bye [preauth]
Dec 02 07:48:58 np0005541914.localdomain sshd[35461]: Disconnected from invalid user ivan 103.52.115.25 port 37292 [preauth]
Dec 02 07:49:35 np0005541914.localdomain sudo[35463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:49:35 np0005541914.localdomain sudo[35463]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:49:35 np0005541914.localdomain sudo[35463]: pam_unix(sudo:session): session closed for user root
Dec 02 07:49:35 np0005541914.localdomain sudo[35478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 07:49:35 np0005541914.localdomain sudo[35478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:49:36 np0005541914.localdomain sudo[35478]: pam_unix(sudo:session): session closed for user root
Dec 02 07:49:36 np0005541914.localdomain sudo[35524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:49:36 np0005541914.localdomain sudo[35524]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:49:36 np0005541914.localdomain sudo[35524]: pam_unix(sudo:session): session closed for user root
Dec 02 07:50:02 np0005541914.localdomain sshd[35539]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:50:04 np0005541914.localdomain sshd[35539]: Received disconnect from 182.253.156.173 port 39588:11: Bye Bye [preauth]
Dec 02 07:50:04 np0005541914.localdomain sshd[35539]: Disconnected from authenticating user root 182.253.156.173 port 39588 [preauth]
Dec 02 07:50:11 np0005541914.localdomain sshd[35541]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:50:12 np0005541914.localdomain sshd[35541]: Invalid user scan from 103.52.115.25 port 48112
Dec 02 07:50:13 np0005541914.localdomain sshd[35541]: Received disconnect from 103.52.115.25 port 48112:11: Bye Bye [preauth]
Dec 02 07:50:13 np0005541914.localdomain sshd[35541]: Disconnected from invalid user scan 103.52.115.25 port 48112 [preauth]
Dec 02 07:50:17 np0005541914.localdomain sshd[35543]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:50:17 np0005541914.localdomain sshd[35543]: Invalid user sol from 45.148.10.240 port 42052
Dec 02 07:50:17 np0005541914.localdomain sshd[35543]: Connection closed by invalid user sol 45.148.10.240 port 42052 [preauth]
Dec 02 07:50:36 np0005541914.localdomain sudo[35545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:50:36 np0005541914.localdomain sudo[35545]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:50:36 np0005541914.localdomain sudo[35545]: pam_unix(sudo:session): session closed for user root
Dec 02 07:50:37 np0005541914.localdomain sudo[35560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 07:50:37 np0005541914.localdomain sudo[35560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:50:37 np0005541914.localdomain sudo[35560]: pam_unix(sudo:session): session closed for user root
Dec 02 07:50:38 np0005541914.localdomain sudo[35607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:50:38 np0005541914.localdomain sudo[35607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:50:38 np0005541914.localdomain sudo[35607]: pam_unix(sudo:session): session closed for user root
Dec 02 07:50:54 np0005541914.localdomain systemd[26272]: Created slice User Background Tasks Slice.
Dec 02 07:50:54 np0005541914.localdomain systemd[26272]: Starting Cleanup of User's Temporary Files and Directories...
Dec 02 07:50:54 np0005541914.localdomain systemd[26272]: Finished Cleanup of User's Temporary Files and Directories.
Dec 02 07:51:17 np0005541914.localdomain sshd[35623]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:51:19 np0005541914.localdomain sshd[35623]: Invalid user exx from 182.253.156.173 port 34704
Dec 02 07:51:19 np0005541914.localdomain sshd[35623]: Received disconnect from 182.253.156.173 port 34704:11: Bye Bye [preauth]
Dec 02 07:51:19 np0005541914.localdomain sshd[35623]: Disconnected from invalid user exx 182.253.156.173 port 34704 [preauth]
Dec 02 07:51:27 np0005541914.localdomain sshd[35625]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:51:29 np0005541914.localdomain sshd[35625]: Received disconnect from 103.52.115.25 port 51416:11: Bye Bye [preauth]
Dec 02 07:51:29 np0005541914.localdomain sshd[35625]: Disconnected from authenticating user root 103.52.115.25 port 51416 [preauth]
Dec 02 07:51:38 np0005541914.localdomain sudo[35627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:51:38 np0005541914.localdomain sudo[35627]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:51:38 np0005541914.localdomain sudo[35627]: pam_unix(sudo:session): session closed for user root
Dec 02 07:51:38 np0005541914.localdomain sudo[35642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 07:51:38 np0005541914.localdomain sudo[35642]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:51:39 np0005541914.localdomain sudo[35642]: pam_unix(sudo:session): session closed for user root
Dec 02 07:51:39 np0005541914.localdomain sudo[35688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:51:39 np0005541914.localdomain sudo[35688]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:51:39 np0005541914.localdomain sudo[35688]: pam_unix(sudo:session): session closed for user root
Dec 02 07:51:44 np0005541914.localdomain sshd[35703]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:51:45 np0005541914.localdomain sshd[35703]: Received disconnect from 43.225.159.111 port 45432:11:  [preauth]
Dec 02 07:51:45 np0005541914.localdomain sshd[35703]: Disconnected from authenticating user root 43.225.159.111 port 45432 [preauth]
Dec 02 07:52:06 np0005541914.localdomain sshd[35705]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:52:06 np0005541914.localdomain sshd[35705]: Accepted publickey for zuul from 192.168.122.100 port 60334 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 07:52:06 np0005541914.localdomain systemd-logind[760]: New session 27 of user zuul.
Dec 02 07:52:06 np0005541914.localdomain systemd[1]: Started Session 27 of User zuul.
Dec 02 07:52:06 np0005541914.localdomain sshd[35705]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 07:52:06 np0005541914.localdomain sudo[35751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dgaerdgqsjqnyrccnuepoxrmkaunotfm ; /usr/bin/python3
Dec 02 07:52:06 np0005541914.localdomain sudo[35751]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:52:06 np0005541914.localdomain python3[35753]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 02 07:52:06 np0005541914.localdomain sudo[35751]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:07 np0005541914.localdomain sudo[35796]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oruanmyttsrsofsmicufzkdgxfykwlrg ; /usr/bin/python3
Dec 02 07:52:07 np0005541914.localdomain sudo[35796]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:52:07 np0005541914.localdomain python3[35798]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 07:52:07 np0005541914.localdomain sudo[35796]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:08 np0005541914.localdomain sudo[35816]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yzvnokeoygardsqdrxdedyyzkeabkslr ; /usr/bin/python3
Dec 02 07:52:08 np0005541914.localdomain sudo[35816]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:52:08 np0005541914.localdomain python3[35818]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541914.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 02 07:52:08 np0005541914.localdomain useradd[35820]: new group: name=tripleo-admin, GID=1003
Dec 02 07:52:08 np0005541914.localdomain useradd[35820]: new user: name=tripleo-admin, UID=1003, GID=1003, home=/home/tripleo-admin, shell=/bin/bash, from=none
Dec 02 07:52:08 np0005541914.localdomain sudo[35816]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:08 np0005541914.localdomain sudo[35872]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zlzxouxdouvqzqugsaxbrlfzthtagbjc ; /usr/bin/python3
Dec 02 07:52:08 np0005541914.localdomain sudo[35872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:52:08 np0005541914.localdomain python3[35874]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:52:08 np0005541914.localdomain sudo[35872]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:09 np0005541914.localdomain sudo[35915]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dopuuonovteloeglcbqiviqrxktehxia ; /usr/bin/python3
Dec 02 07:52:09 np0005541914.localdomain sudo[35915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:52:09 np0005541914.localdomain python3[35917]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764661928.6526706-65484-123595612237668/source _original_basename=tmps7tvav1t follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:52:09 np0005541914.localdomain sudo[35915]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:09 np0005541914.localdomain sudo[35945]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ebdfihqwsiapqlmlwajnbcotkswisagg ; /usr/bin/python3
Dec 02 07:52:09 np0005541914.localdomain sudo[35945]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:52:09 np0005541914.localdomain python3[35947]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:52:09 np0005541914.localdomain sudo[35945]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:09 np0005541914.localdomain sudo[35961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xhuicdbdqtfebkqbotejwdnodfktikht ; /usr/bin/python3
Dec 02 07:52:09 np0005541914.localdomain sudo[35961]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:52:10 np0005541914.localdomain python3[35963]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:52:10 np0005541914.localdomain sudo[35961]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:10 np0005541914.localdomain sudo[35977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-prlczuygztmebamooeteaveuimonthgv ; /usr/bin/python3
Dec 02 07:52:10 np0005541914.localdomain sudo[35977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:52:10 np0005541914.localdomain python3[35979]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:52:10 np0005541914.localdomain sudo[35977]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:11 np0005541914.localdomain sudo[35993]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mmasurwtalxblidwmxyrgircuenruzei ; /usr/bin/python3
Dec 02 07:52:11 np0005541914.localdomain sudo[35993]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 07:52:11 np0005541914.localdomain python3[35995]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfcGXFPS+XIPHLw+7WTk1crQnJj1F7l/bATNqEM8HqdPREfaSIeF883HXh8Bv+rj9cjcgSPu+200+1SEsq35V+19mPwwkoxgdhfQu8jGk7vv17tL7k61zl9rWne61hn/7PnFptl+SBaMvOq/9ZdnPuMzb1YBTWbKm6kC3RPkgDUOa/BER5PJh1E6x6wYj1wRGMwVREczSSv+66aA5tTRelsFh16OXZXpq4ddoi7OeuimE3lWuMAHorxzJwF5AN+gPTgKYRkMwbMMHU4nPx7TXt5G3zjqWhmos08Xgdl+lPNHY5i463T96l4hGiycZKO4FOCq0ZMzldYkovXnyZi1CjSYUDcEn+EHIRJyZaK9ZJlJ1no5HVdwv1rwVMw4KkpZvH7HBh/iX47Wsi4qxK+L3X5hwZ7s6iSpNWeEMT5CLZsiDCkrdideFnZ8kW2jgnNIV0h+pUPISFfl1j03bjS9fHJjgl4BndVBxRJZJQf8Szyjx5WcIyBUidtYPnHzSLbmk= zuul-build-sshkey
                                                          regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:52:11 np0005541914.localdomain sudo[35993]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:11 np0005541914.localdomain python3[36009]: ansible-ping Invoked with data=pong
Dec 02 07:52:22 np0005541914.localdomain sshd[36010]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:52:22 np0005541914.localdomain sshd[36010]: Accepted publickey for tripleo-admin from 192.168.122.100 port 55594 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 07:52:22 np0005541914.localdomain systemd-logind[760]: New session 28 of user tripleo-admin.
Dec 02 07:52:22 np0005541914.localdomain systemd[1]: Created slice User Slice of UID 1003.
Dec 02 07:52:22 np0005541914.localdomain systemd[1]: Starting User Runtime Directory /run/user/1003...
Dec 02 07:52:22 np0005541914.localdomain systemd[1]: Finished User Runtime Directory /run/user/1003.
Dec 02 07:52:22 np0005541914.localdomain systemd[1]: Starting User Manager for UID 1003...
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: pam_unix(systemd-user:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Queued start job for default target Main User Target.
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Created slice User Application Slice.
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Started Daily Cleanup of User's Temporary Directories.
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Reached target Paths.
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Reached target Timers.
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Starting D-Bus User Message Bus Socket...
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Starting Create User's Volatile Files and Directories...
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Listening on D-Bus User Message Bus Socket.
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Reached target Sockets.
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Finished Create User's Volatile Files and Directories.
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Reached target Basic System.
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Reached target Main User Target.
Dec 02 07:52:22 np0005541914.localdomain systemd[36014]: Startup finished in 136ms.
Dec 02 07:52:22 np0005541914.localdomain systemd[1]: Started User Manager for UID 1003.
Dec 02 07:52:22 np0005541914.localdomain systemd[1]: Started Session 28 of User tripleo-admin.
Dec 02 07:52:22 np0005541914.localdomain sshd[36010]: pam_unix(sshd:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 02 07:52:23 np0005541914.localdomain sudo[36073]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gzhsqgauygaciglsiiiaimuwojmxqcqg ; /usr/bin/python3
Dec 02 07:52:23 np0005541914.localdomain sudo[36073]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:52:23 np0005541914.localdomain python3[36075]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 07:52:23 np0005541914.localdomain sudo[36073]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:24 np0005541914.localdomain sshd[36080]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:52:25 np0005541914.localdomain sshd[36080]: Invalid user sol from 45.148.10.240 port 59516
Dec 02 07:52:25 np0005541914.localdomain sshd[36080]: Connection closed by invalid user sol 45.148.10.240 port 59516 [preauth]
Dec 02 07:52:28 np0005541914.localdomain sudo[36095]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ajswklfajgcqhluwerpbnwbbdkxbhmec ; /usr/bin/python3
Dec 02 07:52:28 np0005541914.localdomain sudo[36095]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:52:28 np0005541914.localdomain python3[36097]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config
Dec 02 07:52:28 np0005541914.localdomain sudo[36095]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:29 np0005541914.localdomain sudo[36111]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vmeyxgwsitlwtbamkrdcqxeuqpkjhsvz ; /usr/bin/python3
Dec 02 07:52:29 np0005541914.localdomain sudo[36111]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:52:29 np0005541914.localdomain python3[36113]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Dec 02 07:52:29 np0005541914.localdomain sudo[36111]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:29 np0005541914.localdomain sudo[36159]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hmpcfvxmoywjpspildxltjzfymwhrgxg ; /usr/bin/python3
Dec 02 07:52:29 np0005541914.localdomain sudo[36159]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:52:29 np0005541914.localdomain python3[36161]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.yjfp0a4itmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:52:29 np0005541914.localdomain sudo[36159]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:30 np0005541914.localdomain sudo[36189]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nksajdvdgfqfrpcqiavevndcfozydzuv ; /usr/bin/python3
Dec 02 07:52:30 np0005541914.localdomain sudo[36189]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:52:30 np0005541914.localdomain python3[36191]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.yjfp0a4itmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:52:30 np0005541914.localdomain sudo[36189]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:31 np0005541914.localdomain sudo[36205]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kyefgpbwkkqwalbdjpdtwwjkpqaecuxt ; /usr/bin/python3
Dec 02 07:52:31 np0005541914.localdomain sudo[36205]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:52:31 np0005541914.localdomain python3[36207]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.yjfp0a4itmphosts insertbefore=BOF block=172.17.0.106 np0005541912.localdomain np0005541912
                                                         172.18.0.106 np0005541912.storage.localdomain np0005541912.storage
                                                         172.20.0.106 np0005541912.storagemgmt.localdomain np0005541912.storagemgmt
                                                         172.17.0.106 np0005541912.internalapi.localdomain np0005541912.internalapi
                                                         172.19.0.106 np0005541912.tenant.localdomain np0005541912.tenant
                                                         192.168.122.106 np0005541912.ctlplane.localdomain np0005541912.ctlplane
                                                         172.17.0.107 np0005541913.localdomain np0005541913
                                                         172.18.0.107 np0005541913.storage.localdomain np0005541913.storage
                                                         172.20.0.107 np0005541913.storagemgmt.localdomain np0005541913.storagemgmt
                                                         172.17.0.107 np0005541913.internalapi.localdomain np0005541913.internalapi
                                                         172.19.0.107 np0005541913.tenant.localdomain np0005541913.tenant
                                                         192.168.122.107 np0005541913.ctlplane.localdomain np0005541913.ctlplane
                                                         172.17.0.108 np0005541914.localdomain np0005541914
                                                         172.18.0.108 np0005541914.storage.localdomain np0005541914.storage
                                                         172.20.0.108 np0005541914.storagemgmt.localdomain np0005541914.storagemgmt
                                                         172.17.0.108 np0005541914.internalapi.localdomain np0005541914.internalapi
                                                         172.19.0.108 np0005541914.tenant.localdomain np0005541914.tenant
                                                         192.168.122.108 np0005541914.ctlplane.localdomain np0005541914.ctlplane
                                                         172.17.0.103 np0005541909.localdomain np0005541909
                                                         172.18.0.103 np0005541909.storage.localdomain np0005541909.storage
                                                         172.20.0.103 np0005541909.storagemgmt.localdomain np0005541909.storagemgmt
                                                         172.17.0.103 np0005541909.internalapi.localdomain np0005541909.internalapi
                                                         172.19.0.103 np0005541909.tenant.localdomain np0005541909.tenant
                                                         192.168.122.103 np0005541909.ctlplane.localdomain np0005541909.ctlplane
                                                         172.17.0.104 np0005541910.localdomain np0005541910
                                                         172.18.0.104 np0005541910.storage.localdomain np0005541910.storage
                                                         172.20.0.104 np0005541910.storagemgmt.localdomain np0005541910.storagemgmt
                                                         172.17.0.104 np0005541910.internalapi.localdomain np0005541910.internalapi
                                                         172.19.0.104 np0005541910.tenant.localdomain np0005541910.tenant
                                                         192.168.122.104 np0005541910.ctlplane.localdomain np0005541910.ctlplane
                                                         172.17.0.105 np0005541911.localdomain np0005541911
                                                         172.18.0.105 np0005541911.storage.localdomain np0005541911.storage
                                                         172.20.0.105 np0005541911.storagemgmt.localdomain np0005541911.storagemgmt
                                                         172.17.0.105 np0005541911.internalapi.localdomain np0005541911.internalapi
                                                         172.19.0.105 np0005541911.tenant.localdomain np0005541911.tenant
                                                         192.168.122.105 np0005541911.ctlplane.localdomain np0005541911.ctlplane
                                                         
                                                         192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
                                                         192.168.122.99  overcloud.ctlplane.localdomain
                                                         172.18.0.121  overcloud.storage.localdomain
                                                         172.20.0.222  overcloud.storagemgmt.localdomain
                                                         172.17.0.136  overcloud.internalapi.localdomain
                                                         172.21.0.241  overcloud.localdomain
                                                          marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:52:31 np0005541914.localdomain sudo[36205]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:31 np0005541914.localdomain sudo[36221]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjwnyzhtmyimqlzvuzgrzryebzktvyeg ; /usr/bin/python3
Dec 02 07:52:31 np0005541914.localdomain sudo[36221]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:52:31 np0005541914.localdomain python3[36223]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.yjfp0a4itmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:52:31 np0005541914.localdomain sudo[36221]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:32 np0005541914.localdomain sudo[36238]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-datwvwebucssmwmtttyjdkoiqthgnfag ; /usr/bin/python3
Dec 02 07:52:32 np0005541914.localdomain sudo[36238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:52:32 np0005541914.localdomain python3[36240]: ansible-file Invoked with path=/tmp/ansible.yjfp0a4itmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:52:32 np0005541914.localdomain sudo[36238]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:33 np0005541914.localdomain sudo[36254]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iwmhmoxibsfgjeowulfwpcmsovjwvapx ; /usr/bin/python3
Dec 02 07:52:33 np0005541914.localdomain sudo[36254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:52:33 np0005541914.localdomain python3[36256]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:52:33 np0005541914.localdomain sudo[36254]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:33 np0005541914.localdomain sudo[36271]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-edzperbgrwnwebdkjiuaiowpqpbzlxrq ; /usr/bin/python3
Dec 02 07:52:33 np0005541914.localdomain sudo[36271]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:52:34 np0005541914.localdomain python3[36273]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:52:35 np0005541914.localdomain sshd[36275]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:52:36 np0005541914.localdomain sshd[36275]: Received disconnect from 182.253.156.173 port 47466:11: Bye Bye [preauth]
Dec 02 07:52:36 np0005541914.localdomain sshd[36275]: Disconnected from authenticating user root 182.253.156.173 port 47466 [preauth]
Dec 02 07:52:37 np0005541914.localdomain sudo[36271]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:38 np0005541914.localdomain sudo[36292]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eyhkkxdnmvpywlrdgakpqvngycduohhq ; /usr/bin/python3
Dec 02 07:52:38 np0005541914.localdomain sudo[36292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:52:38 np0005541914.localdomain python3[36294]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:52:38 np0005541914.localdomain sudo[36292]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:38 np0005541914.localdomain sudo[36309]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odjayobktcckoyjxpfhnwfpkjdltrmvs ; /usr/bin/python3
Dec 02 07:52:38 np0005541914.localdomain sudo[36309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:52:39 np0005541914.localdomain python3[36311]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:52:39 np0005541914.localdomain sudo[36313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:52:39 np0005541914.localdomain sudo[36313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:52:39 np0005541914.localdomain sudo[36313]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:39 np0005541914.localdomain sudo[36328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 07:52:39 np0005541914.localdomain sudo[36328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:52:40 np0005541914.localdomain sudo[36328]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:41 np0005541914.localdomain sudo[36375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:52:41 np0005541914.localdomain sudo[36375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:52:41 np0005541914.localdomain sudo[36375]: pam_unix(sudo:session): session closed for user root
Dec 02 07:52:53 np0005541914.localdomain sshd[36550]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:52:55 np0005541914.localdomain groupadd[36562]: group added to /etc/group: name=puppet, GID=52
Dec 02 07:52:55 np0005541914.localdomain groupadd[36562]: group added to /etc/gshadow: name=puppet
Dec 02 07:52:55 np0005541914.localdomain groupadd[36562]: new group: name=puppet, GID=52
Dec 02 07:52:55 np0005541914.localdomain sshd[36550]: Invalid user mc from 103.52.115.25 port 33034
Dec 02 07:52:55 np0005541914.localdomain useradd[36569]: new user: name=puppet, UID=52, GID=52, home=/var/lib/puppet, shell=/sbin/nologin, from=none
Dec 02 07:52:55 np0005541914.localdomain sshd[36550]: Received disconnect from 103.52.115.25 port 33034:11: Bye Bye [preauth]
Dec 02 07:52:55 np0005541914.localdomain sshd[36550]: Disconnected from invalid user mc 103.52.115.25 port 33034 [preauth]
Dec 02 07:53:41 np0005541914.localdomain sudo[37041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:53:41 np0005541914.localdomain sudo[37041]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:53:41 np0005541914.localdomain sudo[37041]: pam_unix(sudo:session): session closed for user root
Dec 02 07:53:41 np0005541914.localdomain sudo[37056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 07:53:41 np0005541914.localdomain sudo[37056]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:53:41 np0005541914.localdomain sudo[37056]: pam_unix(sudo:session): session closed for user root
Dec 02 07:53:42 np0005541914.localdomain sudo[37103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:53:42 np0005541914.localdomain sudo[37103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:53:42 np0005541914.localdomain sudo[37103]: pam_unix(sudo:session): session closed for user root
Dec 02 07:53:48 np0005541914.localdomain kernel: SELinux:  Converting 2699 SID table entries...
Dec 02 07:53:48 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 07:53:48 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 07:53:48 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 07:53:48 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 07:53:48 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 07:53:48 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 07:53:48 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 07:53:48 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 02 07:53:48 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 07:53:48 np0005541914.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 02 07:53:48 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:53:49 np0005541914.localdomain systemd-rc-local-generator[37249]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:53:49 np0005541914.localdomain systemd-sysv-generator[37256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:53:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:53:49 np0005541914.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 02 07:53:49 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 02 07:53:49 np0005541914.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 02 07:53:49 np0005541914.localdomain systemd[1]: run-ra17f050e829d4feb884d043e6e127c91.service: Deactivated successfully.
Dec 02 07:53:49 np0005541914.localdomain sshd[37676]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:53:50 np0005541914.localdomain sudo[36309]: pam_unix(sudo:session): session closed for user root
Dec 02 07:53:51 np0005541914.localdomain sshd[37676]: Received disconnect from 182.253.156.173 port 51710:11: Bye Bye [preauth]
Dec 02 07:53:51 np0005541914.localdomain sshd[37676]: Disconnected from authenticating user root 182.253.156.173 port 51710 [preauth]
Dec 02 07:53:55 np0005541914.localdomain sudo[37691]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ppkjdjvtaamewxscfseppjmofubntvsh ; /usr/bin/python3
Dec 02 07:53:55 np0005541914.localdomain sudo[37691]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:53:55 np0005541914.localdomain python3[37693]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:53:56 np0005541914.localdomain sudo[37691]: pam_unix(sudo:session): session closed for user root
Dec 02 07:53:56 np0005541914.localdomain sudo[37830]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sqvmjnfgmwkwcyurieecjbjaubwbomax ; /usr/bin/python3
Dec 02 07:53:56 np0005541914.localdomain sudo[37830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:53:57 np0005541914.localdomain python3[37832]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 07:53:57 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:53:57 np0005541914.localdomain systemd-rc-local-generator[37856]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:53:57 np0005541914.localdomain systemd-sysv-generator[37861]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:53:57 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:53:57 np0005541914.localdomain sudo[37830]: pam_unix(sudo:session): session closed for user root
Dec 02 07:53:57 np0005541914.localdomain sudo[37884]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rsfengrlpvzhpeylexjldflzkbptrmee ; /usr/bin/python3
Dec 02 07:53:57 np0005541914.localdomain sudo[37884]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:53:57 np0005541914.localdomain python3[37886]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:53:57 np0005541914.localdomain sudo[37884]: pam_unix(sudo:session): session closed for user root
Dec 02 07:53:58 np0005541914.localdomain sudo[37900]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qlyilbawdttijcbddstnpbqmxuunudkc ; /usr/bin/python3
Dec 02 07:53:58 np0005541914.localdomain sudo[37900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:53:58 np0005541914.localdomain python3[37902]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:53:59 np0005541914.localdomain sudo[37900]: pam_unix(sudo:session): session closed for user root
Dec 02 07:53:59 np0005541914.localdomain sudo[37917]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tffacnigaklxiqpiqwhpwxuxffitcuqp ; /usr/bin/python3
Dec 02 07:53:59 np0005541914.localdomain sudo[37917]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:53:59 np0005541914.localdomain python3[37919]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 02 07:54:01 np0005541914.localdomain sudo[37917]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:01 np0005541914.localdomain sudo[37935]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xoxwelkuohmsjyrvllrzxgfmdvsbkotu ; /usr/bin/python3
Dec 02 07:54:01 np0005541914.localdomain sudo[37935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:01 np0005541914.localdomain python3[37937]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:01 np0005541914.localdomain sudo[37935]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:01 np0005541914.localdomain sudo[37953]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pripfkbqmqsmchxdeatqnpafcxwtqirn ; /usr/bin/python3
Dec 02 07:54:01 np0005541914.localdomain sudo[37953]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:01 np0005541914.localdomain python3[37955]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:01 np0005541914.localdomain sudo[37953]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:02 np0005541914.localdomain sudo[37971]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nncuionrghbdkmvtzeqgoljvrwkgmpbb ; /usr/bin/python3
Dec 02 07:54:02 np0005541914.localdomain sudo[37971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:02 np0005541914.localdomain python3[37973]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 07:54:02 np0005541914.localdomain systemd[1]: Reloading Network Manager...
Dec 02 07:54:02 np0005541914.localdomain NetworkManager[5967]: <info>  [1764662042.4941] audit: op="reload" arg="0" pid=37976 uid=0 result="success"
Dec 02 07:54:02 np0005541914.localdomain NetworkManager[5967]: <info>  [1764662042.4957] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf))
Dec 02 07:54:02 np0005541914.localdomain NetworkManager[5967]: <info>  [1764662042.4958] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Dec 02 07:54:02 np0005541914.localdomain systemd[1]: Reloaded Network Manager.
Dec 02 07:54:02 np0005541914.localdomain sudo[37971]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:02 np0005541914.localdomain sudo[37990]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nqeoaylqtrwpwrsnvezfnvppuxabqzqg ; /usr/bin/python3
Dec 02 07:54:02 np0005541914.localdomain sudo[37990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:02 np0005541914.localdomain python3[37992]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:54:03 np0005541914.localdomain sudo[37990]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:04 np0005541914.localdomain sudo[38007]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-arbgspzdnzzyzuspxcevfgitnizcdnmx ; /usr/bin/python3
Dec 02 07:54:04 np0005541914.localdomain sudo[38007]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:04 np0005541914.localdomain python3[38009]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:54:04 np0005541914.localdomain sudo[38007]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:04 np0005541914.localdomain sudo[38025]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bwerdyjxiccyupzbmtukabnicalnfpda ; /usr/bin/python3
Dec 02 07:54:04 np0005541914.localdomain sudo[38025]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:04 np0005541914.localdomain python3[38027]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:54:04 np0005541914.localdomain sudo[38025]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:04 np0005541914.localdomain sudo[38041]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kwxejvtokqmmfhoemxhizxqebhhnqtke ; /usr/bin/python3
Dec 02 07:54:04 np0005541914.localdomain sudo[38041]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:05 np0005541914.localdomain python3[38043]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:05 np0005541914.localdomain sudo[38041]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:05 np0005541914.localdomain sudo[38057]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utsptkknrqdfajgxxlhysurnuhgmvtuh ; /usr/bin/python3
Dec 02 07:54:05 np0005541914.localdomain sudo[38057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:06 np0005541914.localdomain python3[38059]: ansible-tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 02 07:54:06 np0005541914.localdomain sudo[38057]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:06 np0005541914.localdomain sudo[38073]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-usukxpcxczdyftrjriddqqrtrietqygt ; /usr/bin/python3
Dec 02 07:54:06 np0005541914.localdomain sudo[38073]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:06 np0005541914.localdomain python3[38075]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:54:06 np0005541914.localdomain sudo[38073]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:07 np0005541914.localdomain sudo[38089]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxrrbylwjcrcjiprysvboyxupqtfhjpb ; /usr/bin/python3
Dec 02 07:54:07 np0005541914.localdomain sudo[38089]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:07 np0005541914.localdomain python3[38091]: ansible-blockinfile Invoked with path=/tmp/ansible.fwl50lyf block=[192.168.122.106]*,[np0005541912.ctlplane.localdomain]*,[172.17.0.106]*,[np0005541912.internalapi.localdomain]*,[172.18.0.106]*,[np0005541912.storage.localdomain]*,[172.20.0.106]*,[np0005541912.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005541912.tenant.localdomain]*,[np0005541912.localdomain]*,[np0005541912]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKgyHtHHKWFdaOqx5AsvOJPmNsbjVxvzh05A7Hy02rgbdg4zBUd/E0mqG+tYVGg12fIdbRNgjUfM+PEGJznZdEQnZCtLgMhbpRC33IbCXMw7Ev/tRfkffpP+H8VdyGL83zCFFnMIMD2IDWU+MjTf/ais63Zv/UiBL24pkZ18u3nypjN3uN2FdeDF4JNtnSVK6i1a+wE6wLmdSAfX8ovFbLhZMgAAPU3I3Fu5D/pSa6OjKshEcNy0m6KCKwQoT6cbDGsnMjd2sdE1Vc+KgkrBN3fMmrChdgi2Ig7CpkdGvQF0G/t53cwNatjp78FrNCHjpLcIAFw3QgfepiTiXQbXQ/jC5xkdM+5wIcSmB3rf3GKaUgaxnjk55GAXxrHwAFwOi+ltxSNPszH9vfIBLluThUdmQmvtCOCvEFZ5uuVuu94A5frS9BzOIzz7ylrqau3nHGaPjbT80XubnqZsHlOahsovbk1mu3ewvoitAVb0E+BBroNWeHT9BbA8Igh+sxwGM=
                                                         [192.168.122.107]*,[np0005541913.ctlplane.localdomain]*,[172.17.0.107]*,[np0005541913.internalapi.localdomain]*,[172.18.0.107]*,[np0005541913.storage.localdomain]*,[172.20.0.107]*,[np0005541913.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005541913.tenant.localdomain]*,[np0005541913.localdomain]*,[np0005541913]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDYXeXWwxJkeR9i2V9hYiVGqEGSbkwFIKUbTm3m8em9m5o380jUORSYXOITLm0CAl/waSYEc4fiPu2sAYDISig1zqAItfAODEdayFoKK63ui7vq92ZPKayhmjahj2jNo3KMAZ5aFzNBcowsRooRqLNJ7R9BAQ4H8kdqL9xdRjy5bvfWJHGrm8PvWcUaRYebCQ35j+7nHq4RFRYsd964NKjrq+FxkjyOSs2AxE+SHYOVgAAd8Jp2uyr3dR56IzWy8WqQzPj6tlsER8+/Kt1lASATcuMFeteA0M7tbjZxEIAPyfktPVQOq9mgeFOFmTf8oTbt94Rk2QmyNI4oE7sQHFWo9UWrvZd9LpDDartUls5uHunn4SzvgvtRimO3e1hNXn0VQLGNfSUwGij0R3iOYJpACHgly3J7sbX3tROvwRpawZlGIGZY46vaYRMXGClXz+lUCa6ZZO+f6BX6bEt0VfYWX8IVmnH2oJXEJBYJPVXZML+OcczJc8zEfHxBylpZn4k=
                                                         [192.168.122.108]*,[np0005541914.ctlplane.localdomain]*,[172.17.0.108]*,[np0005541914.internalapi.localdomain]*,[172.18.0.108]*,[np0005541914.storage.localdomain]*,[172.20.0.108]*,[np0005541914.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005541914.tenant.localdomain]*,[np0005541914.localdomain]*,[np0005541914]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCHh7115UF/t7QzqWY1fk2wHPOuHuMPRhaYTC/yfMWr+nqJ5/TNZTuFxq0aW/1gHanB2usmC0wpWf4c1KsPZ71Ehs/j5nV1wfGtNVEq5Zj7uhs0ea/SQToF2RS406RoIzJW6ogv4Kl3nxGEK6c44WCu8+Ki98dCQ4wesh5kSBkqgiSq2IZkL2gjoAKeXdracGRJ596gTB0yfsMl/qdJDneVHMq/rptlFhabLeiEN+7C0o0gsZwYsxCd2oSB+DD9KfXhWIBeXRr1B7mFcMZpGNG7pG0d1IjYOUmqjvVpECHrLvjiitS3800ZEFwygU4sbM/DWHelobjtJB/fxxPTtGNlbH4MK/OGFh2mm5jB1LMqWSsifA/ZAHASAAffWDwKtF+xJ06OHRDT6gjzOd7VJpc8kR9Jn9pT7UnjypnrM12GtrO0CH8Lf3rin71kf9iZRIphqWXhiLN3G/mdJC2XPIxJp7NQ1Mqc5IhHciCv80bvsGrzLCtAr16/b+cPYo7vIGU=
                                                         [192.168.122.103]*,[np0005541909.ctlplane.localdomain]*,[172.17.0.103]*,[np0005541909.internalapi.localdomain]*,[172.18.0.103]*,[np0005541909.storage.localdomain]*,[172.20.0.103]*,[np0005541909.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005541909.tenant.localdomain]*,[np0005541909.localdomain]*,[np0005541909]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0b4xecJ9cZa0s7FCPYSs6kLrfHyBh8YL/KS+tj3DrfUU03KCcmbHQesHBBcRxB6PDYjueAsvx5rGXzjMojO5Jz2DlZoSPaBM9tm/HAKWhaiL+seTfrRsNLFvxfWyxU/x0FUSOTf01ZThrT/IJ5WkfJD4UgZQSzUPucffImwFt4y2oERfa96sAwSwE4o5RuLzRdKuWB3npxcApj2/3+pyWR59yubokMiU506MI37Hbg8xCaC5qn4ISKB8WBJObICoNQoatrbcqSOrrUEFv/vcWANDYUEw6XzTTwkuIu6dJPJiJh8j5TzDnnvKSK+f3eEG7OCiz814F+o82tDo7U6k5ERO0xmElXdOlPYsiuM5+CTQmmm6xmFN2L3HIvZlyPn3oF26oV+INAd3XsF5MIFcfpGUXH5b04gE7LhpdVLVfLGGYSVWjZhzxl/Wa0OiHoMaDUYoN2bPG0h5SPUDIyDv2jW3FDxhOWANR/9ITUCQpz3gSwl/1AVN3HCWf+RUeLuE=
                                                         [192.168.122.104]*,[np0005541910.ctlplane.localdomain]*,[172.17.0.104]*,[np0005541910.internalapi.localdomain]*,[172.18.0.104]*,[np0005541910.storage.localdomain]*,[172.20.0.104]*,[np0005541910.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005541910.tenant.localdomain]*,[np0005541910.localdomain]*,[np0005541910]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOmh2HMG9Y5+9VA8Ap3pHIOQhG/GfAsIqnmfJJuGwKb8N2T9r1Yd+kmoP7Xs41cto4h6Fw1f4Pa6Tw050y3LmwpXvDN+2Qq1qYI0rT4pqOiYBkyMbOQhqLF5tA+MNYGdibQj/fWkG+gKa8wwzkTgCEAn6PgEZiqR9LFJrqr4RfQDxaWCLmXM96+AVGG5/SXWx5u6T3lanUnpcfISvB2yx4HifsINAHPgLR4weEzra/b7e0QNyxItxvlDseasPyeYHD3Hdi2PNuUmoZC+zWEoWoU3BMAQeXR7lmEcdtyK5wr0pIBmf0CKFdvGrdVWrzAUbDc8ZHXmWyKlWHHZvHch1V2r/S4J2983UsG3sJwM8954Tj325LgS1nldIYBSjwMGfhZFYzmy9obAN7ZSV5qwD0h+rxt/I9RNdXS3SRu9tOZI+AN59De44cF23OJS5MfrfnB7JUnBOv4ScVML4rPjPx9L4/omOlfbBVJx42b1RlboXEk52J7Aa3xRseA4Elvuk=
                                                         [192.168.122.105]*,[np0005541911.ctlplane.localdomain]*,[172.17.0.105]*,[np0005541911.internalapi.localdomain]*,[172.18.0.105]*,[np0005541911.storage.localdomain]*,[172.20.0.105]*,[np0005541911.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005541911.tenant.localdomain]*,[np0005541911.localdomain]*,[np0005541911]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzI5YTDMvj8zBlKqeNplIMBQQJ43gcDfB5cRE7DwwpHBRcqOuhSoIm7r0C3h5ABQJYkTXEGRY0i5HC5eMErD7SKRJJ3q9aZ+uv4VvUGagr7M9S/JGUjZej2+ACXZ7L+d9MLt389xVtIuuNh5Cy3U8muIBEAS1b4mXOJ95eiW3M5b2hxmol0DTjUMX/bLtJU/MQ09wE72pj6Uqz/CCFsUwDBZlQ3jcVK74fYwgItCNkLJ+D2E4wTl4Ei8XOlEY9cV8B1E+aK6iUKesiya0Vfi/Ant77ONQDeCsI21AJDbi5wtUXg4qXBu3Z/zObZiEmedzqWj7K46Nv8lDlQoeoKuxzTCwxgn0PaorQgkUvUdAyk5Qo4BaUOv8ojICiZvRy9QZ3jblr1dCM/Jy3g4Sz6Hz4QHxtV21nUw//sBN2X6jCHQVGTJeZrbVvgGNcGiqcCzQTW/4NoiOB0ho7RVNtD+oYb5UE+Lh+Ibua3bv7zfnLjsw1GiyclsCgrQTKBl8Netc=
                                                          create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:07 np0005541914.localdomain sudo[38089]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:07 np0005541914.localdomain sudo[38105]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gvhaodckdnvncbmtqwcixxxuffqngzni ; /usr/bin/python3
Dec 02 07:54:07 np0005541914.localdomain sudo[38105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:07 np0005541914.localdomain python3[38107]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.fwl50lyf' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:54:07 np0005541914.localdomain sudo[38105]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:08 np0005541914.localdomain sudo[38123]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jocgblqpeagfxheanuiszyebydnepjms ; /usr/bin/python3
Dec 02 07:54:08 np0005541914.localdomain sudo[38123]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:08 np0005541914.localdomain python3[38125]: ansible-file Invoked with path=/tmp/ansible.fwl50lyf state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:08 np0005541914.localdomain sudo[38123]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:08 np0005541914.localdomain sudo[38139]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbsafncrcyvftyufpnjprrpynfxoxzbs ; /usr/bin/python3
Dec 02 07:54:08 np0005541914.localdomain sudo[38139]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:09 np0005541914.localdomain python3[38141]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:54:09 np0005541914.localdomain sudo[38139]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:09 np0005541914.localdomain sudo[38155]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vqmuzpizsucgnvysvymobmiwunpjuaaj ; /usr/bin/python3
Dec 02 07:54:09 np0005541914.localdomain sudo[38155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:09 np0005541914.localdomain python3[38157]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:54:09 np0005541914.localdomain sudo[38155]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:09 np0005541914.localdomain sudo[38173]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ggdoetblyekftjfxqjgsqbpehuxxiogb ; /usr/bin/python3
Dec 02 07:54:09 np0005541914.localdomain sudo[38173]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:09 np0005541914.localdomain python3[38175]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:54:09 np0005541914.localdomain sudo[38173]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:10 np0005541914.localdomain sudo[38192]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pooawnlcohcxexmhdvhzqsivtowfysbm ; /usr/bin/python3
Dec 02 07:54:10 np0005541914.localdomain sudo[38192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:10 np0005541914.localdomain python3[38194]: ansible-community.general.cloud_init_data_facts Invoked with filter=status
Dec 02 07:54:10 np0005541914.localdomain sudo[38192]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:10 np0005541914.localdomain sudo[38208]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bwugwlyumdrydcywcrbjllwexsoyeaio ; /usr/bin/python3
Dec 02 07:54:10 np0005541914.localdomain sudo[38208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:10 np0005541914.localdomain sudo[38208]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:10 np0005541914.localdomain sshd[38258]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:54:10 np0005541914.localdomain sudo[38256]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uuxbtovbmmyvpulaynackrjjsctyfeza ; /usr/bin/python3
Dec 02 07:54:10 np0005541914.localdomain sudo[38256]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:11 np0005541914.localdomain sudo[38256]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:11 np0005541914.localdomain sudo[38301]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfvnwzcbfivlefjuukmniktrshcejbcu ; /usr/bin/python3
Dec 02 07:54:11 np0005541914.localdomain sudo[38301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:11 np0005541914.localdomain sudo[38301]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:12 np0005541914.localdomain sudo[38331]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eylyehrzstaiyueutyyjosufdgvjoabw ; /usr/bin/python3
Dec 02 07:54:12 np0005541914.localdomain sudo[38331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:12 np0005541914.localdomain sshd[38258]: Received disconnect from 103.52.115.25 port 50292:11: Bye Bye [preauth]
Dec 02 07:54:12 np0005541914.localdomain sshd[38258]: Disconnected from authenticating user root 103.52.115.25 port 50292 [preauth]
Dec 02 07:54:12 np0005541914.localdomain python3[38333]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:54:12 np0005541914.localdomain sudo[38331]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:12 np0005541914.localdomain sudo[38348]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dkxcusvkljrtfncpdcedzimblpndvcmf ; /usr/bin/python3
Dec 02 07:54:12 np0005541914.localdomain sudo[38348]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:13 np0005541914.localdomain python3[38350]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:54:16 np0005541914.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 02 07:54:16 np0005541914.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 02 07:54:16 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 07:54:16 np0005541914.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 02 07:54:16 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:54:16 np0005541914.localdomain systemd-rc-local-generator[38434]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:54:16 np0005541914.localdomain systemd-sysv-generator[38438]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:54:16 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:54:16 np0005541914.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 02 07:54:16 np0005541914.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 02 07:54:16 np0005541914.localdomain systemd[1]: tuned.service: Deactivated successfully.
Dec 02 07:54:16 np0005541914.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 02 07:54:16 np0005541914.localdomain systemd[1]: tuned.service: Consumed 1.808s CPU time.
Dec 02 07:54:16 np0005541914.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 02 07:54:16 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 02 07:54:16 np0005541914.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 02 07:54:16 np0005541914.localdomain systemd[1]: run-r852279eb83bb4b36b98357f88d1d8272.service: Deactivated successfully.
Dec 02 07:54:18 np0005541914.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Dec 02 07:54:18 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 07:54:18 np0005541914.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 02 07:54:18 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 02 07:54:18 np0005541914.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 02 07:54:18 np0005541914.localdomain systemd[1]: run-rcdbec0c656584f509556249d1b2503d6.service: Deactivated successfully.
Dec 02 07:54:19 np0005541914.localdomain sudo[38348]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:19 np0005541914.localdomain sudo[38784]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sxhtdmxosoteoswuqrmffwppyuknsihv ; /usr/bin/python3
Dec 02 07:54:19 np0005541914.localdomain sudo[38784]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:19 np0005541914.localdomain python3[38786]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 07:54:19 np0005541914.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 02 07:54:19 np0005541914.localdomain systemd[1]: tuned.service: Deactivated successfully.
Dec 02 07:54:19 np0005541914.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 02 07:54:19 np0005541914.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 02 07:54:20 np0005541914.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Dec 02 07:54:20 np0005541914.localdomain sudo[38784]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:21 np0005541914.localdomain sudo[38979]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cbbiwnidjqmsbhbacacehxbjvfcozluu ; PATH=/bin:/usr/bin:/sbin:/usr/sbin /usr/bin/python3
Dec 02 07:54:21 np0005541914.localdomain sudo[38979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:21 np0005541914.localdomain python3[38981]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:54:21 np0005541914.localdomain sudo[38979]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:21 np0005541914.localdomain sudo[38996]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ytnwimnbxjvmkyaddwlyoauvkigfhtuk ; /usr/bin/python3
Dec 02 07:54:21 np0005541914.localdomain sudo[38996]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:22 np0005541914.localdomain python3[38998]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Dec 02 07:54:22 np0005541914.localdomain sudo[38996]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:22 np0005541914.localdomain sudo[39012]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-equhgllzqvciqpjdtxgprqpyubsaoctl ; /usr/bin/python3
Dec 02 07:54:22 np0005541914.localdomain sudo[39012]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:22 np0005541914.localdomain python3[39014]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:54:22 np0005541914.localdomain sudo[39012]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:22 np0005541914.localdomain sudo[39028]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-msfyehvqaiklzkiqodjjlanckrygwkuq ; PATH=/bin:/usr/bin:/sbin:/usr/sbin /usr/bin/python3
Dec 02 07:54:22 np0005541914.localdomain sudo[39028]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:23 np0005541914.localdomain python3[39030]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:54:24 np0005541914.localdomain sudo[39028]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:24 np0005541914.localdomain sudo[39048]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jdgupefhayckklaauxbuqvdrurkyseau ; /usr/bin/python3
Dec 02 07:54:24 np0005541914.localdomain sudo[39048]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:24 np0005541914.localdomain python3[39050]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:54:24 np0005541914.localdomain sudo[39048]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:25 np0005541914.localdomain sudo[39065]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-voxsumndpbnyrdakglbnlilfpppuubtf ; /usr/bin/python3
Dec 02 07:54:25 np0005541914.localdomain sudo[39065]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:25 np0005541914.localdomain python3[39067]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:54:25 np0005541914.localdomain sudo[39065]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:28 np0005541914.localdomain sudo[39081]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trgzegczurttusytemhomgkxgpqzjyrr ; /usr/bin/python3
Dec 02 07:54:28 np0005541914.localdomain sudo[39081]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:28 np0005541914.localdomain python3[39083]: ansible-replace Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:28 np0005541914.localdomain sudo[39081]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:29 np0005541914.localdomain sshd[39084]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:54:29 np0005541914.localdomain sshd[39084]: Invalid user sol from 45.148.10.240 port 47524
Dec 02 07:54:29 np0005541914.localdomain sshd[39084]: Connection closed by invalid user sol 45.148.10.240 port 47524 [preauth]
Dec 02 07:54:32 np0005541914.localdomain sudo[39099]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uirxxuoomgfpnskmdeebboaojjjvrvyi ; /usr/bin/python3
Dec 02 07:54:32 np0005541914.localdomain sudo[39099]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:33 np0005541914.localdomain python3[39101]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:33 np0005541914.localdomain sudo[39099]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:33 np0005541914.localdomain sudo[39147]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cyqezkbfbxfosntynnzalynejyshcjuo ; /usr/bin/python3
Dec 02 07:54:33 np0005541914.localdomain sudo[39147]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:33 np0005541914.localdomain python3[39149]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:54:33 np0005541914.localdomain sudo[39147]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:33 np0005541914.localdomain sudo[39192]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjharkzzijlvsecvpdoeuxynriuzphla ; /usr/bin/python3
Dec 02 07:54:33 np0005541914.localdomain sudo[39192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:34 np0005541914.localdomain python3[39194]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662073.3062372-70033-61177823672067/source _original_basename=tmp12icfplw follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:34 np0005541914.localdomain sudo[39192]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:34 np0005541914.localdomain sudo[39222]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hkvcyefbbjkrreofjlgufjjioduebudm ; /usr/bin/python3
Dec 02 07:54:34 np0005541914.localdomain sudo[39222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:34 np0005541914.localdomain python3[39224]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:34 np0005541914.localdomain sudo[39222]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:34 np0005541914.localdomain sudo[39270]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-stivjifpmrkuzhkzozydmcwsvidbzxmg ; /usr/bin/python3
Dec 02 07:54:34 np0005541914.localdomain sudo[39270]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:35 np0005541914.localdomain python3[39272]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:54:35 np0005541914.localdomain sudo[39270]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:35 np0005541914.localdomain sudo[39313]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxhexcqlfidectxhhvehotteqqtpfsdo ; /usr/bin/python3
Dec 02 07:54:35 np0005541914.localdomain sudo[39313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:35 np0005541914.localdomain python3[39315]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662074.8532314-70129-183458166602365/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=303a9e8dd06eeb9157c66bb31355109aa4c872ae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:35 np0005541914.localdomain sudo[39313]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:35 np0005541914.localdomain sudo[39375]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-silinyomlcmsytcdfvyyvezrycbfukdt ; /usr/bin/python3
Dec 02 07:54:35 np0005541914.localdomain sudo[39375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:36 np0005541914.localdomain python3[39377]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:54:36 np0005541914.localdomain sudo[39375]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:36 np0005541914.localdomain sudo[39418]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pafqgcxoyioqpopoeilbptxdqihnzsuj ; /usr/bin/python3
Dec 02 07:54:36 np0005541914.localdomain sudo[39418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:36 np0005541914.localdomain python3[39420]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662075.731821-70407-239080182332621/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=da1c3b8584bf2231cac158ee0d91c3ea69fbb742 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:36 np0005541914.localdomain sudo[39418]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:36 np0005541914.localdomain sudo[39480]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odrjikkvedoeuthypdyjbegdhkvzclmk ; /usr/bin/python3
Dec 02 07:54:36 np0005541914.localdomain sudo[39480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:36 np0005541914.localdomain python3[39482]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:54:37 np0005541914.localdomain sudo[39480]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:37 np0005541914.localdomain sudo[39523]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exsqicktsuybyqgvpxjnvkbmwddsybia ; /usr/bin/python3
Dec 02 07:54:37 np0005541914.localdomain sudo[39523]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:37 np0005541914.localdomain python3[39525]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662076.6405096-70407-50174873424137/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=cefd5bd69caea640bd56356af0b9c6878752d6a2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:37 np0005541914.localdomain sudo[39523]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:37 np0005541914.localdomain sudo[39585]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rlqgjdapjemdkhdcmmvknxuqcwmzwkbx ; /usr/bin/python3
Dec 02 07:54:37 np0005541914.localdomain sudo[39585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:37 np0005541914.localdomain python3[39587]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:54:37 np0005541914.localdomain sudo[39585]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:38 np0005541914.localdomain sudo[39628]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jecygdserwkaxcqwtgdegcpfuwyjfcck ; /usr/bin/python3
Dec 02 07:54:38 np0005541914.localdomain sudo[39628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:38 np0005541914.localdomain python3[39630]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662077.5180104-70407-254312378040493/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=1bd75eeb71ad8a06f7ad5bd2e02e7279e09e867f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:38 np0005541914.localdomain sudo[39628]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:38 np0005541914.localdomain sudo[39690]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jiaytqdsjysobcirjvrsscnndehxlibt ; /usr/bin/python3
Dec 02 07:54:38 np0005541914.localdomain sudo[39690]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:38 np0005541914.localdomain python3[39692]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:54:38 np0005541914.localdomain sudo[39690]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:39 np0005541914.localdomain sudo[39733]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-weoiegxmnezjzuuufrhqcmlpgpookfus ; /usr/bin/python3
Dec 02 07:54:39 np0005541914.localdomain sudo[39733]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:39 np0005541914.localdomain python3[39735]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662078.5209446-70407-215929840763969/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:39 np0005541914.localdomain sudo[39733]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:39 np0005541914.localdomain sudo[39795]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-crcexewzenfcvzzekpblkrlgompximnj ; /usr/bin/python3
Dec 02 07:54:39 np0005541914.localdomain sudo[39795]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:39 np0005541914.localdomain python3[39797]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:54:39 np0005541914.localdomain sudo[39795]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:39 np0005541914.localdomain sudo[39838]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aiibohmyjlfgxkemzhgslotpulmhxoqz ; /usr/bin/python3
Dec 02 07:54:39 np0005541914.localdomain sudo[39838]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:40 np0005541914.localdomain python3[39840]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662079.39071-70407-220704440541338/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=1c471308e2382b36016a260bb9c9a72bc4f65120 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:40 np0005541914.localdomain sudo[39838]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:40 np0005541914.localdomain sudo[39900]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dkkmevgirvmlsbucmnkcssrmxrdurupt ; /usr/bin/python3
Dec 02 07:54:40 np0005541914.localdomain sudo[39900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:40 np0005541914.localdomain python3[39902]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:54:40 np0005541914.localdomain sudo[39900]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:40 np0005541914.localdomain sudo[39943]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjvelnklpmwsviqnelhvzshbskkrgnoo ; /usr/bin/python3
Dec 02 07:54:40 np0005541914.localdomain sudo[39943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:40 np0005541914.localdomain python3[39945]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662080.2945921-70407-124902819074386/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:40 np0005541914.localdomain sudo[39943]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:41 np0005541914.localdomain sudo[40005]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-olikfwbqtxlelsgupxykawyjpkxswvun ; /usr/bin/python3
Dec 02 07:54:41 np0005541914.localdomain sudo[40005]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:41 np0005541914.localdomain python3[40007]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:54:41 np0005541914.localdomain sudo[40005]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:41 np0005541914.localdomain sudo[40048]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ayouvbuyleqndafuafqhsdkaylnfmyxp ; /usr/bin/python3
Dec 02 07:54:41 np0005541914.localdomain sudo[40048]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:41 np0005541914.localdomain python3[40050]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662081.1493134-70407-226169721851505/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=c605747c28ed219c21bc7a334ba3c66112b9a2b8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:41 np0005541914.localdomain sudo[40048]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:42 np0005541914.localdomain sudo[40110]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-diubrnyqflnwmydfpuvlzfyqdavhpzoe ; /usr/bin/python3
Dec 02 07:54:42 np0005541914.localdomain sudo[40110]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:42 np0005541914.localdomain python3[40112]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:54:42 np0005541914.localdomain sudo[40110]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:42 np0005541914.localdomain sudo[40161]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-haleddjksuqieosfrukxzcslsmvvvcdu ; /usr/bin/python3
Dec 02 07:54:42 np0005541914.localdomain sudo[40161]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:42 np0005541914.localdomain sudo[40147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:54:42 np0005541914.localdomain sudo[40147]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:54:42 np0005541914.localdomain sudo[40147]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:42 np0005541914.localdomain sudo[40171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 07:54:42 np0005541914.localdomain sudo[40171]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:54:42 np0005541914.localdomain python3[40169]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662082.0732875-70407-179504042793434/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:42 np0005541914.localdomain sudo[40161]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:43 np0005541914.localdomain sudo[40245]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zpktapskfaphyvmbsetajxvzidpymeyl ; /usr/bin/python3
Dec 02 07:54:43 np0005541914.localdomain sudo[40245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:43 np0005541914.localdomain python3[40257]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:54:43 np0005541914.localdomain sudo[40245]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:43 np0005541914.localdomain sudo[40171]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:43 np0005541914.localdomain sudo[40319]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dnmhmplondmfzqdibvyrhgvzrhbogyzs ; /usr/bin/python3
Dec 02 07:54:43 np0005541914.localdomain sudo[40319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:43 np0005541914.localdomain python3[40321]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662082.9530525-70407-151862547607562/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:43 np0005541914.localdomain sudo[40319]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:44 np0005541914.localdomain sudo[40387]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wklbdbiipvyuubqvioehiotkucuhvlbz ; /usr/bin/python3
Dec 02 07:54:44 np0005541914.localdomain sudo[40387]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:44 np0005541914.localdomain sudo[40378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:54:44 np0005541914.localdomain sudo[40378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:54:44 np0005541914.localdomain sudo[40378]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:44 np0005541914.localdomain python3[40397]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:54:44 np0005541914.localdomain sudo[40387]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:44 np0005541914.localdomain sudo[40439]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvvjjxmyzclqcyspjrbvheimfwbovojr ; /usr/bin/python3
Dec 02 07:54:44 np0005541914.localdomain sudo[40439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:44 np0005541914.localdomain python3[40441]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662083.8863442-70407-176185455576848/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=ace0296e780adbfa11d93013fc1b670cc14ab7b7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:44 np0005541914.localdomain sudo[40439]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:45 np0005541914.localdomain sudo[40469]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wvcexnohwwjmuwdynzrgjattkvqsagve ; /usr/bin/python3
Dec 02 07:54:45 np0005541914.localdomain sudo[40469]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:45 np0005541914.localdomain python3[40471]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:54:45 np0005541914.localdomain sudo[40469]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:45 np0005541914.localdomain sudo[40517]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-slzksulkamftrqvbpuumtpiqzjlytnqg ; /usr/bin/python3
Dec 02 07:54:45 np0005541914.localdomain sudo[40517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:45 np0005541914.localdomain python3[40519]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:54:45 np0005541914.localdomain sudo[40517]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:46 np0005541914.localdomain sudo[40560]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fcsulobmkjsnchdcepzwshodybvummgn ; /usr/bin/python3
Dec 02 07:54:46 np0005541914.localdomain sudo[40560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:46 np0005541914.localdomain python3[40562]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662085.554342-71051-146365039118940/source _original_basename=tmp2kf8xn4b follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:54:46 np0005541914.localdomain sudo[40560]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:50 np0005541914.localdomain sudo[40590]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvklvzavinipkgrpswkkzseqfniuhomc ; /usr/bin/python3
Dec 02 07:54:50 np0005541914.localdomain sudo[40590]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:50 np0005541914.localdomain python3[40592]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 07:54:50 np0005541914.localdomain sudo[40590]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:51 np0005541914.localdomain sudo[40651]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ydgbfdokpcrtrgjmmdlbbuappakmjscp ; /usr/bin/python3
Dec 02 07:54:51 np0005541914.localdomain sudo[40651]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:51 np0005541914.localdomain python3[40653]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:54:54 np0005541914.localdomain systemd[36014]: Starting Mark boot as successful...
Dec 02 07:54:54 np0005541914.localdomain systemd[36014]: Finished Mark boot as successful.
Dec 02 07:54:55 np0005541914.localdomain sudo[40651]: pam_unix(sudo:session): session closed for user root
Dec 02 07:54:55 np0005541914.localdomain sudo[40669]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aypsuxltlggittvocmowzmcznpplqokf ; /usr/bin/python3
Dec 02 07:54:55 np0005541914.localdomain sudo[40669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:54:55 np0005541914.localdomain python3[40671]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:54:59 np0005541914.localdomain sudo[40669]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:00 np0005541914.localdomain sudo[40686]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfowcdurgdpcxdbtcvvcilftlrldphar ; /usr/bin/python3
Dec 02 07:55:00 np0005541914.localdomain sudo[40686]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:00 np0005541914.localdomain python3[40688]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:00 np0005541914.localdomain sudo[40686]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:01 np0005541914.localdomain sudo[40709]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tbamcviiamzipmxdnbztjpytfatrzkgh ; /usr/bin/python3
Dec 02 07:55:01 np0005541914.localdomain sudo[40709]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:01 np0005541914.localdomain python3[40711]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:01 np0005541914.localdomain anacron[6721]: Job `cron.weekly' started
Dec 02 07:55:01 np0005541914.localdomain anacron[6721]: Job `cron.weekly' terminated
Dec 02 07:55:05 np0005541914.localdomain sudo[40709]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:05 np0005541914.localdomain sshd[40715]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:55:05 np0005541914.localdomain sudo[40730]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-polpgdbqfkubtcyogqxtjcabvekqpawu ; /usr/bin/python3
Dec 02 07:55:05 np0005541914.localdomain sudo[40730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:05 np0005541914.localdomain python3[40732]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:05 np0005541914.localdomain sudo[40730]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:06 np0005541914.localdomain sudo[40753]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zfxfltmctqhvoytvcyrupzcfbujheqro ; /usr/bin/python3
Dec 02 07:55:06 np0005541914.localdomain sudo[40753]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:06 np0005541914.localdomain python3[40755]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:06 np0005541914.localdomain sshd[40715]: Invalid user nextcloud from 182.253.156.173 port 58278
Dec 02 07:55:07 np0005541914.localdomain sshd[40715]: Received disconnect from 182.253.156.173 port 58278:11: Bye Bye [preauth]
Dec 02 07:55:07 np0005541914.localdomain sshd[40715]: Disconnected from invalid user nextcloud 182.253.156.173 port 58278 [preauth]
Dec 02 07:55:10 np0005541914.localdomain sudo[40753]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:10 np0005541914.localdomain sudo[40770]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-shzpjbmxizjrognsqsqwoyusejerobdb ; /usr/bin/python3
Dec 02 07:55:10 np0005541914.localdomain sudo[40770]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:10 np0005541914.localdomain python3[40772]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:14 np0005541914.localdomain sudo[40770]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:14 np0005541914.localdomain sudo[40787]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fkzsxgesiikttkwpwukthdmgprjestbw ; /usr/bin/python3
Dec 02 07:55:14 np0005541914.localdomain sudo[40787]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:15 np0005541914.localdomain python3[40789]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:15 np0005541914.localdomain sudo[40787]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:15 np0005541914.localdomain sudo[40810]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lbmuklfkdetgsdntdfinopoovpsfirhc ; /usr/bin/python3
Dec 02 07:55:15 np0005541914.localdomain sudo[40810]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:15 np0005541914.localdomain python3[40812]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:19 np0005541914.localdomain sudo[40810]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:19 np0005541914.localdomain sudo[40827]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ayhuefnkjxpwzxbthyvnekzxeifpdngg ; /usr/bin/python3
Dec 02 07:55:19 np0005541914.localdomain sudo[40827]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:20 np0005541914.localdomain python3[40829]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:24 np0005541914.localdomain sudo[40827]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:24 np0005541914.localdomain sudo[40844]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cfutkmhmfxosowlxcseolyfqtlmlxotq ; /usr/bin/python3
Dec 02 07:55:24 np0005541914.localdomain sudo[40844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:24 np0005541914.localdomain python3[40846]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:24 np0005541914.localdomain sudo[40844]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:24 np0005541914.localdomain sudo[40867]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-znapsvgtgxlvtavfkvigfctdftosqqxw ; /usr/bin/python3
Dec 02 07:55:24 np0005541914.localdomain sudo[40867]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:24 np0005541914.localdomain python3[40869]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:26 np0005541914.localdomain sshd[40871]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:55:27 np0005541914.localdomain sshd[40871]: Invalid user ubuntu from 103.52.115.25 port 35928
Dec 02 07:55:28 np0005541914.localdomain sshd[40871]: Received disconnect from 103.52.115.25 port 35928:11: Bye Bye [preauth]
Dec 02 07:55:28 np0005541914.localdomain sshd[40871]: Disconnected from invalid user ubuntu 103.52.115.25 port 35928 [preauth]
Dec 02 07:55:29 np0005541914.localdomain sudo[40867]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:29 np0005541914.localdomain sudo[40886]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pagxrvqaypjzgchubytslvivtbvxjupd ; /usr/bin/python3
Dec 02 07:55:29 np0005541914.localdomain sudo[40886]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:29 np0005541914.localdomain python3[40888]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:33 np0005541914.localdomain sudo[40886]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:33 np0005541914.localdomain sudo[40903]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ofpdwfxrgskpakwqfcudammplieunnij ; /usr/bin/python3
Dec 02 07:55:33 np0005541914.localdomain sudo[40903]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:33 np0005541914.localdomain python3[40905]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:33 np0005541914.localdomain sudo[40903]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:34 np0005541914.localdomain sudo[40926]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lioscuzhzpejwcwfriuapoygstjvlzqw ; /usr/bin/python3
Dec 02 07:55:34 np0005541914.localdomain sudo[40926]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:34 np0005541914.localdomain python3[40928]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:38 np0005541914.localdomain sudo[40926]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:38 np0005541914.localdomain sudo[40943]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jyudxpxwkclsafizmnijwdthahyutkzc ; /usr/bin/python3
Dec 02 07:55:38 np0005541914.localdomain sudo[40943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:38 np0005541914.localdomain python3[40945]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:55:42 np0005541914.localdomain sudo[40943]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:44 np0005541914.localdomain sudo[40947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:55:44 np0005541914.localdomain sudo[40947]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:55:44 np0005541914.localdomain sudo[40947]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:44 np0005541914.localdomain sudo[40962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 07:55:44 np0005541914.localdomain sudo[40962]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:55:44 np0005541914.localdomain sudo[40962]: pam_unix(sudo:session): session closed for user root
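[Annotation] Interleaved with the TripleO tasks, the Ceph orchestrator is polling this host: the ceph-admin sudo entry above runs the fsid-scoped copy of the cephadm binary with the gather-facts subcommand, which reports host inventory (CPU, memory, NICs, disks) back to the Ceph mgr. Re-running it by hand, using the exact path and digest seen in the log, would look like:
                                                         # Run the fsid-scoped cephadm copy by hand (path and digest as logged above).
                                                         sudo /bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts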
Dec 02 07:55:45 np0005541914.localdomain sudo[41021]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-etljkrmgmxvpwxzlgjdvskoaqfbmtbwx ; /usr/bin/python3
Dec 02 07:55:45 np0005541914.localdomain sudo[41021]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:45 np0005541914.localdomain python3[41023]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:45 np0005541914.localdomain sudo[41021]: pam_unix(sudo:session): session closed for user root
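[Annotation] Ansible records file modes as decimal integers in these entries, so mode=448 on /etc/puppet/hieradata is octal 0700, and the mode=384 applied to hiera.yaml just below is 0600. A quick conversion for reading the journal:
                                                         # Decimal file modes from the journal, printed as octal:
                                                         printf '%o\n' 448 384   # -> 700 and 600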
Dec 02 07:55:46 np0005541914.localdomain sudo[41069]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zokxtflhtpudjznbesaixwxporezrmox ; /usr/bin/python3
Dec 02 07:55:46 np0005541914.localdomain sudo[41069]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:46 np0005541914.localdomain python3[41071]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:55:46 np0005541914.localdomain sudo[41069]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:46 np0005541914.localdomain sudo[41087]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kgitqlnyqatsqxvuyabdphpdilxxmcpz ; /usr/bin/python3
Dec 02 07:55:46 np0005541914.localdomain sudo[41087]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:46 np0005541914.localdomain python3[41089]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmp6k5u_4dg recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:46 np0005541914.localdomain sudo[41087]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:46 np0005541914.localdomain sudo[41117]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htykumofqgliuvrwvjwbdylillepgjzs ; /usr/bin/python3
Dec 02 07:55:46 np0005541914.localdomain sudo[41117]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:47 np0005541914.localdomain python3[41119]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:47 np0005541914.localdomain sudo[41117]: pam_unix(sudo:session): session closed for user root
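[Annotation] The ansible-file task above (state=link, force=True) keeps /etc/hiera.yaml pointing at the Puppet-managed copy; a rough shell equivalent is:
                                                         # Replace /etc/hiera.yaml with a symlink to the Puppet hiera config.
                                                         ln -sfn /etc/puppet/hiera.yaml /etc/hiera.yaml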
Dec 02 07:55:47 np0005541914.localdomain sudo[41120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:55:47 np0005541914.localdomain sudo[41120]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:55:47 np0005541914.localdomain sudo[41120]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:47 np0005541914.localdomain sudo[41180]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tqxzsxznywvnexcepizzcpumrdbrfyjb ; /usr/bin/python3
Dec 02 07:55:47 np0005541914.localdomain sudo[41180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:47 np0005541914.localdomain python3[41182]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:55:47 np0005541914.localdomain sudo[41180]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:47 np0005541914.localdomain sudo[41198]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hhysocsifbxaqrmxyfbfoqbxlepjjpir ; /usr/bin/python3
Dec 02 07:55:47 np0005541914.localdomain sudo[41198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:48 np0005541914.localdomain python3[41200]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:48 np0005541914.localdomain sudo[41198]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:48 np0005541914.localdomain sudo[41260]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gqglztreluybyzrxxvkxmuxirxiygbyx ; /usr/bin/python3
Dec 02 07:55:48 np0005541914.localdomain sudo[41260]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:48 np0005541914.localdomain python3[41262]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:55:48 np0005541914.localdomain sudo[41260]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:48 np0005541914.localdomain sudo[41278]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hoaxbhpjhdfvphcpbmrayezurapepvtk ; /usr/bin/python3
Dec 02 07:55:48 np0005541914.localdomain sudo[41278]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:48 np0005541914.localdomain python3[41280]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:48 np0005541914.localdomain sudo[41278]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:49 np0005541914.localdomain sudo[41340]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eglybxeapnjvupbzuhxbjsbbdcblvllx ; /usr/bin/python3
Dec 02 07:55:49 np0005541914.localdomain sudo[41340]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:49 np0005541914.localdomain python3[41342]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:55:49 np0005541914.localdomain sudo[41340]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:49 np0005541914.localdomain sudo[41358]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubsfporxzgtowpkclfpkyqsckgqvqxzv ; /usr/bin/python3
Dec 02 07:55:49 np0005541914.localdomain sudo[41358]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:49 np0005541914.localdomain python3[41360]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:49 np0005541914.localdomain sudo[41358]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:50 np0005541914.localdomain sudo[41420]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phpueyxchorcayhdrsbvrfbphazvzznv ; /usr/bin/python3
Dec 02 07:55:50 np0005541914.localdomain sudo[41420]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:50 np0005541914.localdomain python3[41422]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:55:50 np0005541914.localdomain sudo[41420]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:50 np0005541914.localdomain sudo[41438]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nolaxkjafwrynatuwdgfrmbgmyfwjbyn ; /usr/bin/python3
Dec 02 07:55:50 np0005541914.localdomain sudo[41438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:50 np0005541914.localdomain python3[41440]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:50 np0005541914.localdomain sudo[41438]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:50 np0005541914.localdomain sudo[41500]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-puhzmbvtqmslmirecwdvcxzhmaqrbsek ; /usr/bin/python3
Dec 02 07:55:50 np0005541914.localdomain sudo[41500]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:50 np0005541914.localdomain python3[41502]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:55:50 np0005541914.localdomain sudo[41500]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:50 np0005541914.localdomain sudo[41518]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dcvibgcldbmycwsqjwoaimkhmvvnuvow ; /usr/bin/python3
Dec 02 07:55:50 np0005541914.localdomain sudo[41518]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:51 np0005541914.localdomain python3[41520]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:51 np0005541914.localdomain sudo[41518]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:51 np0005541914.localdomain sudo[41580]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uztbprakbenfzwyflyxujmvwbwrrquoy ; /usr/bin/python3
Dec 02 07:55:51 np0005541914.localdomain sudo[41580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:51 np0005541914.localdomain python3[41582]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:55:51 np0005541914.localdomain sudo[41580]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:51 np0005541914.localdomain sudo[41598]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-neadskvtavvlfupzojnbidgpzhxqrfpv ; /usr/bin/python3
Dec 02 07:55:51 np0005541914.localdomain sudo[41598]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:51 np0005541914.localdomain python3[41600]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:51 np0005541914.localdomain sudo[41598]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:52 np0005541914.localdomain sudo[41660]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jplchpfhfldeywypzpiysjthcbjvkigv ; /usr/bin/python3
Dec 02 07:55:52 np0005541914.localdomain sudo[41660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:52 np0005541914.localdomain python3[41662]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:55:52 np0005541914.localdomain sudo[41660]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:52 np0005541914.localdomain sudo[41678]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-upyejfltbrxkmyuixotsfswxmcmmcpsz ; /usr/bin/python3
Dec 02 07:55:52 np0005541914.localdomain sudo[41678]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:52 np0005541914.localdomain python3[41680]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:52 np0005541914.localdomain sudo[41678]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:52 np0005541914.localdomain sudo[41740]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zqaqrbicnbbifrfabncwrrjsefmtwrxh ; /usr/bin/python3
Dec 02 07:55:52 np0005541914.localdomain sudo[41740]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:53 np0005541914.localdomain python3[41742]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:55:53 np0005541914.localdomain sudo[41740]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:53 np0005541914.localdomain sudo[41758]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wmoibgojfyewzaqiwbngtthxoqphyion ; /usr/bin/python3
Dec 02 07:55:53 np0005541914.localdomain sudo[41758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:53 np0005541914.localdomain python3[41760]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:53 np0005541914.localdomain sudo[41758]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:53 np0005541914.localdomain sudo[41820]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pnwttxwxcbhsvxwttlymcgfyaonioiyc ; /usr/bin/python3
Dec 02 07:55:53 np0005541914.localdomain sudo[41820]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:53 np0005541914.localdomain python3[41822]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:55:53 np0005541914.localdomain sudo[41820]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:53 np0005541914.localdomain sudo[41838]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zatptcmgxhcpysvhsfocvpthuamngztz ; /usr/bin/python3
Dec 02 07:55:53 np0005541914.localdomain sudo[41838]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:54 np0005541914.localdomain python3[41840]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:54 np0005541914.localdomain sudo[41838]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:54 np0005541914.localdomain sudo[41900]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylvxvytbbqhmmcnjhatrstsmdoywyoxa ; /usr/bin/python3
Dec 02 07:55:54 np0005541914.localdomain sudo[41900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:54 np0005541914.localdomain python3[41902]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:55:54 np0005541914.localdomain sudo[41900]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:54 np0005541914.localdomain sudo[41918]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pemduzhwqvhnyhyixgifqchojoskmqmx ; /usr/bin/python3
Dec 02 07:55:54 np0005541914.localdomain sudo[41918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:54 np0005541914.localdomain python3[41920]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:54 np0005541914.localdomain sudo[41918]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:55 np0005541914.localdomain sudo[41980]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ztpvftvufjtanhhyxibcpeucqxlryeyl ; /usr/bin/python3
Dec 02 07:55:55 np0005541914.localdomain sudo[41980]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:55 np0005541914.localdomain python3[41982]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:55:55 np0005541914.localdomain sudo[41980]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:55 np0005541914.localdomain sudo[41998]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wudvwhfkaykxniqwlpdwsvnckxestafi ; /usr/bin/python3
Dec 02 07:55:55 np0005541914.localdomain sudo[41998]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:55 np0005541914.localdomain python3[42000]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:55 np0005541914.localdomain sudo[41998]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:56 np0005541914.localdomain sudo[42028]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-allplibmibneblubpqlhjfcognqkzkau ; /usr/bin/python3
Dec 02 07:55:56 np0005541914.localdomain sudo[42028]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:56 np0005541914.localdomain python3[42030]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:55:56 np0005541914.localdomain sudo[42028]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:56 np0005541914.localdomain sudo[42076]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lzxcaeevzjiygdiaytesmgaultkufehd ; /usr/bin/python3
Dec 02 07:55:56 np0005541914.localdomain sudo[42076]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:56 np0005541914.localdomain python3[42078]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:55:56 np0005541914.localdomain sudo[42076]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:57 np0005541914.localdomain sudo[42094]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xvijenuabdftmzyebntgtciisqyvuuuu ; /usr/bin/python3
Dec 02 07:55:57 np0005541914.localdomain sudo[42094]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:55:57 np0005541914.localdomain python3[42096]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmpvltr6rr2 recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:55:57 np0005541914.localdomain sudo[42094]: pam_unix(sudo:session): session closed for user root
Dec 02 07:55:59 np0005541914.localdomain sudo[42124]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grjejfhggawzdgnefpoaodbumpethxiq ; /usr/bin/python3
Dec 02 07:55:59 np0005541914.localdomain sudo[42124]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:00 np0005541914.localdomain python3[42126]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:56:02 np0005541914.localdomain sudo[42124]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:04 np0005541914.localdomain sudo[42141]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oudzqubzkncwwjvoklxrmbfejbzstusx ; /usr/bin/python3
Dec 02 07:56:04 np0005541914.localdomain sudo[42141]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:04 np0005541914.localdomain python3[42143]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 07:56:04 np0005541914.localdomain sudo[42141]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:05 np0005541914.localdomain sudo[42159]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-davkurowxrlfjcsobdjfzqmtwldcjciw ; /usr/bin/python3
Dec 02 07:56:05 np0005541914.localdomain sudo[42159]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:05 np0005541914.localdomain python3[42161]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 07:56:05 np0005541914.localdomain sudo[42159]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:05 np0005541914.localdomain sudo[42177]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hffxdsyskxbxkkzmcnptwargmsqkpqux ; /usr/bin/python3
Dec 02 07:56:05 np0005541914.localdomain sudo[42177]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:05 np0005541914.localdomain python3[42179]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 07:56:05 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:56:06 np0005541914.localdomain systemd-rc-local-generator[42205]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:56:06 np0005541914.localdomain systemd-sysv-generator[42211]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:56:06 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:56:06 np0005541914.localdomain systemd[1]: Starting Netfilter Tables...
Dec 02 07:56:06 np0005541914.localdomain systemd[1]: Finished Netfilter Tables.
Dec 02 07:56:06 np0005541914.localdomain sudo[42177]: pam_unix(sudo:session): session closed for user root
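[Annotation] This run of tasks switches the host's firewall stack: firewalld is removed, the legacy iptables/ip6tables services are stopped and disabled, and nftables is enabled and started (the systemd "Reloading" and "Netfilter Tables" messages above are the result of that enable/start). Outside Ansible, a roughly equivalent sequence would be:
                                                         # Remove firewalld and retire the legacy iptables services.
                                                         dnf -y remove firewalld
                                                         systemctl disable --now iptables.service ip6tables.service
                                                         # Enable and start the native nftables service instead.
                                                         systemctl enable --now nftables.service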
Dec 02 07:56:06 np0005541914.localdomain sudo[42267]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-srerndwktgyprnxqbqqffpncralnbevy ; /usr/bin/python3
Dec 02 07:56:06 np0005541914.localdomain sudo[42267]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:07 np0005541914.localdomain python3[42269]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:56:07 np0005541914.localdomain sudo[42267]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:07 np0005541914.localdomain sudo[42310]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wujcemsoluzulcfignljxurfigssgvmh ; /usr/bin/python3
Dec 02 07:56:07 np0005541914.localdomain sudo[42310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:07 np0005541914.localdomain python3[42312]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662166.8353975-73763-261211308044903/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:07 np0005541914.localdomain sudo[42310]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:07 np0005541914.localdomain sudo[42340]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rcoupbcssosnvtabzppnjrrfetrbvmmb ; /usr/bin/python3
Dec 02 07:56:07 np0005541914.localdomain sudo[42340]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:07 np0005541914.localdomain python3[42342]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:56:07 np0005541914.localdomain sudo[42340]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:08 np0005541914.localdomain sudo[42358]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-linbhpufsypvqyajwkwojrnnwbjcyfwf ; /usr/bin/python3
Dec 02 07:56:08 np0005541914.localdomain sudo[42358]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:08 np0005541914.localdomain python3[42360]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:56:08 np0005541914.localdomain sudo[42358]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:08 np0005541914.localdomain sudo[42407]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ennwkrtnkbxrtjpprklwyydloseucabh ; /usr/bin/python3
Dec 02 07:56:08 np0005541914.localdomain sudo[42407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:08 np0005541914.localdomain python3[42409]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:56:08 np0005541914.localdomain sudo[42407]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:09 np0005541914.localdomain sudo[42450]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-opinjhdfbfgxopseukfqkhbftzhgkcpy ; /usr/bin/python3
Dec 02 07:56:09 np0005541914.localdomain sudo[42450]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:09 np0005541914.localdomain python3[42452]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662168.6203887-74106-225751472549208/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:09 np0005541914.localdomain sudo[42450]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:09 np0005541914.localdomain sudo[42512]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mfpactoelzedhvdgalzthsalrihnhiww ; /usr/bin/python3
Dec 02 07:56:09 np0005541914.localdomain sudo[42512]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:09 np0005541914.localdomain python3[42514]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:56:09 np0005541914.localdomain sudo[42512]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:10 np0005541914.localdomain sudo[42555]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xdvlkvzgtztogtmuurksleqxcfbmkxqr ; /usr/bin/python3
Dec 02 07:56:10 np0005541914.localdomain sudo[42555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:10 np0005541914.localdomain python3[42557]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662169.5707421-74163-57053949448765/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:10 np0005541914.localdomain sudo[42555]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:10 np0005541914.localdomain sudo[42617]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvbxafdlhdkrntmzpeymdlvnxykgwzci ; /usr/bin/python3
Dec 02 07:56:10 np0005541914.localdomain sudo[42617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:10 np0005541914.localdomain python3[42619]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:56:10 np0005541914.localdomain sudo[42617]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:11 np0005541914.localdomain sudo[42660]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-henymofehnadxijuwluqooslnmypnaae ; /usr/bin/python3
Dec 02 07:56:11 np0005541914.localdomain sudo[42660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:11 np0005541914.localdomain python3[42662]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662170.516338-74218-116552651574724/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:11 np0005541914.localdomain sudo[42660]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:11 np0005541914.localdomain sudo[42722]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mqkxxpnxwkweslcvnaimpnqpbgplylaa ; /usr/bin/python3
Dec 02 07:56:11 np0005541914.localdomain sudo[42722]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:11 np0005541914.localdomain python3[42724]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:56:11 np0005541914.localdomain sudo[42722]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:11 np0005541914.localdomain sudo[42765]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqtkdyxblbekgjsetlrzovijvnmlmgxx ; /usr/bin/python3
Dec 02 07:56:11 np0005541914.localdomain sudo[42765]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:12 np0005541914.localdomain python3[42767]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662171.4040427-74280-249049247323966/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:12 np0005541914.localdomain sudo[42765]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:12 np0005541914.localdomain sudo[42827]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-witterokydatcmljwsiwcwiwskmbkpby ; /usr/bin/python3
Dec 02 07:56:12 np0005541914.localdomain sudo[42827]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:12 np0005541914.localdomain python3[42829]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:56:12 np0005541914.localdomain sudo[42827]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:13 np0005541914.localdomain sudo[42870]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqvwodrjspzkcesrjmjnngfokzjutskp ; /usr/bin/python3
Dec 02 07:56:13 np0005541914.localdomain sudo[42870]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:13 np0005541914.localdomain python3[42872]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662172.277965-74320-21086605098136/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:13 np0005541914.localdomain sudo[42870]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:13 np0005541914.localdomain sudo[42900]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfijyuyzohyqodezbzxukiaqriunxwot ; /usr/bin/python3
Dec 02 07:56:13 np0005541914.localdomain sudo[42900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:13 np0005541914.localdomain python3[42902]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:56:14 np0005541914.localdomain sudo[42900]: pam_unix(sudo:session): session closed for user root
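[Annotation] Before anything is loaded, the generated fragments are concatenated in include order and fed to nft in check-only mode, so a syntax or reference error aborts the play without touching the live ruleset. The same dry run can be repeated by hand:
                                                         # Validate the combined TripleO ruleset without applying it (-c = check only).
                                                         cat /etc/nftables/tripleo-chains.nft \
                                                             /etc/nftables/tripleo-flushes.nft \
                                                             /etc/nftables/tripleo-rules.nft \
                                                             /etc/nftables/tripleo-update-jumps.nft \
                                                             /etc/nftables/tripleo-jumps.nft | nft -c -f -
The apply step at 07:56:15 below then loads only the flushes, rules and update-jumps fragments with nft -f -, since the chain definitions were already loaded on their own at 07:56:14.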
Dec 02 07:56:14 np0005541914.localdomain sudo[42965]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jvsaarkenyqegtelvrhwjvivwjohhucm ; /usr/bin/python3
Dec 02 07:56:14 np0005541914.localdomain sudo[42965]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:14 np0005541914.localdomain python3[42967]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"
                                                         include "/etc/nftables/tripleo-chains.nft"
                                                         include "/etc/nftables/tripleo-rules.nft"
                                                         include "/etc/nftables/tripleo-jumps.nft"
                                                          state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:14 np0005541914.localdomain sudo[42965]: pam_unix(sudo:session): session closed for user root
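[Annotation] The blockinfile task makes the ruleset persistent by appending an Ansible-managed block of include statements to /etc/sysconfig/nftables.conf, validating the candidate file with nft -c -f %s before writing it. Given the parameters logged above, the resulting block in that file should read:
                                                         # BEGIN ANSIBLE MANAGED BLOCK
                                                         include "/etc/nftables/iptables.nft"
                                                         include "/etc/nftables/tripleo-chains.nft"
                                                         include "/etc/nftables/tripleo-rules.nft"
                                                         include "/etc/nftables/tripleo-jumps.nft"
                                                         # END ANSIBLE MANAGED BLOCK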
Dec 02 07:56:14 np0005541914.localdomain sudo[42982]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxqkihiohyweottjiwvpfhpcprgofslq ; /usr/bin/python3
Dec 02 07:56:14 np0005541914.localdomain sudo[42982]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:14 np0005541914.localdomain python3[42984]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:56:14 np0005541914.localdomain sudo[42982]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:15 np0005541914.localdomain sudo[42999]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zxlbaudcncnibqeydwnyskhqvjllbmke ; /usr/bin/python3
Dec 02 07:56:15 np0005541914.localdomain sudo[42999]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:15 np0005541914.localdomain python3[43001]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:56:15 np0005541914.localdomain sudo[42999]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:15 np0005541914.localdomain sudo[43018]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjiqnxcdhlatbwchazrqmfvtkmzxhgxm ; /usr/bin/python3
Dec 02 07:56:15 np0005541914.localdomain sudo[43018]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:15 np0005541914.localdomain python3[43020]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:15 np0005541914.localdomain sudo[43018]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:15 np0005541914.localdomain sudo[43034]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qelolidrimyjkxqxmyseirjyhuwonsaj ; /usr/bin/python3
Dec 02 07:56:15 np0005541914.localdomain sudo[43034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:16 np0005541914.localdomain python3[43036]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:16 np0005541914.localdomain sudo[43034]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:16 np0005541914.localdomain sudo[43050]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ggcxladbxnjxudevhamcxavcqwmlkamy ; /usr/bin/python3
Dec 02 07:56:16 np0005541914.localdomain sudo[43050]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:16 np0005541914.localdomain python3[43052]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:16 np0005541914.localdomain sudo[43050]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:16 np0005541914.localdomain sudo[43066]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twjetndjckofusbfitsrxpbkllxnscis ; /usr/bin/python3
Dec 02 07:56:16 np0005541914.localdomain sudo[43066]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:16 np0005541914.localdomain python3[43068]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 02 07:56:17 np0005541914.localdomain sudo[43066]: pam_unix(sudo:session): session closed for user root
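[Annotation] The seboolean task persistently enables virt_sandbox_use_netlink so containerized services can open netlink sockets under SELinux; roughly equivalent to:
                                                         # Persistently (-P) enable the SELinux boolean for netlink use in container sandboxes.
                                                         setsebool -P virt_sandbox_use_netlink on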
Dec 02 07:56:17 np0005541914.localdomain sudo[43086]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtybxlcbdtndemwahmfikrcodenjzwso ; /usr/bin/python3
Dec 02 07:56:17 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Dec 02 07:56:17 np0005541914.localdomain sudo[43086]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:18 np0005541914.localdomain python3[43088]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 02 07:56:18 np0005541914.localdomain kernel: SELinux:  Converting 2703 SID table entries...
Dec 02 07:56:18 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 07:56:18 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 07:56:18 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 07:56:18 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 07:56:18 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 07:56:18 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 07:56:18 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 07:56:18 np0005541914.localdomain sudo[43086]: pam_unix(sudo:session): session closed for user root
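[Annotation] The sefcontext tasks (here and at 07:56:19 and 07:56:21 below) add persistent SELinux file-context rules labelling /etc/iscsi, /etc/target and /var/lib/iscsi as container_file_t; each change rebuilds and reloads the policy, which is what the kernel "Converting 2703 SID table entries" messages reflect. The ansible-file tasks that follow then apply the label to the existing directories. By hand this would be approximately (repeat per path):
                                                         # Register a persistent file-context rule and relabel the tree.
                                                         semanage fcontext -a -t container_file_t '/etc/iscsi(/.*)?'
                                                         restorecon -Rv /etc/iscsi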
Dec 02 07:56:18 np0005541914.localdomain sudo[43107]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tcauwpxqjzsxyndtmhjyfnwqvdvdldqc ; /usr/bin/python3
Dec 02 07:56:18 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 02 07:56:18 np0005541914.localdomain sudo[43107]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:19 np0005541914.localdomain python3[43109]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 02 07:56:19 np0005541914.localdomain kernel: SELinux:  Converting 2703 SID table entries...
Dec 02 07:56:19 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 07:56:19 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 07:56:19 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 07:56:19 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 07:56:19 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 07:56:19 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 07:56:19 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 07:56:20 np0005541914.localdomain sudo[43107]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:20 np0005541914.localdomain sudo[43128]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ebtewovxnxregvriixmxybqtslkckuvm ; /usr/bin/python3
Dec 02 07:56:20 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 02 07:56:20 np0005541914.localdomain sudo[43128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:20 np0005541914.localdomain python3[43130]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 02 07:56:21 np0005541914.localdomain kernel: SELinux:  Converting 2703 SID table entries...
Dec 02 07:56:21 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 07:56:21 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 07:56:21 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 07:56:21 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 07:56:21 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 07:56:21 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 07:56:21 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 07:56:21 np0005541914.localdomain sudo[43128]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:21 np0005541914.localdomain sudo[43149]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mclcpspfwpkkotvkbinrwlkcvhjvjlxr ; /usr/bin/python3
Dec 02 07:56:21 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 02 07:56:21 np0005541914.localdomain sudo[43149]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:21 np0005541914.localdomain python3[43151]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:21 np0005541914.localdomain sudo[43149]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:21 np0005541914.localdomain sudo[43165]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zybagthanyzgwyndphriqjaovskuroqy ; /usr/bin/python3
Dec 02 07:56:21 np0005541914.localdomain sudo[43165]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:22 np0005541914.localdomain python3[43167]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:22 np0005541914.localdomain sudo[43165]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:22 np0005541914.localdomain sudo[43181]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djzlpkrpsxgnrtdklangbihfuvgpwzdl ; /usr/bin/python3
Dec 02 07:56:22 np0005541914.localdomain sudo[43181]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:22 np0005541914.localdomain python3[43183]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:22 np0005541914.localdomain sudo[43181]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:22 np0005541914.localdomain sshd[43198]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:56:22 np0005541914.localdomain sudo[43197]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qyweylzqsuqhdtipzolvqvoijxpjltbe ; /usr/bin/python3
Dec 02 07:56:22 np0005541914.localdomain sudo[43197]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:22 np0005541914.localdomain python3[43201]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:56:22 np0005541914.localdomain sudo[43197]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:23 np0005541914.localdomain sudo[43215]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwchqqlycjfgevizkqilrmcknyerhibq ; /usr/bin/python3
Dec 02 07:56:23 np0005541914.localdomain sudo[43215]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:23 np0005541914.localdomain python3[43217]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:56:23 np0005541914.localdomain sudo[43215]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:23 np0005541914.localdomain sudo[43232]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwowdvcbgsjmbcjlrpkguuzxbxxumqdd ; /usr/bin/python3
Dec 02 07:56:23 np0005541914.localdomain sudo[43232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:23 np0005541914.localdomain sshd[43198]: Invalid user openbravo from 182.253.156.173 port 54936
Dec 02 07:56:24 np0005541914.localdomain python3[43234]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:56:24 np0005541914.localdomain sshd[43198]: Received disconnect from 182.253.156.173 port 54936:11: Bye Bye [preauth]
Dec 02 07:56:24 np0005541914.localdomain sshd[43198]: Disconnected from invalid user openbravo 182.253.156.173 port 54936 [preauth]
Dec 02 07:56:25 np0005541914.localdomain sshd[43236]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:56:25 np0005541914.localdomain sshd[43236]: Invalid user sol from 45.148.10.240 port 50974
Dec 02 07:56:25 np0005541914.localdomain sshd[43236]: Connection closed by invalid user sol 45.148.10.240 port 50974 [preauth]
Dec 02 07:56:27 np0005541914.localdomain sudo[43232]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:27 np0005541914.localdomain sudo[43251]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zdbribttrlcseuwpsjfinksakopqxfaq ; /usr/bin/python3
Dec 02 07:56:27 np0005541914.localdomain sudo[43251]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:27 np0005541914.localdomain python3[43253]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:28 np0005541914.localdomain sudo[43251]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:28 np0005541914.localdomain sudo[43299]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hjaazzsjqzgaqiwjvxsmhhurnbfkbbir ; /usr/bin/python3
Dec 02 07:56:28 np0005541914.localdomain sudo[43299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:28 np0005541914.localdomain python3[43301]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:56:28 np0005541914.localdomain sudo[43299]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:28 np0005541914.localdomain sudo[43342]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dzwdysxyzologifoqlllhhommvnmabts ; /usr/bin/python3
Dec 02 07:56:28 np0005541914.localdomain sudo[43342]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:28 np0005541914.localdomain python3[43344]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662188.167698-75157-160996103504889/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:28 np0005541914.localdomain sudo[43342]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:29 np0005541914.localdomain sudo[43372]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hviipkuofyslhmahyelmdstmercpdjmi ; /usr/bin/python3
Dec 02 07:56:29 np0005541914.localdomain sudo[43372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:29 np0005541914.localdomain python3[43374]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 07:56:29 np0005541914.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 02 07:56:29 np0005541914.localdomain systemd[1]: Stopped Load Kernel Modules.
Dec 02 07:56:29 np0005541914.localdomain systemd[1]: Stopping Load Kernel Modules...
Dec 02 07:56:29 np0005541914.localdomain systemd[1]: Starting Load Kernel Modules...
Dec 02 07:56:29 np0005541914.localdomain kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 02 07:56:29 np0005541914.localdomain systemd-modules-load[43377]: Inserted module 'br_netfilter'
Dec 02 07:56:29 np0005541914.localdomain kernel: Bridge firewalling registered
Dec 02 07:56:29 np0005541914.localdomain systemd-modules-load[43377]: Module 'msr' is built in
Dec 02 07:56:29 np0005541914.localdomain systemd[1]: Finished Load Kernel Modules.
Dec 02 07:56:29 np0005541914.localdomain sudo[43372]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:29 np0005541914.localdomain sudo[43426]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwrrvnhkwejneovquldfmqsbwwzhqysq ; /usr/bin/python3
Dec 02 07:56:29 np0005541914.localdomain sudo[43426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:29 np0005541914.localdomain python3[43428]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:56:29 np0005541914.localdomain sudo[43426]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:30 np0005541914.localdomain sudo[43469]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ikaxwhhwrkfadtseicpbdbqelrmtqxaq ; /usr/bin/python3
Dec 02 07:56:30 np0005541914.localdomain sudo[43469]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:30 np0005541914.localdomain python3[43471]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662189.5857527-75223-53451697338259/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:30 np0005541914.localdomain sudo[43469]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:30 np0005541914.localdomain sudo[43499]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-stftqommgsqplfztjpabozofrpsvalwl ; /usr/bin/python3
Dec 02 07:56:30 np0005541914.localdomain sudo[43499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:30 np0005541914.localdomain python3[43501]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:30 np0005541914.localdomain sudo[43499]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:31 np0005541914.localdomain sudo[43516]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bblsvzjdubjpgzraniqfqhadpprjptdu ; /usr/bin/python3
Dec 02 07:56:31 np0005541914.localdomain sudo[43516]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:31 np0005541914.localdomain python3[43518]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:31 np0005541914.localdomain sudo[43516]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:31 np0005541914.localdomain sudo[43534]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tapnpfivjvkcpnnqkdqppphtidquatqc ; /usr/bin/python3
Dec 02 07:56:31 np0005541914.localdomain sudo[43534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:31 np0005541914.localdomain python3[43536]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:31 np0005541914.localdomain sudo[43534]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:31 np0005541914.localdomain sudo[43552]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ultylbezcbminjyrmcvdjebmzvwsarcr ; /usr/bin/python3
Dec 02 07:56:31 np0005541914.localdomain sudo[43552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:31 np0005541914.localdomain python3[43554]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:32 np0005541914.localdomain sudo[43552]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:32 np0005541914.localdomain sudo[43569]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lxvpbcqxbmwkicmeuctcabkgemwuyodn ; /usr/bin/python3
Dec 02 07:56:32 np0005541914.localdomain sudo[43569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:33 np0005541914.localdomain python3[43571]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:33 np0005541914.localdomain sudo[43569]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:33 np0005541914.localdomain sudo[43586]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fvbvqkiowwghaljmtqmlqloikodbmtxu ; /usr/bin/python3
Dec 02 07:56:33 np0005541914.localdomain sudo[43586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:33 np0005541914.localdomain python3[43588]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:33 np0005541914.localdomain sudo[43586]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:33 np0005541914.localdomain sudo[43603]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-abdhjjsddccjkeajmbngdgakqzkylfgh ; /usr/bin/python3
Dec 02 07:56:33 np0005541914.localdomain sudo[43603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:33 np0005541914.localdomain python3[43605]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:33 np0005541914.localdomain sudo[43603]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:33 np0005541914.localdomain sudo[43621]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kwasnolfqhgqhbomyufxdvpcyiqpxswe ; /usr/bin/python3
Dec 02 07:56:33 np0005541914.localdomain sudo[43621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:33 np0005541914.localdomain python3[43623]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:34 np0005541914.localdomain sudo[43621]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:34 np0005541914.localdomain sudo[43639]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nzxryactjfjxlpqaaiwduxcneqazjvfp ; /usr/bin/python3
Dec 02 07:56:34 np0005541914.localdomain sudo[43639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:34 np0005541914.localdomain python3[43641]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:34 np0005541914.localdomain sudo[43639]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:34 np0005541914.localdomain sudo[43657]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-awdloptagxrgtvbgmzyrkuewazozfkgm ; /usr/bin/python3
Dec 02 07:56:34 np0005541914.localdomain sudo[43657]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:34 np0005541914.localdomain python3[43659]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:34 np0005541914.localdomain sudo[43657]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:34 np0005541914.localdomain sudo[43675]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udfbjoquzwrafjtnhoudhzvqvnwleyfs ; /usr/bin/python3
Dec 02 07:56:34 np0005541914.localdomain sudo[43675]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:34 np0005541914.localdomain python3[43677]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:34 np0005541914.localdomain sudo[43675]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:35 np0005541914.localdomain sudo[43693]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qugzmubklmeoygjmnhpylxvgqoyprhbl ; /usr/bin/python3
Dec 02 07:56:35 np0005541914.localdomain sudo[43693]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:35 np0005541914.localdomain python3[43695]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:35 np0005541914.localdomain sudo[43693]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:35 np0005541914.localdomain sudo[43711]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eldlhxqgqoorvjwwbuhoqscekkcarnmo ; /usr/bin/python3
Dec 02 07:56:35 np0005541914.localdomain sudo[43711]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:35 np0005541914.localdomain python3[43713]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:35 np0005541914.localdomain sudo[43711]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:35 np0005541914.localdomain sudo[43729]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bxwrrdoxfttamrxcmiwxebkddmltfgha ; /usr/bin/python3
Dec 02 07:56:35 np0005541914.localdomain sudo[43729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:35 np0005541914.localdomain python3[43731]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:35 np0005541914.localdomain sudo[43729]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:36 np0005541914.localdomain sudo[43746]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqoxryjvvxxjmahotrwhozekanmzaffp ; /usr/bin/python3
Dec 02 07:56:36 np0005541914.localdomain sudo[43746]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:36 np0005541914.localdomain python3[43748]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:36 np0005541914.localdomain sudo[43746]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:36 np0005541914.localdomain sudo[43763]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxdfeecehbdmgeuemmrvvhyxiwmetjfb ; /usr/bin/python3
Dec 02 07:56:36 np0005541914.localdomain sudo[43763]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:36 np0005541914.localdomain python3[43765]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:36 np0005541914.localdomain sudo[43763]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:36 np0005541914.localdomain sudo[43780]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jsugkbuzhzygwzrkwjzmlfxysvyqyicu ; /usr/bin/python3
Dec 02 07:56:36 np0005541914.localdomain sudo[43780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:36 np0005541914.localdomain python3[43782]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:36 np0005541914.localdomain sudo[43780]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:36 np0005541914.localdomain sudo[43797]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfvrmcujmrunxgdorgniuuwxiyzxtelv ; /usr/bin/python3
Dec 02 07:56:36 np0005541914.localdomain sudo[43797]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:37 np0005541914.localdomain python3[43799]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Dec 02 07:56:37 np0005541914.localdomain sudo[43797]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:37 np0005541914.localdomain sudo[43815]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlrdjwswkvehmkxxllimxpkntyuafuhk ; /usr/bin/python3
Dec 02 07:56:37 np0005541914.localdomain sudo[43815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:37 np0005541914.localdomain python3[43817]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 07:56:37 np0005541914.localdomain systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 02 07:56:37 np0005541914.localdomain systemd[1]: Stopped Apply Kernel Variables.
Dec 02 07:56:37 np0005541914.localdomain systemd[1]: Stopping Apply Kernel Variables...
Dec 02 07:56:37 np0005541914.localdomain systemd[1]: Starting Apply Kernel Variables...
Dec 02 07:56:37 np0005541914.localdomain systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 02 07:56:37 np0005541914.localdomain systemd[1]: Finished Apply Kernel Variables.
Dec 02 07:56:37 np0005541914.localdomain sudo[43815]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:37 np0005541914.localdomain sudo[43835]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmxkjuqkygyfouzoztbdvypdbgoxwqtn ; /usr/bin/python3
Dec 02 07:56:37 np0005541914.localdomain sudo[43835]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:37 np0005541914.localdomain python3[43837]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:37 np0005541914.localdomain sudo[43835]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:38 np0005541914.localdomain sudo[43851]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hifjiwstvucwojwjvpsotlbdfsxolrgk ; /usr/bin/python3
Dec 02 07:56:38 np0005541914.localdomain sudo[43851]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:38 np0005541914.localdomain python3[43853]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:38 np0005541914.localdomain sudo[43851]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:38 np0005541914.localdomain sudo[43867]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odcmrwxdyftghiewykpckqisagaebdzk ; /usr/bin/python3
Dec 02 07:56:38 np0005541914.localdomain sudo[43867]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:38 np0005541914.localdomain python3[43869]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:38 np0005541914.localdomain sudo[43867]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:38 np0005541914.localdomain sudo[43883]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iybzwrhqfmttycrrwqjusaamatqetqvq ; /usr/bin/python3
Dec 02 07:56:38 np0005541914.localdomain sudo[43883]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:38 np0005541914.localdomain python3[43885]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:56:38 np0005541914.localdomain sudo[43883]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:39 np0005541914.localdomain sudo[43899]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-idkbnfccrgmvwwsomxyqkbittlysfvse ; /usr/bin/python3
Dec 02 07:56:39 np0005541914.localdomain sudo[43899]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:39 np0005541914.localdomain python3[43901]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:39 np0005541914.localdomain sudo[43899]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:39 np0005541914.localdomain sudo[43915]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-otfjyvapqljdkrxzfgmoshsybedemcrt ; /usr/bin/python3
Dec 02 07:56:39 np0005541914.localdomain sudo[43915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:39 np0005541914.localdomain python3[43917]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:39 np0005541914.localdomain sudo[43915]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:39 np0005541914.localdomain sudo[43931]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wcqmwwaaaklachaudvybptxwyhlpsfjq ; /usr/bin/python3
Dec 02 07:56:39 np0005541914.localdomain sudo[43931]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:39 np0005541914.localdomain python3[43933]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:39 np0005541914.localdomain sudo[43931]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:39 np0005541914.localdomain sudo[43947]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhvezskcleocibsmjagozkwkykcjgbip ; /usr/bin/python3
Dec 02 07:56:39 np0005541914.localdomain sudo[43947]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:40 np0005541914.localdomain python3[43949]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:40 np0005541914.localdomain sudo[43947]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:40 np0005541914.localdomain sudo[43963]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rriijdgfsatxsesimyranzhpxzceifvl ; /usr/bin/python3
Dec 02 07:56:40 np0005541914.localdomain sudo[43963]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:40 np0005541914.localdomain python3[43965]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:40 np0005541914.localdomain sudo[43963]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:40 np0005541914.localdomain sudo[44011]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dmqoeeqhhpujfatghmurzszkzlrxfnrp ; /usr/bin/python3
Dec 02 07:56:40 np0005541914.localdomain sudo[44011]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:40 np0005541914.localdomain python3[44013]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:56:40 np0005541914.localdomain sudo[44011]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:41 np0005541914.localdomain sudo[44054]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jkxukdinmegjpfapjemqcgyopxeraqel ; /usr/bin/python3
Dec 02 07:56:41 np0005541914.localdomain sudo[44054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:41 np0005541914.localdomain python3[44056]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662200.5573525-75723-51736172754302/source _original_basename=tmpbfbm_gjy follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:41 np0005541914.localdomain sudo[44054]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:41 np0005541914.localdomain sudo[44084]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ihafradqzartfyjujlxuzeqwyjbrictk ; /usr/bin/python3
Dec 02 07:56:41 np0005541914.localdomain sudo[44084]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:41 np0005541914.localdomain python3[44086]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:56:41 np0005541914.localdomain sudo[44084]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:43 np0005541914.localdomain sudo[44101]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqmltaygsefuheluewohicpjcxibmbvv ; /usr/bin/python3
Dec 02 07:56:43 np0005541914.localdomain sudo[44101]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:43 np0005541914.localdomain python3[44103]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:43 np0005541914.localdomain sudo[44101]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:43 np0005541914.localdomain sudo[44149]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-whdbumwyjojmmenyhsmhdvcheqihpqmf ; /usr/bin/python3
Dec 02 07:56:43 np0005541914.localdomain sudo[44149]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:43 np0005541914.localdomain python3[44151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:56:43 np0005541914.localdomain sudo[44149]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:44 np0005541914.localdomain sudo[44192]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mnfidzakbkjtszvalalgotgyyvmnxnqn ; /usr/bin/python3
Dec 02 07:56:44 np0005541914.localdomain sudo[44192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:44 np0005541914.localdomain python3[44194]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662203.7394264-75924-52890893692572/source _original_basename=tmppwqg972p follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:44 np0005541914.localdomain sudo[44192]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:44 np0005541914.localdomain sudo[44222]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aapkhsibicghoiqmxdsatijihivngmae ; /usr/bin/python3
Dec 02 07:56:44 np0005541914.localdomain sudo[44222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:44 np0005541914.localdomain python3[44224]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:44 np0005541914.localdomain sudo[44222]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:44 np0005541914.localdomain sudo[44238]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lxjaomrcdydxbncnvwhyachubiopfogu ; /usr/bin/python3
Dec 02 07:56:44 np0005541914.localdomain sudo[44238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:45 np0005541914.localdomain python3[44240]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:45 np0005541914.localdomain sudo[44238]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:45 np0005541914.localdomain sudo[44254]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hfzajloopavavyvzuxgephhqglugmgin ; /usr/bin/python3
Dec 02 07:56:45 np0005541914.localdomain sudo[44254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:45 np0005541914.localdomain python3[44256]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:45 np0005541914.localdomain sudo[44254]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:45 np0005541914.localdomain sudo[44270]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xaxyfegvgnmkwjbttzzzewjazcowhkkk ; /usr/bin/python3
Dec 02 07:56:45 np0005541914.localdomain sudo[44270]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:45 np0005541914.localdomain python3[44272]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:45 np0005541914.localdomain sudo[44270]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:45 np0005541914.localdomain sudo[44286]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xhfgeiumuajnrncpgezgfhfahmaqxjpd ; /usr/bin/python3
Dec 02 07:56:45 np0005541914.localdomain sudo[44286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:46 np0005541914.localdomain python3[44288]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:46 np0005541914.localdomain sudo[44286]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:46 np0005541914.localdomain sudo[44302]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tifiqexpqtvfpuedhjdxuvwgzvgpqcfl ; /usr/bin/python3
Dec 02 07:56:46 np0005541914.localdomain sudo[44302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:46 np0005541914.localdomain python3[44304]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:46 np0005541914.localdomain sudo[44302]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:46 np0005541914.localdomain sudo[44318]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xpalpucguczsvzshagwxhxmqfdkujjjo ; /usr/bin/python3
Dec 02 07:56:46 np0005541914.localdomain sudo[44318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:46 np0005541914.localdomain python3[44320]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:46 np0005541914.localdomain sudo[44318]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:46 np0005541914.localdomain sudo[44334]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-afgojirdsyagqrcuhkzvfwzkxwsiyxgg ; /usr/bin/python3
Dec 02 07:56:46 np0005541914.localdomain sudo[44334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:46 np0005541914.localdomain python3[44336]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:46 np0005541914.localdomain sudo[44334]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:47 np0005541914.localdomain sudo[44350]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhpjmubdrimotfhmjsbnvgrypkutgsmp ; /usr/bin/python3
Dec 02 07:56:47 np0005541914.localdomain sudo[44350]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:47 np0005541914.localdomain python3[44352]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:47 np0005541914.localdomain sudo[44350]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:47 np0005541914.localdomain sudo[44353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:56:47 np0005541914.localdomain sudo[44353]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:56:47 np0005541914.localdomain sudo[44353]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:47 np0005541914.localdomain sshd[44375]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:56:47 np0005541914.localdomain sudo[44368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 07:56:47 np0005541914.localdomain sudo[44368]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:56:47 np0005541914.localdomain sudo[44397]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swrmgjjrdrtjjptwhfitfbqgrlyfhdpt ; /usr/bin/python3
Dec 02 07:56:47 np0005541914.localdomain sudo[44397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:47 np0005541914.localdomain python3[44400]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False
Dec 02 07:56:47 np0005541914.localdomain groupadd[44401]: group added to /etc/group: name=qemu, GID=107
Dec 02 07:56:47 np0005541914.localdomain groupadd[44401]: group added to /etc/gshadow: name=qemu
Dec 02 07:56:47 np0005541914.localdomain groupadd[44401]: new group: name=qemu, GID=107
Dec 02 07:56:47 np0005541914.localdomain sudo[44397]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:47 np0005541914.localdomain sudo[44368]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:47 np0005541914.localdomain sudo[44440]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hcjvuyiwbuonkjuuoyawurtjbrwnhive ; /usr/bin/python3
Dec 02 07:56:47 np0005541914.localdomain sudo[44440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:47 np0005541914.localdomain sudo[44443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:56:47 np0005541914.localdomain sudo[44443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:56:47 np0005541914.localdomain sudo[44443]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:48 np0005541914.localdomain sudo[44458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 07:56:48 np0005541914.localdomain sudo[44458]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:56:48 np0005541914.localdomain python3[44442]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541914.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 02 07:56:48 np0005541914.localdomain useradd[44474]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=none
Dec 02 07:56:48 np0005541914.localdomain sudo[44440]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:48 np0005541914.localdomain sudo[44494]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ykhuiifcciksfrmfwltpepnuktaozyxm ; /usr/bin/python3
Dec 02 07:56:48 np0005541914.localdomain sudo[44494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:48 np0005541914.localdomain python3[44497]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None
Dec 02 07:56:48 np0005541914.localdomain sudo[44494]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:48 np0005541914.localdomain sudo[44458]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:48 np0005541914.localdomain sudo[44542]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lamytpdewfcyenxulefjzcccwqvhnaqm ; /usr/bin/python3
Dec 02 07:56:48 np0005541914.localdomain sudo[44542]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:48 np0005541914.localdomain python3[44544]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:56:48 np0005541914.localdomain sudo[44542]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:49 np0005541914.localdomain sshd[44375]: Received disconnect from 103.52.115.25 port 49918:11: Bye Bye [preauth]
Dec 02 07:56:49 np0005541914.localdomain sshd[44375]: Disconnected from authenticating user root 103.52.115.25 port 49918 [preauth]
Dec 02 07:56:49 np0005541914.localdomain sudo[44591]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nokpqarhdjeuiuuqcvgeeivakftjkicm ; /usr/bin/python3
Dec 02 07:56:49 np0005541914.localdomain sudo[44591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:49 np0005541914.localdomain sudo[44594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:56:49 np0005541914.localdomain sudo[44594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:56:49 np0005541914.localdomain sudo[44594]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:49 np0005541914.localdomain python3[44593]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:56:49 np0005541914.localdomain sudo[44591]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:49 np0005541914.localdomain sudo[44649]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqapjiqvvqwqmcrqdoyejnoxszxarihj ; /usr/bin/python3
Dec 02 07:56:49 np0005541914.localdomain sudo[44649]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:49 np0005541914.localdomain python3[44651]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662209.1537104-76193-38631552878565/source _original_basename=tmpfmlija7_ follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:49 np0005541914.localdomain sudo[44649]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:50 np0005541914.localdomain sudo[44679]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tmcgoxaxlimgsxtexjoxtpuwsfbifnbh ; /usr/bin/python3
Dec 02 07:56:50 np0005541914.localdomain sudo[44679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:50 np0005541914.localdomain python3[44681]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 02 07:56:50 np0005541914.localdomain sudo[44679]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:51 np0005541914.localdomain sudo[44700]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wsbdcvwnbdhmiwqageqorcqjfkreetxl ; /usr/bin/python3
Dec 02 07:56:51 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 02 07:56:51 np0005541914.localdomain sudo[44700]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:51 np0005541914.localdomain python3[44702]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:51 np0005541914.localdomain sudo[44700]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:51 np0005541914.localdomain sudo[44716]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zdaqdglymllwmbigbsavnglnliqhvtcm ; /usr/bin/python3
Dec 02 07:56:51 np0005541914.localdomain sudo[44716]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:51 np0005541914.localdomain python3[44718]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:51 np0005541914.localdomain sudo[44716]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:51 np0005541914.localdomain sudo[44732]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-drbwwdoymtuvytofzvbcuxvhlrkvcimc ; /usr/bin/python3
Dec 02 07:56:51 np0005541914.localdomain sudo[44732]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:51 np0005541914.localdomain python3[44734]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False
Dec 02 07:56:52 np0005541914.localdomain sudo[44732]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:53 np0005541914.localdomain sudo[44752]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqnempmngrgoqqmqtsquciovlahtvcqe ; /usr/bin/python3
Dec 02 07:56:53 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec 02 07:56:53 np0005541914.localdomain sudo[44752]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:53 np0005541914.localdomain python3[44754]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:56:56 np0005541914.localdomain sudo[44752]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:56 np0005541914.localdomain sudo[44769]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jrtuiabmywjsfapcalklnxwpeaigzhjv ; /usr/bin/python3
Dec 02 07:56:56 np0005541914.localdomain sudo[44769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:56 np0005541914.localdomain python3[44771]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 07:56:56 np0005541914.localdomain sudo[44769]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:56 np0005541914.localdomain sudo[44830]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-omajdfnbgwcsckofrbpwiltanmsrehon ; /usr/bin/python3
Dec 02 07:56:56 np0005541914.localdomain sudo[44830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:57 np0005541914.localdomain python3[44832]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:57 np0005541914.localdomain sudo[44830]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:57 np0005541914.localdomain sudo[44846]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vbofxhcitzzgvjlvxjgckxopufatqzsa ; /usr/bin/python3
Dec 02 07:56:57 np0005541914.localdomain sudo[44846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:57 np0005541914.localdomain python3[44848]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:56:57 np0005541914.localdomain sudo[44846]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:57 np0005541914.localdomain sudo[44905]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yzrwlefasygqbtjroewhbepstirwnkqy ; /usr/bin/python3
Dec 02 07:56:57 np0005541914.localdomain sudo[44905]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:57 np0005541914.localdomain python3[44907]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:56:58 np0005541914.localdomain sudo[44905]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:58 np0005541914.localdomain sudo[44948]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dioedkypevuvyxeoaxxjzvpiszmvhahd ; /usr/bin/python3
Dec 02 07:56:58 np0005541914.localdomain sudo[44948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:58 np0005541914.localdomain python3[44950]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662217.5876255-76516-27656382419816/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=b4295d801cd8c23bfe072937c7f9f133ab6cb946 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:56:58 np0005541914.localdomain sudo[44948]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:58 np0005541914.localdomain sudo[45010]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jprhinvopzknwbeoxzcqojrhpuzxoxvx ; /usr/bin/python3
Dec 02 07:56:58 np0005541914.localdomain sudo[45010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:58 np0005541914.localdomain python3[45012]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:56:58 np0005541914.localdomain sudo[45010]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:59 np0005541914.localdomain sudo[45055]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ichanjpognrewpgykidberrqjozkbkqa ; /usr/bin/python3
Dec 02 07:56:59 np0005541914.localdomain sudo[45055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:59 np0005541914.localdomain python3[45057]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662218.5267909-76590-236212836960340/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:59 np0005541914.localdomain sudo[45055]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:59 np0005541914.localdomain sudo[45085]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lzshfoahuxrqgutwrqgbcbhaveplqqwr ; /usr/bin/python3
Dec 02 07:56:59 np0005541914.localdomain sudo[45085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:59 np0005541914.localdomain python3[45087]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:56:59 np0005541914.localdomain sudo[45085]: pam_unix(sudo:session): session closed for user root
Dec 02 07:56:59 np0005541914.localdomain sudo[45101]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pcrbwallaxoijmmssthmxkjwybsjkkmv ; /usr/bin/python3
Dec 02 07:56:59 np0005541914.localdomain sudo[45101]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:56:59 np0005541914.localdomain python3[45103]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:57:00 np0005541914.localdomain sudo[45101]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:00 np0005541914.localdomain sudo[45117]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-llrlbgytlwghcfdruihblglivbiciouj ; /usr/bin/python3
Dec 02 07:57:00 np0005541914.localdomain sudo[45117]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:00 np0005541914.localdomain python3[45119]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:57:00 np0005541914.localdomain sudo[45117]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:00 np0005541914.localdomain sudo[45133]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmvnsxicvsmgisqplaqsdozpigxqfmuv ; /usr/bin/python3
Dec 02 07:57:00 np0005541914.localdomain sudo[45133]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:00 np0005541914.localdomain python3[45135]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:57:00 np0005541914.localdomain sudo[45133]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:01 np0005541914.localdomain sudo[45181]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-glqkznibowqrwavgksfajqtmdaqghasi ; /usr/bin/python3
Dec 02 07:57:01 np0005541914.localdomain sudo[45181]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:01 np0005541914.localdomain python3[45183]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:57:01 np0005541914.localdomain sudo[45181]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:01 np0005541914.localdomain sudo[45224]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wmcbnyxigumajpqpeoimvwwqikkqaewl ; /usr/bin/python3
Dec 02 07:57:01 np0005541914.localdomain sudo[45224]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:01 np0005541914.localdomain python3[45226]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662220.9355533-76763-236914699623339/source _original_basename=tmpx9_y_bke follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:57:01 np0005541914.localdomain sudo[45224]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:01 np0005541914.localdomain sudo[45254]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vkkiptllwydglyhgqtbyajjfheyaghhr ; /usr/bin/python3
Dec 02 07:57:01 np0005541914.localdomain sudo[45254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:02 np0005541914.localdomain python3[45256]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:57:02 np0005541914.localdomain sudo[45254]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:02 np0005541914.localdomain sudo[45270]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xiyawdyvtwqlettqxwzffhlqcbcydkhk ; /usr/bin/python3
Dec 02 07:57:02 np0005541914.localdomain sudo[45270]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:02 np0005541914.localdomain python3[45272]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:57:02 np0005541914.localdomain sudo[45270]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:02 np0005541914.localdomain sudo[45286]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vbxonpcygmytbnehzyvksktmadfqcyjb ; /usr/bin/python3
Dec 02 07:57:02 np0005541914.localdomain sudo[45286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:02 np0005541914.localdomain python3[45288]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:57:05 np0005541914.localdomain sudo[45286]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:06 np0005541914.localdomain sudo[45335]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wakabvjttclmoqbmddegnyvyahetodyd ; /usr/bin/python3
Dec 02 07:57:06 np0005541914.localdomain sudo[45335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:06 np0005541914.localdomain python3[45337]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:57:06 np0005541914.localdomain sudo[45335]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:06 np0005541914.localdomain sudo[45380]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tgqodhegiyzlgmaoyxwwbeiwugpnvbci ; /usr/bin/python3
Dec 02 07:57:06 np0005541914.localdomain sudo[45380]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:06 np0005541914.localdomain python3[45382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662226.1143267-76952-181373760118912/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:57:06 np0005541914.localdomain sudo[45380]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:07 np0005541914.localdomain sudo[45411]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rquxonoreohhyozftjrpoumrfjjuayac ; /usr/bin/python3
Dec 02 07:57:07 np0005541914.localdomain sudo[45411]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:07 np0005541914.localdomain python3[45413]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 07:57:07 np0005541914.localdomain sshd[1130]: Received signal 15; terminating.
Dec 02 07:57:07 np0005541914.localdomain systemd[1]: Stopping OpenSSH server daemon...
Dec 02 07:57:07 np0005541914.localdomain systemd[1]: sshd.service: Deactivated successfully.
Dec 02 07:57:07 np0005541914.localdomain systemd[1]: Stopped OpenSSH server daemon.
Dec 02 07:57:07 np0005541914.localdomain systemd[1]: sshd.service: Consumed 3.503s CPU time, read 2.6M from disk, written 144.0K to disk.
Dec 02 07:57:07 np0005541914.localdomain systemd[1]: Stopped target sshd-keygen.target.
Dec 02 07:57:07 np0005541914.localdomain systemd[1]: Stopping sshd-keygen.target...
Dec 02 07:57:07 np0005541914.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 07:57:07 np0005541914.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 07:57:07 np0005541914.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 07:57:07 np0005541914.localdomain systemd[1]: Reached target sshd-keygen.target.
Dec 02 07:57:07 np0005541914.localdomain systemd[1]: Starting OpenSSH server daemon...
Dec 02 07:57:07 np0005541914.localdomain sshd[45417]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:57:07 np0005541914.localdomain sshd[45417]: Server listening on 0.0.0.0 port 22.
Dec 02 07:57:07 np0005541914.localdomain sshd[45417]: Server listening on :: port 22.
Dec 02 07:57:07 np0005541914.localdomain systemd[1]: Started OpenSSH server daemon.
Dec 02 07:57:07 np0005541914.localdomain sudo[45411]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:07 np0005541914.localdomain sudo[45431]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bxdgclmrxkfmxynzynnowaqxollppkgw ; /usr/bin/python3
Dec 02 07:57:07 np0005541914.localdomain sudo[45431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:07 np0005541914.localdomain python3[45433]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:57:07 np0005541914.localdomain sudo[45431]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:08 np0005541914.localdomain sudo[45449]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmhhcpqgxbxfiwtsdmqpczayspgvkydm ; /usr/bin/python3
Dec 02 07:57:08 np0005541914.localdomain sudo[45449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:08 np0005541914.localdomain python3[45451]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:57:08 np0005541914.localdomain sudo[45449]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:09 np0005541914.localdomain sudo[45467]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qvrnztzdkqspapunlkgkohltmhpzwifz ; /usr/bin/python3
Dec 02 07:57:09 np0005541914.localdomain sudo[45467]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:09 np0005541914.localdomain python3[45469]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:57:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 07:57:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Cumulative writes: 3399 writes, 16K keys, 3399 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s
                                                          Cumulative WAL: 3399 writes, 202 syncs, 16.83 writes per sync, written: 0.01 GB, 0.03 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 3399 writes, 16K keys, 3399 commit groups, 1.0 writes per commit group, ingest: 15.32 MB, 0.03 MB/s
                                                          Interval WAL: 3399 writes, 202 syncs, 16.83 writes per sync, written: 0.01 GB, 0.03 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf57610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf57610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf57610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 02 07:57:11 np0005541914.localdomain sudo[45467]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:12 np0005541914.localdomain sudo[45516]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-czkruqrkrkvgdiuxkczoysqqxxdbrimz ; /usr/bin/python3
Dec 02 07:57:12 np0005541914.localdomain sudo[45516]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:12 np0005541914.localdomain python3[45518]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:57:12 np0005541914.localdomain sudo[45516]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:12 np0005541914.localdomain sudo[45534]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfmimmdgotqbtbvgzfhirzotfexmzrer ; /usr/bin/python3
Dec 02 07:57:12 np0005541914.localdomain sudo[45534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:12 np0005541914.localdomain python3[45536]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:57:12 np0005541914.localdomain sudo[45534]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:13 np0005541914.localdomain sudo[45564]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bptwbegizbpglwkkkagzhmxivftddgtu ; /usr/bin/python3
Dec 02 07:57:13 np0005541914.localdomain sudo[45564]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:13 np0005541914.localdomain python3[45566]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 07:57:13 np0005541914.localdomain sudo[45564]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:14 np0005541914.localdomain sudo[45614]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mtljvleavszfkekdbfmokwcqkebanbcc ; /usr/bin/python3
Dec 02 07:57:14 np0005541914.localdomain sudo[45614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:14 np0005541914.localdomain python3[45616]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:57:14 np0005541914.localdomain sudo[45614]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:14 np0005541914.localdomain sudo[45632]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqmuvdndnxikzjrvnrfnhowtodzdseek ; /usr/bin/python3
Dec 02 07:57:14 np0005541914.localdomain sudo[45632]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:14 np0005541914.localdomain python3[45634]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:57:14 np0005541914.localdomain sudo[45632]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:14 np0005541914.localdomain sudo[45662]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rwfhacyhlsojmukwnpdpfbuprjjtjcfu ; /usr/bin/python3
Dec 02 07:57:14 np0005541914.localdomain sudo[45662]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:14 np0005541914.localdomain python3[45664]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 07:57:14 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:57:15 np0005541914.localdomain systemd-sysv-generator[45690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:57:15 np0005541914.localdomain systemd-rc-local-generator[45686]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:57:15 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:57:15 np0005541914.localdomain systemd[1]: Starting chronyd online sources service...
Dec 02 07:57:15 np0005541914.localdomain chronyc[45704]: 200 OK
Dec 02 07:57:15 np0005541914.localdomain systemd[1]: chrony-online.service: Deactivated successfully.
Dec 02 07:57:15 np0005541914.localdomain systemd[1]: Finished chronyd online sources service.
Dec 02 07:57:15 np0005541914.localdomain sudo[45662]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:15 np0005541914.localdomain sudo[45718]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqpxzvcpnbrywazmqvguwuugcmljwgre ; /usr/bin/python3
Dec 02 07:57:15 np0005541914.localdomain sudo[45718]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:15 np0005541914.localdomain python3[45720]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:57:15 np0005541914.localdomain chronyd[26062]: System clock was stepped by -0.000097 seconds
Dec 02 07:57:15 np0005541914.localdomain sudo[45718]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 07:57:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Cumulative writes: 3251 writes, 16K keys, 3251 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                                          Cumulative WAL: 3251 writes, 141 syncs, 23.06 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 3251 writes, 16K keys, 3251 commit groups, 1.0 writes per commit group, ingest: 14.66 MB, 0.02 MB/s
                                                          Interval WAL: 3251 writes, 141 syncs, 23.06 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x562050321610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x562050321610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x562050321610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 02 07:57:15 np0005541914.localdomain sudo[45735]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-prsytwxzquofaiuwvtprxilbseisgdyp ; /usr/bin/python3
Dec 02 07:57:15 np0005541914.localdomain sudo[45735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:16 np0005541914.localdomain python3[45737]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:57:16 np0005541914.localdomain sudo[45735]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:16 np0005541914.localdomain sudo[45752]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-etxnzcwbemazjmwxfkvjgdnmualqiplj ; /usr/bin/python3
Dec 02 07:57:16 np0005541914.localdomain sudo[45752]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:16 np0005541914.localdomain python3[45754]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:57:16 np0005541914.localdomain chronyd[26062]: System clock was stepped by 0.000000 seconds
Dec 02 07:57:16 np0005541914.localdomain sudo[45752]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:16 np0005541914.localdomain sudo[45769]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqgjksldbxndllqctlgxpepakihgbkln ; /usr/bin/python3
Dec 02 07:57:16 np0005541914.localdomain sudo[45769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:16 np0005541914.localdomain python3[45771]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:57:16 np0005541914.localdomain sudo[45769]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:17 np0005541914.localdomain sudo[45786]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ohwfgotzrbegtojktziyvqmwzidqukws ; /usr/bin/python3
Dec 02 07:57:17 np0005541914.localdomain sudo[45786]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:17 np0005541914.localdomain python3[45788]: ansible-timezone Invoked with name=UTC hwclock=None
Dec 02 07:57:17 np0005541914.localdomain systemd[1]: Starting Time & Date Service...
Dec 02 07:57:17 np0005541914.localdomain systemd[1]: Started Time & Date Service.
Dec 02 07:57:17 np0005541914.localdomain sudo[45786]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:18 np0005541914.localdomain sudo[45806]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fsgmlpgwvucgslpxjlybjvetfwqelsir ; /usr/bin/python3
Dec 02 07:57:18 np0005541914.localdomain sudo[45806]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:18 np0005541914.localdomain python3[45808]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:57:18 np0005541914.localdomain sudo[45806]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:18 np0005541914.localdomain sudo[45823]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xlhlgjvuxljfolphiasksjvmgbrymdjv ; PATH=/bin:/usr/bin:/sbin:/usr/sbin /usr/bin/python3
Dec 02 07:57:18 np0005541914.localdomain sudo[45823]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:19 np0005541914.localdomain python3[45825]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:57:19 np0005541914.localdomain sudo[45823]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:19 np0005541914.localdomain sudo[45840]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhcpznwepsfeaaqhmlojsobikwmmpfny ; /usr/bin/python3
Dec 02 07:57:19 np0005541914.localdomain sudo[45840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:19 np0005541914.localdomain python3[45842]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Dec 02 07:57:19 np0005541914.localdomain sudo[45840]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:19 np0005541914.localdomain sudo[45856]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vlzcvmxkmrqkuilxgswnvwqqtputguge ; /usr/bin/python3
Dec 02 07:57:19 np0005541914.localdomain sudo[45856]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:19 np0005541914.localdomain python3[45858]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:57:19 np0005541914.localdomain sudo[45856]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:20 np0005541914.localdomain sudo[45872]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-shencgspztanvtlennuzezywelapiuoj ; /usr/bin/python3
Dec 02 07:57:20 np0005541914.localdomain sudo[45872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:20 np0005541914.localdomain python3[45874]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:57:20 np0005541914.localdomain sudo[45872]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:20 np0005541914.localdomain sudo[45888]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tkdgegpoijyrnxlckfwziebkyfiofogy ; /usr/bin/python3
Dec 02 07:57:20 np0005541914.localdomain sudo[45888]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:21 np0005541914.localdomain python3[45890]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:57:21 np0005541914.localdomain sudo[45888]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:21 np0005541914.localdomain sudo[45936]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ukdfintslolpiqtnxddwdosrtvyaqrck ; /usr/bin/python3
Dec 02 07:57:21 np0005541914.localdomain sudo[45936]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:21 np0005541914.localdomain python3[45938]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:57:21 np0005541914.localdomain sudo[45936]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:21 np0005541914.localdomain sudo[45979]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kzcaudnsmznjotmdkyteqvkqzubbivxo ; /usr/bin/python3
Dec 02 07:57:21 np0005541914.localdomain sudo[45979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:21 np0005541914.localdomain python3[45981]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662241.2627783-77976-200947271492114/source _original_basename=tmp40hvdeir follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:57:21 np0005541914.localdomain sudo[45979]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:22 np0005541914.localdomain sudo[46041]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iyoaemwthcifmhxasxjyeqktvthcmxyr ; /usr/bin/python3
Dec 02 07:57:22 np0005541914.localdomain sudo[46041]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:22 np0005541914.localdomain python3[46043]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:57:22 np0005541914.localdomain sudo[46041]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:22 np0005541914.localdomain sudo[46084]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rcjadlzghdbsrbjwagltxoafjtlrpeyl ; /usr/bin/python3
Dec 02 07:57:22 np0005541914.localdomain sudo[46084]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:22 np0005541914.localdomain python3[46086]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662242.0980065-78082-234266542714096/source _original_basename=tmps5z0znie follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:57:22 np0005541914.localdomain sudo[46084]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:22 np0005541914.localdomain sudo[46114]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhbizkmvugmfdkjpgymrzyvomdsvtuju ; /usr/bin/python3
Dec 02 07:57:22 np0005541914.localdomain sudo[46114]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:23 np0005541914.localdomain python3[46116]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 02 07:57:23 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:57:23 np0005541914.localdomain systemd-rc-local-generator[46142]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:57:23 np0005541914.localdomain systemd-sysv-generator[46146]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:57:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:57:23 np0005541914.localdomain sudo[46114]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:23 np0005541914.localdomain sudo[46168]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lyaokzsqvsgaqbbxpisbsnoolntxzyps ; /usr/bin/python3
Dec 02 07:57:23 np0005541914.localdomain sudo[46168]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:23 np0005541914.localdomain python3[46170]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:57:23 np0005541914.localdomain sudo[46168]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:24 np0005541914.localdomain sudo[46184]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wmuzikrajjpjxtndfhklvxwsqgsacboe ; /usr/bin/python3
Dec 02 07:57:24 np0005541914.localdomain sudo[46184]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:24 np0005541914.localdomain python3[46186]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:57:24 np0005541914.localdomain systemd[36014]: Created slice User Background Tasks Slice.
Dec 02 07:57:24 np0005541914.localdomain systemd[36014]: Starting Cleanup of User's Temporary Files and Directories...
Dec 02 07:57:24 np0005541914.localdomain sudo[46184]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:24 np0005541914.localdomain systemd[36014]: Finished Cleanup of User's Temporary Files and Directories.
Dec 02 07:57:24 np0005541914.localdomain sudo[46202]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hwdxjcdbpivxnehqtjkjzwxgwvnhjiql ; /usr/bin/python3
Dec 02 07:57:24 np0005541914.localdomain sudo[46202]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:24 np0005541914.localdomain python3[46204]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:57:24 np0005541914.localdomain systemd[1]: run-netns-ns_temp.mount: Deactivated successfully.
Dec 02 07:57:24 np0005541914.localdomain sudo[46202]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:24 np0005541914.localdomain sudo[46219]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zvrxlflqdbknuikcbudtrifibfgrmqjv ; /usr/bin/python3
Dec 02 07:57:24 np0005541914.localdomain sudo[46219]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:24 np0005541914.localdomain python3[46221]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:57:24 np0005541914.localdomain sudo[46219]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:25 np0005541914.localdomain sudo[46235]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hmejrxnvehvebwszsrhmqckfdwkuvbfz ; /usr/bin/python3
Dec 02 07:57:25 np0005541914.localdomain sudo[46235]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:25 np0005541914.localdomain python3[46237]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:57:25 np0005541914.localdomain sudo[46235]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:25 np0005541914.localdomain sudo[46283]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-brdzfaycekyedmsbhzetnnqjoazmwunp ; /usr/bin/python3
Dec 02 07:57:25 np0005541914.localdomain sudo[46283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:27 np0005541914.localdomain python3[46285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:57:27 np0005541914.localdomain sudo[46283]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:27 np0005541914.localdomain sudo[46326]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mappoypnlxihdxunxcutcrnrhkzyqtvu ; /usr/bin/python3
Dec 02 07:57:27 np0005541914.localdomain sudo[46326]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:27 np0005541914.localdomain python3[46328]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662245.5488522-78271-209774938345307/source _original_basename=tmpfe2lbpj6 follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:57:27 np0005541914.localdomain sudo[46326]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:37 np0005541914.localdomain sshd[46343]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:57:39 np0005541914.localdomain sshd[46343]: Invalid user administrator from 182.253.156.173 port 57376
Dec 02 07:57:39 np0005541914.localdomain sshd[46343]: Received disconnect from 182.253.156.173 port 57376:11: Bye Bye [preauth]
Dec 02 07:57:39 np0005541914.localdomain sshd[46343]: Disconnected from invalid user administrator 182.253.156.173 port 57376 [preauth]
Dec 02 07:57:47 np0005541914.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 02 07:57:49 np0005541914.localdomain sudo[46347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:57:49 np0005541914.localdomain sudo[46347]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:57:49 np0005541914.localdomain sudo[46347]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:49 np0005541914.localdomain sudo[46362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 07:57:49 np0005541914.localdomain sudo[46362]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:57:49 np0005541914.localdomain sudo[46390]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqoeufebyjgsliaaoszcywfvujufhjzt ; /usr/bin/python3
Dec 02 07:57:49 np0005541914.localdomain sudo[46390]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:49 np0005541914.localdomain python3[46392]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 07:57:49 np0005541914.localdomain sudo[46390]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:50 np0005541914.localdomain sudo[46423]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oxmgzpwmyldvmuzggqgompzjpudrxzjm ; /usr/bin/python3
Dec 02 07:57:50 np0005541914.localdomain sudo[46423]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:50 np0005541914.localdomain sudo[46362]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:50 np0005541914.localdomain python3[46425]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None
Dec 02 07:57:50 np0005541914.localdomain sudo[46423]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:50 np0005541914.localdomain sudo[46453]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvijxvhjuapctspddxlfjzpzxroivgns ; /usr/bin/python3
Dec 02 07:57:50 np0005541914.localdomain sudo[46453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:50 np0005541914.localdomain python3[46455]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 07:57:50 np0005541914.localdomain sudo[46453]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:50 np0005541914.localdomain sudo[46469]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ikpkolxiloemuqdnncycjtlbhrkjdqeg ; /usr/bin/python3
Dec 02 07:57:50 np0005541914.localdomain sudo[46469]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:50 np0005541914.localdomain python3[46471]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:57:50 np0005541914.localdomain sudo[46469]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:51 np0005541914.localdomain sudo[46485]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qsjbfqchfvisjhgczdzzdtzxwhjjoxal ; /usr/bin/python3
Dec 02 07:57:51 np0005541914.localdomain sudo[46485]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:51 np0005541914.localdomain python3[46487]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:57:51 np0005541914.localdomain sudo[46485]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:51 np0005541914.localdomain sudo[46501]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hitrxablomnqkxsthkqhdoskbyupfmeh ; /usr/bin/python3
Dec 02 07:57:51 np0005541914.localdomain sudo[46501]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:51 np0005541914.localdomain python3[46503]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Dec 02 07:57:51 np0005541914.localdomain sudo[46504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:57:51 np0005541914.localdomain sudo[46504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:57:51 np0005541914.localdomain sudo[46504]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:52 np0005541914.localdomain kernel: SELinux:  Converting 2706 SID table entries...
Dec 02 07:57:52 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 07:57:52 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 07:57:52 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 07:57:52 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 07:57:52 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 07:57:52 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 07:57:52 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 07:57:52 np0005541914.localdomain sudo[46501]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:52 np0005541914.localdomain sudo[46538]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wkqzxsgemvsxxznffogbuxjlzmifvfys ; /usr/bin/python3
Dec 02 07:57:52 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 02 07:57:52 np0005541914.localdomain sudo[46538]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:52 np0005541914.localdomain python3[46540]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 07:57:52 np0005541914.localdomain sudo[46538]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:52 np0005541914.localdomain sudo[46554]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qantwyvvduwxcmfigmcykdxkzqvufdbq ; /usr/bin/python3
Dec 02 07:57:52 np0005541914.localdomain sudo[46554]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:53 np0005541914.localdomain sudo[46554]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:53 np0005541914.localdomain sudo[46602]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-enuzxuqlxkkizxdxnoeydmljfqckwvsa ; /usr/bin/python3
Dec 02 07:57:53 np0005541914.localdomain sudo[46602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:53 np0005541914.localdomain sudo[46602]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:53 np0005541914.localdomain sudo[46645]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mqrpopzyaoqfoqlhfroocmqjplhnqfod ; /usr/bin/python3
Dec 02 07:57:53 np0005541914.localdomain sudo[46645]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:53 np0005541914.localdomain sudo[46645]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:54 np0005541914.localdomain sudo[46675]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mggpxyukxobdtjwdjuwolxeaxychhywm ; /usr/bin/python3
Dec 02 07:57:54 np0005541914.localdomain sudo[46675]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:54 np0005541914.localdomain python3[46677]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, 'nova_virtnodedevd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtproxyd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtqemud': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, 'nova_virtsecretd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtstoraged': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, 'rsyslog': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}}, 'step_4': {'ceilometer_agent_compute': {'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'ceilometer_agent_ipmi': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'configure_cms_options': {'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, 'logrotate_crond': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 'nova_libvirt_init_secret': {'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, 'nova_migration_target': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 
'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 'ovn_controller': {'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 'ovn_metadata_agent': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 'setup_ovs_manager': {'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}}, 'step_5': {'nova_compute': {'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 'nova_wait_for_compute_service': {'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}}}
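The generated startup config dumped above covers deployment steps 3 through 5; each container entry carries the fields the container is launched with (image, volumes, privileges) plus orchestration hints such as depends_on, healthcheck and start_order. Within a step, start_order drives the launch sequence, which is why the libvirt family runs virtlogd_wrapper (0) before virtsecretd (1), virtnodedevd (2), virtstoraged (3), virtqemud (4) and virtproxyd (5). A minimal sketch of that ordering rule, using only the structure visible above (an illustration, not TripleO's own launcher):

# Minimal sketch: order the containers of one deployment step by 'start_order'
# (defaulting to 0 when absent), matching the step_3 libvirt entries above.
step_3 = {
    'nova_virtlogd_wrapper': {'start_order': 0},
    'nova_virtsecretd':      {'start_order': 1},
    'nova_virtnodedevd':     {'start_order': 2},
    'nova_virtstoraged':     {'start_order': 3},
    'nova_virtqemud':        {'start_order': 4},
    'nova_virtproxyd':       {'start_order': 5},
}

def launch_sequence(step: dict) -> list:
    """Return container names sorted by start_order, then by name."""
    return [name for name, cfg in
            sorted(step.items(), key=lambda kv: (kv[1].get('start_order', 0), kv[0]))]

print(launch_sequence(step_3))
# ['nova_virtlogd_wrapper', 'nova_virtsecretd', 'nova_virtnodedevd',
#  'nova_virtstoraged', 'nova_virtqemud', 'nova_virtproxyd']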
Dec 02 07:57:54 np0005541914.localdomain sudo[46675]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:54 np0005541914.localdomain rsyslogd[759]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Dec 02 07:57:55 np0005541914.localdomain sudo[46691]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ysdictphcjoejplpquzbgdwzxwnisnfj ; /usr/bin/python3
Dec 02 07:57:55 np0005541914.localdomain sudo[46691]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:55 np0005541914.localdomain python3[46693]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 07:57:55 np0005541914.localdomain sudo[46691]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:55 np0005541914.localdomain sudo[46707]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iulleabnfkijxygnjdiseccadidyrigh ; /usr/bin/python3
Dec 02 07:57:55 np0005541914.localdomain sudo[46707]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:55 np0005541914.localdomain python3[46709]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 07:57:55 np0005541914.localdomain sudo[46707]: pam_unix(sudo:session): session closed for user root
Dec 02 07:57:55 np0005541914.localdomain sudo[46723]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-quphlfvtzqoeqfqvulbhfeqjccrlerot ; /usr/bin/python3
Dec 02 07:57:55 np0005541914.localdomain sudo[46723]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:57:56 np0005541914.localdomain python3[46725]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh 
/container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file 
/etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}}
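Each entry in the tripleo_container_configs payload above becomes one /var/lib/kolla/config_files/*.json that the matching container reads at start: with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS the config_files sources are copied onto their dest paths, the permissions list is applied, and the command is then exec'd. A rough, simplified sketch of that flow (this is not kolla's actual set_configs code; merge/optional handling and 'perm' modes are omitted):

# Simplified illustration of consuming a kolla-style config.json at container start.
import glob
import grp
import json
import os
import pwd
import shutil

def apply_config(path='/var/lib/kolla/config_files/config.json'):
    with open(path) as f:
        cfg = json.load(f)
    # Copy every config_files source (a glob) onto its dest.
    for entry in cfg.get('config_files', []):
        for src in glob.glob(entry['source']):
            dest = os.path.join(entry['dest'], os.path.basename(src.rstrip('/')))
            if os.path.isdir(src):
                shutil.copytree(src, dest, dirs_exist_ok=True)
            else:
                shutil.copy2(src, dest)
    # Apply the ownership rules from the permissions list.
    for perm in cfg.get('permissions', []):
        user, group = perm['owner'].split(':')
        uid, gid = pwd.getpwnam(user).pw_uid, grp.getgrnam(group).gr_gid
        paths = [perm['path']]
        if perm.get('recurse'):
            paths += [os.path.join(root, name)
                      for root, dirs, files in os.walk(perm['path'])
                      for name in dirs + files]
        for p in paths:
            os.chown(p, uid, gid)
    return cfg['command']   # what the container finally execs, e.g. /usr/sbin/virtqemud ...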
Dec 02 07:57:56 np0005541914.localdomain sudo[46723]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:00 np0005541914.localdomain sudo[46771]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iqwbysnromcghukqevzdezwikxiwuvll ; /usr/bin/python3
Dec 02 07:58:00 np0005541914.localdomain sudo[46771]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:00 np0005541914.localdomain python3[46773]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:58:00 np0005541914.localdomain sudo[46771]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:01 np0005541914.localdomain sudo[46814]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-heikdxvyqtjatzsepghwriurktszowma ; /usr/bin/python3
Dec 02 07:58:01 np0005541914.localdomain sudo[46814]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:01 np0005541914.localdomain python3[46816]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662280.5079954-79790-158554665366788/source _original_basename=tmpgtlg0taw follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 07:58:01 np0005541914.localdomain sudo[46814]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:01 np0005541914.localdomain sudo[46844]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zefljguustvpkggszmmyppsxyoymuyvp ; /usr/bin/python3
Dec 02 07:58:01 np0005541914.localdomain sudo[46844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:01 np0005541914.localdomain python3[46846]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:58:01 np0005541914.localdomain sudo[46844]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:02 np0005541914.localdomain sudo[46894]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-olrjfchubhybltmjgpfpgneuakxdcpcv ; /usr/bin/python3
Dec 02 07:58:02 np0005541914.localdomain sudo[46894]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:02 np0005541914.localdomain sudo[46894]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:02 np0005541914.localdomain sudo[46937]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cemfkkfsxruginebibtgbsckkhuzgonx ; /usr/bin/python3
Dec 02 07:58:02 np0005541914.localdomain sudo[46937]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:02 np0005541914.localdomain sudo[46937]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:03 np0005541914.localdomain sudo[46967]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oshzdakrtkhqnljwrhvktdhqzaarhbxm ; /usr/bin/python3
Dec 02 07:58:03 np0005541914.localdomain sudo[46967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:03 np0005541914.localdomain python3[46969]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 07:58:03 np0005541914.localdomain sudo[46967]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:03 np0005541914.localdomain sudo[47015]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fngaewvuxdrxxrjrqutflzbqjidiutnu ; /usr/bin/python3
Dec 02 07:58:03 np0005541914.localdomain sudo[47015]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:04 np0005541914.localdomain sudo[47015]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:04 np0005541914.localdomain sudo[47058]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xpaacvkilaloavlwqhqdawuyzutrcvwo ; /usr/bin/python3
Dec 02 07:58:04 np0005541914.localdomain sudo[47058]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:04 np0005541914.localdomain sudo[47058]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:04 np0005541914.localdomain sudo[47088]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mltauiayrrplksxbhynhzpvaujqrheeo ; /usr/bin/python3
Dec 02 07:58:04 np0005541914.localdomain sudo[47088]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:05 np0005541914.localdomain python3[47090]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 02 07:58:05 np0005541914.localdomain sudo[47088]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:07 np0005541914.localdomain sudo[47104]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scuxzayzkjksrgtliimyopnmzzybzdta ; /usr/bin/python3
Dec 02 07:58:07 np0005541914.localdomain sudo[47104]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:07 np0005541914.localdomain python3[47106]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:58:07 np0005541914.localdomain sudo[47104]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:08 np0005541914.localdomain sudo[47121]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pcfhbbrvuexxgozbqbpqhgtpgrmctifx ; /usr/bin/python3
Dec 02 07:58:08 np0005541914.localdomain sudo[47121]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:08 np0005541914.localdomain python3[47123]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 07:58:12 np0005541914.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 02 07:58:12 np0005541914.localdomain dbus-broker-launch[14516]: Noticed file-system modification, trigger reload.
Dec 02 07:58:12 np0005541914.localdomain dbus-broker-launch[14516]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 02 07:58:12 np0005541914.localdomain dbus-broker-launch[14516]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 02 07:58:12 np0005541914.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 02 07:58:12 np0005541914.localdomain systemd[1]: Reexecuting.
Dec 02 07:58:12 np0005541914.localdomain systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 02 07:58:12 np0005541914.localdomain systemd[1]: Detected virtualization kvm.
Dec 02 07:58:12 np0005541914.localdomain systemd[1]: Detected architecture x86-64.
Dec 02 07:58:12 np0005541914.localdomain systemd-sysv-generator[47182]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:58:12 np0005541914.localdomain sshd[47162]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:58:12 np0005541914.localdomain systemd-rc-local-generator[47178]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:58:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:58:14 np0005541914.localdomain sshd[47162]: Invalid user ubuntu from 103.52.115.25 port 57302
Dec 02 07:58:14 np0005541914.localdomain sshd[47162]: Received disconnect from 103.52.115.25 port 57302:11: Bye Bye [preauth]
Dec 02 07:58:14 np0005541914.localdomain sshd[47162]: Disconnected from invalid user ubuntu 103.52.115.25 port 57302 [preauth]
Dec 02 07:58:21 np0005541914.localdomain kernel: SELinux:  Converting 2706 SID table entries...
Dec 02 07:58:21 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 07:58:21 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 07:58:21 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 07:58:21 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 07:58:21 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 07:58:21 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 07:58:21 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 07:58:21 np0005541914.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 02 07:58:21 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 02 07:58:21 np0005541914.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:58:22 np0005541914.localdomain systemd-sysv-generator[47286]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:58:22 np0005541914.localdomain systemd-rc-local-generator[47281]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 02 07:58:22 np0005541914.localdomain systemd-journald[619]: Journal stopped
Dec 02 07:58:22 np0005541914.localdomain systemd-journald[619]: Received SIGTERM from PID 1 (systemd).
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Stopping Journal Service...
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Stopped Journal Service.
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: systemd-journald.service: Consumed 1.854s CPU time.
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Starting Journal Service...
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: systemd-udevd.service: Consumed 3.127s CPU time.
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 02 07:58:22 np0005541914.localdomain systemd-journald[47679]: Journal started
Dec 02 07:58:22 np0005541914.localdomain systemd-journald[47679]: Runtime Journal (/run/log/journal/510530184876bdc0ebb29e7199f63471) is 12.1M, max 314.7M, 302.5M free.
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Started Journal Service.
Dec 02 07:58:22 np0005541914.localdomain systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Dec 02 07:58:22 np0005541914.localdomain systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 02 07:58:22 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 07:58:22 np0005541914.localdomain systemd-udevd[47688]: Using default interface naming scheme 'rhel-9.0'.
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 02 07:58:22 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 07:58:22 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 07:58:23 np0005541914.localdomain systemd-sysv-generator[48309]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 07:58:23 np0005541914.localdomain systemd-rc-local-generator[48305]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 07:58:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 07:58:23 np0005541914.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 02 07:58:23 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 02 07:58:23 np0005541914.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 02 07:58:23 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Consumed 1.109s CPU time.
Dec 02 07:58:23 np0005541914.localdomain systemd[1]: run-rfabf144092414b60bcb1134c36603e1a.service: Deactivated successfully.
Dec 02 07:58:23 np0005541914.localdomain systemd[1]: run-re656b0bcf7e24cb19d18946d33784dcb.service: Deactivated successfully.
Dec 02 07:58:24 np0005541914.localdomain sudo[47121]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:24 np0005541914.localdomain sudo[48619]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twlxbrjaotrzydfezrbengyhjhuyuhqz ; /usr/bin/python3
Dec 02 07:58:25 np0005541914.localdomain sudo[48619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:25 np0005541914.localdomain python3[48621]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False
Dec 02 07:58:25 np0005541914.localdomain sudo[48619]: pam_unix(sudo:session): session closed for user root
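The sysctl task above persists vm.unprivileged_userfaultfd=1 in /etc/sysctl.d/99-tripleo-postcopy.conf and reloads it, so unprivileged QEMU processes can use userfaultfd for post-copy live migration. Done by hand, the same change is roughly (illustration only):

# Hand-rolled equivalent of the ansible-sysctl task logged above.
import pathlib
import subprocess

conf = pathlib.Path('/etc/sysctl.d/99-tripleo-postcopy.conf')
conf.write_text('vm.unprivileged_userfaultfd = 1\n')       # persist across reboots
subprocess.run(['sysctl', '-p', str(conf)], check=True)    # apply just this file now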
Dec 02 07:58:25 np0005541914.localdomain sudo[48638]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtmixjspxppustxxzodkezcdvmslnnyb ; /usr/bin/python3
Dec 02 07:58:25 np0005541914.localdomain sudo[48638]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:25 np0005541914.localdomain python3[48640]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 07:58:25 np0005541914.localdomain sudo[48638]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:26 np0005541914.localdomain sudo[48656]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ditktgifjnlsnbzfxsuenxlvimiqszph ; /usr/bin/python3
Dec 02 07:58:26 np0005541914.localdomain sudo[48656]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:26 np0005541914.localdomain sshd[48659]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:58:26 np0005541914.localdomain python3[48658]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 02 07:58:26 np0005541914.localdomain python3[48658]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json
Dec 02 07:58:26 np0005541914.localdomain python3[48658]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false
Dec 02 07:58:26 np0005541914.localdomain sshd[48659]: Invalid user funded from 45.148.10.240 port 34372
Dec 02 07:58:26 np0005541914.localdomain sshd[48659]: Connection closed by invalid user funded 45.148.10.240 port 34372 [preauth]
Dec 02 07:58:33 np0005541914.localdomain podman[48672]: 2025-12-02 07:58:26.558062488 +0000 UTC m=+0.022656192 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 02 07:58:33 np0005541914.localdomain python3[48658]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect bac901955dcf7a32a493c6ef724c092009bbc18467858aa8c55e916b8c2b2b8f --format json
Dec 02 07:58:33 np0005541914.localdomain sudo[48656]: pam_unix(sudo:session): session closed for user root
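Each containers.podman.podman_image task in this run logs the same three-command sequence in its PODMAN-IMAGE-DEBUG lines: list the tag, pull it quietly (which prints the image ID), then inspect that ID. Reproduced directly with subprocess (an illustration; the real module adds error handling and idempotence checks):

# Mirror the logged podman sequence: image ls, pull -q, inspect.
import json
import subprocess

def refresh_image(ref):
    subprocess.run(['podman', 'image', 'ls', ref, '--format', 'json'],
                   capture_output=True, check=True)
    image_id = subprocess.run(['podman', 'pull', ref, '-q', '--tls-verify=false'],
                              capture_output=True, text=True, check=True).stdout.strip()
    inspect = subprocess.run(['podman', 'inspect', image_id, '--format', 'json'],
                             capture_output=True, text=True, check=True)
    return json.loads(inspect.stdout)[0]

refresh_image('registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1')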
Dec 02 07:58:33 np0005541914.localdomain sudo[48770]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cfumpymimlbpvdrhcdwnedrjadqfrgty ; /usr/bin/python3
Dec 02 07:58:33 np0005541914.localdomain sudo[48770]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:34 np0005541914.localdomain python3[48772]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 02 07:58:34 np0005541914.localdomain python3[48772]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json
Dec 02 07:58:34 np0005541914.localdomain python3[48772]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false
Dec 02 07:58:41 np0005541914.localdomain podman[48786]: 2025-12-02 07:58:34.105731094 +0000 UTC m=+0.029671685 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 02 07:58:41 np0005541914.localdomain python3[48772]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 44feaf8d87c1d40487578230316b622680576d805efdb45dfeea6aad464b41f1 --format json
Dec 02 07:58:41 np0005541914.localdomain sudo[48770]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:41 np0005541914.localdomain sudo[48887]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zdmxincinkglcjqfrhtbapfmeznzxldm ; /usr/bin/python3
Dec 02 07:58:41 np0005541914.localdomain sudo[48887]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:58:41 np0005541914.localdomain python3[48889]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 02 07:58:41 np0005541914.localdomain python3[48889]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json
Dec 02 07:58:41 np0005541914.localdomain python3[48889]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false
Dec 02 07:58:52 np0005541914.localdomain sudo[49065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:58:52 np0005541914.localdomain sudo[49065]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:58:52 np0005541914.localdomain sudo[49065]: pam_unix(sudo:session): session closed for user root
Dec 02 07:58:52 np0005541914.localdomain sudo[49080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 07:58:52 np0005541914.localdomain sudo[49080]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:59:00 np0005541914.localdomain podman[48903]: 2025-12-02 07:58:41.876652619 +0000 UTC m=+0.040897950 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 07:59:00 np0005541914.localdomain python3[48889]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 3a088c12511c977065fdc5f1594cba7b1a79f163578a6ffd0ac4a475b8e67938 --format json
Dec 02 07:59:00 np0005541914.localdomain sudo[48887]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:00 np0005541914.localdomain sudo[49565]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jiaslgfjsudtsfxcobhrfiydwmnjesdv ; /usr/bin/python3
Dec 02 07:59:00 np0005541914.localdomain sudo[49565]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:00 np0005541914.localdomain systemd[1]: tmp-crun.DHJ7ep.mount: Deactivated successfully.
Dec 02 07:59:00 np0005541914.localdomain podman[49572]: 2025-12-02 07:59:00.771574367 +0000 UTC m=+0.087931690 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.)
Dec 02 07:59:00 np0005541914.localdomain python3[49571]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 02 07:59:00 np0005541914.localdomain python3[49571]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json
Dec 02 07:59:00 np0005541914.localdomain podman[49572]: 2025-12-02 07:59:00.864827904 +0000 UTC m=+0.181185217 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, ceph=True, release=1763362218, vendor=Red Hat, Inc., GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, name=rhceph, io.openshift.expose-services=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vcs-type=git, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, distribution-scope=public)
Dec 02 07:59:00 np0005541914.localdomain python3[49571]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false
Dec 02 07:59:01 np0005541914.localdomain sudo[49080]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:01 np0005541914.localdomain sudo[49664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 07:59:01 np0005541914.localdomain sudo[49664]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:59:01 np0005541914.localdomain sudo[49664]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:01 np0005541914.localdomain sshd[49693]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:59:01 np0005541914.localdomain sudo[49679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 07:59:01 np0005541914.localdomain sudo[49679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:59:02 np0005541914.localdomain sudo[49679]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:03 np0005541914.localdomain sshd[49693]: Received disconnect from 182.253.156.173 port 54442:11: Bye Bye [preauth]
Dec 02 07:59:03 np0005541914.localdomain sshd[49693]: Disconnected from authenticating user root 182.253.156.173 port 54442 [preauth]
Dec 02 07:59:03 np0005541914.localdomain sudo[49754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 07:59:03 np0005541914.localdomain sudo[49754]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 07:59:03 np0005541914.localdomain sudo[49754]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:15 np0005541914.localdomain podman[49604]: 2025-12-02 07:59:00.96702657 +0000 UTC m=+0.089823150 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 02 07:59:15 np0005541914.localdomain python3[49571]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 514d439186251360cf734cbc6d4a44c834664891872edf3798a653dfaacf10c0 --format json
Dec 02 07:59:15 np0005541914.localdomain sudo[49565]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:15 np0005541914.localdomain sudo[49807]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fwdvfjiqqrpfnqxmzqyhzwhsjanhrccy ; /usr/bin/python3
Dec 02 07:59:15 np0005541914.localdomain sudo[49807]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:15 np0005541914.localdomain python3[49809]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 02 07:59:15 np0005541914.localdomain python3[49809]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json
Dec 02 07:59:15 np0005541914.localdomain python3[49809]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false
Dec 02 07:59:22 np0005541914.localdomain systemd[1]: Starting dnf makecache...
Dec 02 07:59:22 np0005541914.localdomain dnf[50052]: Updating Subscription Management repositories.
Dec 02 07:59:22 np0005541914.localdomain podman[49822]: 2025-12-02 07:59:15.916762845 +0000 UTC m=+0.046605658 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 02 07:59:22 np0005541914.localdomain python3[49809]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect a9dd7a2ac6f35cb086249f87f74e2f8e74e7e2ad5141ce2228263be6faedce26 --format json
Dec 02 07:59:22 np0005541914.localdomain sudo[49807]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:23 np0005541914.localdomain sudo[50077]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymuylegebanoysyqzjvpukpkkwmprbou ; /usr/bin/python3
Dec 02 07:59:23 np0005541914.localdomain sudo[50077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:23 np0005541914.localdomain python3[50079]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 02 07:59:23 np0005541914.localdomain python3[50079]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json
Dec 02 07:59:23 np0005541914.localdomain python3[50079]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false
Dec 02 07:59:24 np0005541914.localdomain dnf[50052]: Failed determining last makecache time.
Dec 02 07:59:24 np0005541914.localdomain dnf[50052]: Red Hat Enterprise Linux 9 for x86_64 - AppStre  26 kB/s | 4.5 kB     00:00
Dec 02 07:59:24 np0005541914.localdomain dnf[50052]: Red Hat Enterprise Linux 9 for x86_64 - High Av  30 kB/s | 4.0 kB     00:00
Dec 02 07:59:24 np0005541914.localdomain dnf[50052]: Fast Datapath for RHEL 9 x86_64 (RPMs)           29 kB/s | 4.0 kB     00:00
Dec 02 07:59:24 np0005541914.localdomain dnf[50052]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS   27 kB/s | 4.1 kB     00:00
Dec 02 07:59:25 np0005541914.localdomain dnf[50052]: Red Hat OpenStack Platform 17.1 for RHEL 9 x86_  30 kB/s | 4.0 kB     00:00
Dec 02 07:59:25 np0005541914.localdomain dnf[50052]: Red Hat Enterprise Linux 9 for x86_64 - AppStre  33 kB/s | 4.5 kB     00:00
Dec 02 07:59:25 np0005541914.localdomain dnf[50052]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS   30 kB/s | 4.1 kB     00:00
Dec 02 07:59:25 np0005541914.localdomain dnf[50052]: Metadata cache created.
Dec 02 07:59:25 np0005541914.localdomain systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 02 07:59:25 np0005541914.localdomain systemd[1]: Finished dnf makecache.
Dec 02 07:59:25 np0005541914.localdomain systemd[1]: dnf-makecache.service: Consumed 2.938s CPU time.
Dec 02 07:59:28 np0005541914.localdomain podman[50091]: 2025-12-02 07:59:23.350436381 +0000 UTC m=+0.043809212 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 02 07:59:28 np0005541914.localdomain python3[50079]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 24976907b2c2553304119aba5731a800204d664feed24ca9eb7f2b4c7d81016b --format json
Dec 02 07:59:28 np0005541914.localdomain sudo[50077]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:28 np0005541914.localdomain sudo[50174]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqwpseydkpsvvkbhstoetaecucneywgw ; /usr/bin/python3
Dec 02 07:59:28 np0005541914.localdomain sudo[50174]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:28 np0005541914.localdomain python3[50176]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 02 07:59:28 np0005541914.localdomain python3[50176]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json
Dec 02 07:59:28 np0005541914.localdomain python3[50176]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false
Dec 02 07:59:30 np0005541914.localdomain podman[50188]: 2025-12-02 07:59:28.54920576 +0000 UTC m=+0.044221574 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 02 07:59:30 np0005541914.localdomain python3[50176]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 57163a7b21fdbb804a27897cb6e6052a5e5c7a339c45d663e80b52375a760dcf --format json
Dec 02 07:59:30 np0005541914.localdomain sudo[50174]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:31 np0005541914.localdomain sudo[50265]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tyvoumelwbwbgvmoqoanjnxtabfqlmei ; /usr/bin/python3
Dec 02 07:59:31 np0005541914.localdomain sudo[50265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:31 np0005541914.localdomain python3[50267]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 02 07:59:31 np0005541914.localdomain python3[50267]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json
Dec 02 07:59:31 np0005541914.localdomain python3[50267]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false
Dec 02 07:59:33 np0005541914.localdomain podman[50281]: 2025-12-02 07:59:31.329594125 +0000 UTC m=+0.036967996 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 02 07:59:33 np0005541914.localdomain python3[50267]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 076d82a27d63c8328729ed27ceb4291585ae18d017befe6fe353df7aa11715ae --format json
Dec 02 07:59:33 np0005541914.localdomain sudo[50265]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:33 np0005541914.localdomain sudo[50357]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mcmjfhydqfujplekvycyvdrsrigzrvto ; /usr/bin/python3
Dec 02 07:59:33 np0005541914.localdomain sudo[50357]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:33 np0005541914.localdomain python3[50359]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 02 07:59:33 np0005541914.localdomain python3[50359]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json
Dec 02 07:59:34 np0005541914.localdomain python3[50359]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false
Dec 02 07:59:36 np0005541914.localdomain sshd[50410]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 07:59:37 np0005541914.localdomain podman[50372]: 2025-12-02 07:59:34.099860845 +0000 UTC m=+0.042479159 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 02 07:59:37 np0005541914.localdomain python3[50359]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d0dbcb95546840a8d088df044347a7877ad5ea45a2ddba0578e9bb5de4ab0da5 --format json
Dec 02 07:59:37 np0005541914.localdomain sudo[50357]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:37 np0005541914.localdomain sudo[50449]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qlcxveojsmlipeeudxspvitosztrhjhb ; /usr/bin/python3
Dec 02 07:59:37 np0005541914.localdomain sudo[50449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:37 np0005541914.localdomain python3[50451]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 02 07:59:37 np0005541914.localdomain python3[50451]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Dec 02 07:59:37 np0005541914.localdomain python3[50451]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Dec 02 07:59:38 np0005541914.localdomain sshd[50410]: Received disconnect from 103.52.115.25 port 53838:11: Bye Bye [preauth]
Dec 02 07:59:38 np0005541914.localdomain sshd[50410]: Disconnected from authenticating user root 103.52.115.25 port 53838 [preauth]
Dec 02 07:59:41 np0005541914.localdomain podman[50465]: 2025-12-02 07:59:37.867665822 +0000 UTC m=+0.046539517 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 02 07:59:41 np0005541914.localdomain python3[50451]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect e6e981540e553415b2d6eda490d7683db07164af2e7a0af8245623900338a4d6 --format json
Dec 02 07:59:41 np0005541914.localdomain sudo[50449]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:41 np0005541914.localdomain sudo[50553]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bgbiwdqufxteaqwvhhmqxnjurxipshrj ; /usr/bin/python3
Dec 02 07:59:41 np0005541914.localdomain sudo[50553]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:42 np0005541914.localdomain python3[50555]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Dec 02 07:59:42 np0005541914.localdomain python3[50555]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Dec 02 07:59:42 np0005541914.localdomain python3[50555]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Dec 02 07:59:44 np0005541914.localdomain podman[50569]: 2025-12-02 07:59:42.174992541 +0000 UTC m=+0.047939511 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 02 07:59:44 np0005541914.localdomain python3[50555]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 87ee88cbf01fb42e0b22747072843bcca6130a90eda4de6e74b3ccd847bb4040 --format json
Dec 02 07:59:44 np0005541914.localdomain sudo[50553]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:44 np0005541914.localdomain sudo[50646]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jkydcjymgarkchfvmuylnulwnfmhiwke ; /usr/bin/python3
Dec 02 07:59:44 np0005541914.localdomain sudo[50646]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:45 np0005541914.localdomain python3[50648]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:59:45 np0005541914.localdomain sudo[50646]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:45 np0005541914.localdomain sudo[50696]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbdkssqiyblhonmavespnytpkntwsohe ; /usr/bin/python3
Dec 02 07:59:45 np0005541914.localdomain sudo[50696]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:45 np0005541914.localdomain sudo[50696]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:45 np0005541914.localdomain sudo[50714]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nkltjwurmkgvgkhfwwiedscyvewkknuo ; /usr/bin/python3
Dec 02 07:59:45 np0005541914.localdomain sudo[50714]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:46 np0005541914.localdomain sudo[50714]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:46 np0005541914.localdomain sudo[50818]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tpvspxwywbhnodwvglxmdmigcryfhdko ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662386.2252982-82653-151212817962803/async_wrapper.py 356590835224 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662386.2252982-82653-151212817962803/AnsiballZ_command.py _
Dec 02 07:59:46 np0005541914.localdomain sudo[50818]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 02 07:59:46 np0005541914.localdomain ansible-async_wrapper.py[50820]: Invoked with 356590835224 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662386.2252982-82653-151212817962803/AnsiballZ_command.py _
Dec 02 07:59:46 np0005541914.localdomain ansible-async_wrapper.py[50823]: Starting module and watcher
Dec 02 07:59:46 np0005541914.localdomain ansible-async_wrapper.py[50823]: Start watching 50824 (3600)
Dec 02 07:59:46 np0005541914.localdomain ansible-async_wrapper.py[50824]: Start module (50824)
Dec 02 07:59:46 np0005541914.localdomain ansible-async_wrapper.py[50820]: Return async_wrapper task started.
Dec 02 07:59:47 np0005541914.localdomain sudo[50818]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:47 np0005541914.localdomain sudo[50842]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ojoigellihvprtoxxowmvtvpxemtncgf ; /usr/bin/python3
Dec 02 07:59:47 np0005541914.localdomain sudo[50842]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:47 np0005541914.localdomain python3[50844]: ansible-ansible.legacy.async_status Invoked with jid=356590835224.50820 mode=status _async_dir=/tmp/.ansible_async
Dec 02 07:59:47 np0005541914.localdomain sudo[50842]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:    (file: /etc/puppet/hiera.yaml)
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Warning: Undefined variable '::deploy_config_name';
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:    (file & line not available)
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:    (file & line not available)
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.13 seconds
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Notice: Applied catalog in 0.06 seconds
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Application:
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:    Initial environment: production
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:    Converged environment: production
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:          Run mode: user
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Changes:
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:             Total: 3
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Events:
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:           Success: 3
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:             Total: 3
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Resources:
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:           Changed: 3
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:       Out of sync: 3
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:             Total: 10
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Time:
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:          Schedule: 0.00
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:              File: 0.00
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:              Exec: 0.02
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:            Augeas: 0.03
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:    Transaction evaluation: 0.06
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:    Catalog application: 0.06
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:    Config retrieval: 0.16
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:          Last run: 1764662391
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:        Filebucket: 0.00
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:             Total: 0.06
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]: Version:
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:            Config: 1764662391
Dec 02 07:59:51 np0005541914.localdomain puppet-user[50828]:            Puppet: 7.10.0
Dec 02 07:59:51 np0005541914.localdomain ansible-async_wrapper.py[50824]: Module complete (50824)
Dec 02 07:59:51 np0005541914.localdomain ansible-async_wrapper.py[50823]: Done in kid B.
Dec 02 07:59:57 np0005541914.localdomain sudo[50969]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-plfiopwfvivtvpbtokvwgwovcpaatmgt ; /usr/bin/python3
Dec 02 07:59:57 np0005541914.localdomain sudo[50969]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:57 np0005541914.localdomain python3[50971]: ansible-ansible.legacy.async_status Invoked with jid=356590835224.50820 mode=status _async_dir=/tmp/.ansible_async
Dec 02 07:59:57 np0005541914.localdomain sudo[50969]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:58 np0005541914.localdomain sudo[50985]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wqucmsvohzixegzexhdhrsswdzamntqq ; /usr/bin/python3
Dec 02 07:59:58 np0005541914.localdomain sudo[50985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:58 np0005541914.localdomain python3[50987]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 07:59:58 np0005541914.localdomain sudo[50985]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:58 np0005541914.localdomain sudo[51044]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xxwxdbtcshqskdcdzebwgkyxxlnlmjop ; /usr/bin/python3
Dec 02 07:59:58 np0005541914.localdomain sudo[51044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:59 np0005541914.localdomain python3[51046]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 07:59:59 np0005541914.localdomain sudo[51044]: pam_unix(sudo:session): session closed for user root
Dec 02 07:59:59 np0005541914.localdomain sudo[51092]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqgkjitchvlommfiiwcmhhcdywhtbsbh ; /usr/bin/python3
Dec 02 07:59:59 np0005541914.localdomain sudo[51092]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 07:59:59 np0005541914.localdomain python3[51094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 07:59:59 np0005541914.localdomain sudo[51092]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:00 np0005541914.localdomain sudo[51135]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ocqzscrsnevifxvmpcbrcnkwsemuwnbe ; /usr/bin/python3
Dec 02 08:00:00 np0005541914.localdomain sudo[51135]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:00 np0005541914.localdomain python3[51137]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662399.2834432-82882-10240848673277/source _original_basename=tmpj27hrm6x follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 08:00:00 np0005541914.localdomain sudo[51135]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:00 np0005541914.localdomain sudo[51165]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kdyykgeyywjcdldcsdyzmhtjapyuoltz ; /usr/bin/python3
Dec 02 08:00:00 np0005541914.localdomain sudo[51165]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:00 np0005541914.localdomain python3[51167]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:00:00 np0005541914.localdomain sudo[51165]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:00 np0005541914.localdomain sudo[51181]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-daozaudhsnvqkyflrtppdeiokfhlwxrp ; /usr/bin/python3
Dec 02 08:00:00 np0005541914.localdomain sudo[51181]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:01 np0005541914.localdomain sudo[51181]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:01 np0005541914.localdomain sudo[51268]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-frkiyhwltaxycytdqzgkeyrfdwbhjuqv ; /usr/bin/python3
Dec 02 08:00:01 np0005541914.localdomain sudo[51268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:01 np0005541914.localdomain python3[51270]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 02 08:00:01 np0005541914.localdomain sudo[51268]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:01 np0005541914.localdomain sudo[51287]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-weguitllyopifapygoaegiiasaiukbjq ; /usr/bin/python3
Dec 02 08:00:01 np0005541914.localdomain sudo[51287]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:02 np0005541914.localdomain python3[51289]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 08:00:02 np0005541914.localdomain sudo[51287]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:02 np0005541914.localdomain sudo[51303]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfvimztxthacnvlycuhdwxdccaysxtxf ; /usr/bin/python3
Dec 02 08:00:02 np0005541914.localdomain sudo[51303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:02 np0005541914.localdomain python3[51305]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005541914 step=1 update_config_hash_only=False
Dec 02 08:00:02 np0005541914.localdomain sudo[51303]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:02 np0005541914.localdomain sudo[51319]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dfsfblcotlhtzvfhrikhlthkcyjkcilz ; /usr/bin/python3
Dec 02 08:00:02 np0005541914.localdomain sudo[51319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:02 np0005541914.localdomain python3[51321]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:00:02 np0005541914.localdomain sudo[51319]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:03 np0005541914.localdomain sudo[51335]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-znfnoxctgfvuymbljwpdaehrpkxaphss ; /usr/bin/python3
Dec 02 08:00:03 np0005541914.localdomain sudo[51335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:03 np0005541914.localdomain python3[51337]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 02 08:00:03 np0005541914.localdomain sudo[51335]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:03 np0005541914.localdomain sudo[51338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:00:03 np0005541914.localdomain sudo[51338]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:00:03 np0005541914.localdomain sudo[51338]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:03 np0005541914.localdomain sudo[51353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:00:03 np0005541914.localdomain sudo[51353]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:00:03 np0005541914.localdomain sudo[51381]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-camydbnfstlezfcisvhtpspqhbyxuqja ; /usr/bin/python3
Dec 02 08:00:03 np0005541914.localdomain sudo[51381]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:03 np0005541914.localdomain python3[51383]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 02 08:00:04 np0005541914.localdomain sudo[51381]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:04 np0005541914.localdomain sudo[51353]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:04 np0005541914.localdomain sudo[51453]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gwldntlmodykweelpkntfmipcmfpgacy ; /usr/bin/python3
Dec 02 08:00:04 np0005541914.localdomain sudo[51453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:04 np0005541914.localdomain python3[51455]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False
Dec 02 08:00:05 np0005541914.localdomain podman[51625]: 2025-12-02 08:00:04.969612082 +0000 UTC m=+0.040606411 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 02 08:00:05 np0005541914.localdomain podman[51644]: 2025-12-02 08:00:04.976731745 +0000 UTC m=+0.029799580 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 02 08:00:05 np0005541914.localdomain podman[51625]: 2025-12-02 08:00:05.118540244 +0000 UTC m=+0.189534633 container create e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, container_name=container-puppet-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:00:05 np0005541914.localdomain podman[51644]: 2025-12-02 08:00:05.178170772 +0000 UTC m=+0.231238587 container create 396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, container_name=container-puppet-crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:00:05 np0005541914.localdomain podman[51632]: 2025-12-02 08:00:05.103999879 +0000 UTC m=+0.154726654 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:00:05 np0005541914.localdomain podman[51669]: 2025-12-02 08:00:05.105895656 +0000 UTC m=+0.128310447 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 02 08:00:05 np0005541914.localdomain podman[51669]: 2025-12-02 08:00:05.208359582 +0000 UTC m=+0.230774343 container create bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=container-puppet-metrics_qdr, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, architecture=x86_64)
Dec 02 08:00:05 np0005541914.localdomain podman[51663]: 2025-12-02 08:00:05.112879365 +0000 UTC m=+0.132047999 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 02 08:00:05 np0005541914.localdomain systemd[1]: Started libpod-conmon-e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243.scope.
Dec 02 08:00:05 np0005541914.localdomain systemd[1]: Started libpod-conmon-396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27.scope.
Dec 02 08:00:05 np0005541914.localdomain podman[51663]: 2025-12-02 08:00:05.249752505 +0000 UTC m=+0.268921169 container create 85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_id=tripleo_puppet_step1, container_name=container-puppet-collectd, com.redhat.component=openstack-collectd-container, release=1761123044, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team)
Dec 02 08:00:05 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:00:05 np0005541914.localdomain systemd[1]: Started libpod-conmon-bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7.scope.
Dec 02 08:00:05 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:00:05 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/400c7ba0962a9736ae4730e3c3204c67b2bad8d9266c2a49e5c729fb35c892ee/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:05 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd4674896f37ed03c180aa0ab9f93ced388cfe5185ce6c19dc1fe143ce7985a/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:05 np0005541914.localdomain systemd[1]: Started libpod-conmon-85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7.scope.
Dec 02 08:00:05 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cd4674896f37ed03c180aa0ab9f93ced388cfe5185ce6c19dc1fe143ce7985a/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:05 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:00:05 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:00:05 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cbd426914bbc0b3c94f281248297da1bdd998807cad604e4ab2f39851a1899c/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:05 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f51912cd7ca4d93a076413ed4727a62a427f09f722d7bf72e350182571c8db0/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:05 np0005541914.localdomain podman[51625]: 2025-12-02 08:00:05.295178071 +0000 UTC m=+0.366172390 container init e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, architecture=x86_64, vcs-type=git, config_id=tripleo_puppet_step1, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., container_name=container-puppet-iscsid, name=rhosp17/openstack-iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:00:05 np0005541914.localdomain podman[51632]: 2025-12-02 08:00:05.301354955 +0000 UTC m=+0.352081760 container create 0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, container_name=container-puppet-nova_libvirt, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1)
Dec 02 08:00:05 np0005541914.localdomain systemd[1]: Started libpod-conmon-0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b.scope.
Dec 02 08:00:05 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:00:05 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b31a729f52d6f9ece82ff86db83ec0c0420ae47f49a38ed5b1f2bb83a229399e/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:06 np0005541914.localdomain podman[51644]: 2025-12-02 08:00:06.016922352 +0000 UTC m=+1.069990207 container init 396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, tcib_managed=true, container_name=container-puppet-crond, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:00:06 np0005541914.localdomain podman[51644]: 2025-12-02 08:00:06.453777268 +0000 UTC m=+1.506845113 container start 396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., container_name=container-puppet-crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:00:06 np0005541914.localdomain podman[51644]: 2025-12-02 08:00:06.456270852 +0000 UTC m=+1.509338747 container attach 396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, container_name=container-puppet-crond, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1)
Dec 02 08:00:06 np0005541914.localdomain podman[51663]: 2025-12-02 08:00:06.484305908 +0000 UTC m=+1.503474582 container init 85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, io.openshift.expose-services=)
Dec 02 08:00:06 np0005541914.localdomain systemd[1]: tmp-crun.dKzOJv.mount: Deactivated successfully.
Dec 02 08:00:06 np0005541914.localdomain podman[51663]: 2025-12-02 08:00:06.504920852 +0000 UTC m=+1.524089496 container start 85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_puppet_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, maintainer=OpenStack TripleO Team)
Dec 02 08:00:06 np0005541914.localdomain podman[51663]: 2025-12-02 08:00:06.505556962 +0000 UTC m=+1.524725666 container attach 85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-collectd-container, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, container_name=container-puppet-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z)
Dec 02 08:00:06 np0005541914.localdomain podman[51669]: 2025-12-02 08:00:06.515520978 +0000 UTC m=+1.537935729 container init bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=container-puppet-metrics_qdr, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:00:06 np0005541914.localdomain podman[51625]: 2025-12-02 08:00:06.52058586 +0000 UTC m=+1.591580179 container start e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-18T23:44:13Z, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:00:06 np0005541914.localdomain podman[51625]: 2025-12-02 08:00:06.520968771 +0000 UTC m=+1.591963140 container attach e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=container-puppet-iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true)
Dec 02 08:00:06 np0005541914.localdomain podman[51669]: 2025-12-02 08:00:06.52967616 +0000 UTC m=+1.552090911 container start bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, managed_by=tripleo_ansible)
Dec 02 08:00:06 np0005541914.localdomain podman[51669]: 2025-12-02 08:00:06.530145754 +0000 UTC m=+1.552560505 container attach bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, container_name=container-puppet-metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:00:06 np0005541914.localdomain podman[51632]: 2025-12-02 08:00:06.533472114 +0000 UTC m=+1.584198909 container init 0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.expose-services=, container_name=container-puppet-nova_libvirt, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:00:06 np0005541914.localdomain podman[51632]: 2025-12-02 08:00:06.55077023 +0000 UTC m=+1.601497065 container start 0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, config_id=tripleo_puppet_step1, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, build-date=2025-11-19T00:35:22Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-nova_libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt)
Dec 02 08:00:06 np0005541914.localdomain podman[51632]: 2025-12-02 08:00:06.551141091 +0000 UTC m=+1.601867936 container attach 0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-nova-libvirt-container, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:35:22Z, maintainer=OpenStack TripleO Team)
Dec 02 08:00:06 np0005541914.localdomain sudo[51807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:00:06 np0005541914.localdomain sudo[51807]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:00:06 np0005541914.localdomain sudo[51807]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:    (file: /etc/puppet/hiera.yaml)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]: Warning: Undefined variable '::deploy_config_name';
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:    (file & line not available)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:    (file: /etc/puppet/hiera.yaml)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Warning: Undefined variable '::deploy_config_name';
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:    (file & line not available)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:    (file & line not available)
Dec 02 08:00:08 np0005541914.localdomain ovs-vsctl[52068]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:    (file & line not available)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.09 seconds
Dec 02 08:00:08 np0005541914.localdomain podman[51546]: 2025-12-02 08:00:04.881911867 +0000 UTC m=+0.030887491 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Notice: Accepting previously invalid value for target type 'Integer'
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51782]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51782]:    (file: /etc/puppet/hiera.yaml)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51782]: Warning: Undefined variable '::deploy_config_name';
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51782]:    (file & line not available)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0'
Dec 02 08:00:08 np0005541914.localdomain crontab[52099]: (root) LIST (root)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created
Dec 02 08:00:08 np0005541914.localdomain crontab[52113]: (root) REPLACE (root)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.13 seconds
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51782]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51782]:    (file & line not available)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]: Notice: Applied catalog in 0.04 seconds
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]: Application:
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:    Initial environment: production
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:    Converged environment: production
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:          Run mode: user
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]: Changes:
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:             Total: 2
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]: Events:
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:           Success: 2
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:             Total: 2
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]: Resources:
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:           Changed: 2
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:       Out of sync: 2
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:           Skipped: 7
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:             Total: 9
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]: Time:
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:              File: 0.01
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:              Cron: 0.01
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:    Transaction evaluation: 0.04
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:    Catalog application: 0.04
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:    Config retrieval: 0.12
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:          Last run: 1764662408
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:             Total: 0.04
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]: Version:
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:            Config: 1764662408
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51784]:            Puppet: 7.10.0
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root'
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root'
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755'
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}296a77cf0860ceaf3513703c18bbb7eb622db175df9af5d6bfe1bade3b73a54a'
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Notice: Applied catalog in 0.03 seconds
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Application:
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:    Initial environment: production
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:    Converged environment: production
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:          Run mode: user
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Changes:
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:             Total: 7
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Events:
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:           Success: 7
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:             Total: 7
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Resources:
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:           Skipped: 13
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:           Changed: 5
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:       Out of sync: 5
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:             Total: 20
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Time:
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:              File: 0.01
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:    Transaction evaluation: 0.02
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:    Catalog application: 0.03
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:    Config retrieval: 0.16
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:          Last run: 1764662408
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:             Total: 0.03
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]: Version:
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:            Config: 1764662408
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51828]:            Puppet: 7.10.0
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51782]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.11 seconds
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51811]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51811]:    (file: /etc/puppet/hiera.yaml)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51811]: Warning: Undefined variable '::deploy_config_name';
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51811]:    (file & line not available)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51840]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51840]:    (file: /etc/puppet/hiera.yaml)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51840]: Warning: Undefined variable '::deploy_config_name';
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51840]:    (file & line not available)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51782]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51782]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51811]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51811]:    (file & line not available)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51840]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51840]:    (file & line not available)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51782]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully
Dec 02 08:00:08 np0005541914.localdomain podman[52224]: 2025-12-02 08:00:08.631927726 +0000 UTC m=+0.072340208 container create acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=container-puppet-ceilometer, build-date=2025-11-19T00:11:59Z, com.redhat.component=openstack-ceilometer-central-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, distribution-scope=public, name=rhosp17/openstack-ceilometer-central, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, architecture=x86_64, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:00:08 np0005541914.localdomain systemd[1]: Started libpod-conmon-acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7.scope.
Dec 02 08:00:08 np0005541914.localdomain systemd[1]: tmp-crun.LT1SPD.mount: Deactivated successfully.
Dec 02 08:00:08 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:00:08 np0005541914.localdomain podman[52224]: 2025-12-02 08:00:08.599582701 +0000 UTC m=+0.039995173 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Dec 02 08:00:08 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d2cbcd6205ebc71bef7b0378e46c50958788e3d833a076a9d36ebe402a8a467/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:08 np0005541914.localdomain podman[52224]: 2025-12-02 08:00:08.711633042 +0000 UTC m=+0.152045524 container init acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-central-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-central, container_name=container-puppet-ceilometer, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1761123044, distribution-scope=public, build-date=2025-11-19T00:11:59Z, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, architecture=x86_64, vcs-type=git, config_id=tripleo_puppet_step1)
Dec 02 08:00:08 np0005541914.localdomain podman[52224]: 2025-12-02 08:00:08.758779928 +0000 UTC m=+0.199192400 container start acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1761123044, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-central-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=container-puppet-ceilometer, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-19T00:11:59Z, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:00:08 np0005541914.localdomain podman[52224]: 2025-12-02 08:00:08.759119448 +0000 UTC m=+0.199532010 container attach acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_puppet_step1, build-date=2025-11-19T00:11:59Z, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-central, release=1761123044, container_name=container-puppet-ceilometer, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, distribution-scope=public)
Dec 02 08:00:08 np0005541914.localdomain systemd[1]: libpod-396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27.scope: Deactivated successfully.
Dec 02 08:00:08 np0005541914.localdomain systemd[1]: libpod-396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27.scope: Consumed 2.199s CPU time.
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51840]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51840]: in a future release. Use nova::cinder::os_region_name instead
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51840]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51840]: in a future release. Use nova::cinder::catalog_info instead
Dec 02 08:00:08 np0005541914.localdomain systemd[1]: libpod-bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7.scope: Deactivated successfully.
Dec 02 08:00:08 np0005541914.localdomain systemd[1]: libpod-bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7.scope: Consumed 2.153s CPU time.
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51811]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.36 seconds
Dec 02 08:00:08 np0005541914.localdomain podman[51644]: 2025-12-02 08:00:08.929718845 +0000 UTC m=+3.982786670 container died 396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, container_name=container-puppet-crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public)
Dec 02 08:00:08 np0005541914.localdomain puppet-user[51840]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41)
Dec 02 08:00:08 np0005541914.localdomain podman[51669]: 2025-12-02 08:00:08.982921852 +0000 UTC m=+4.005336603 container died bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=container-puppet-metrics_qdr, release=1761123044, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]: Notice: Applied catalog in 0.46 seconds
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]: Application:
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:    Initial environment: production
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:    Converged environment: production
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:          Run mode: user
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]: Changes:
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:             Total: 4
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]: Events:
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:           Success: 4
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:             Total: 4
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]: Resources:
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:           Changed: 4
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:       Out of sync: 4
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:           Skipped: 8
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:             Total: 13
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]: Time:
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:              File: 0.00
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:              Exec: 0.05
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:    Config retrieval: 0.14
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:            Augeas: 0.39
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:    Transaction evaluation: 0.45
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:    Catalog application: 0.46
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:          Last run: 1764662409
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:             Total: 0.46
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]: Version:
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:            Config: 1764662408
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51782]:            Puppet: 7.10.0
Dec 02 08:00:09 np0005541914.localdomain podman[52322]: 2025-12-02 08:00:09.026684077 +0000 UTC m=+0.080878053 container cleanup bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, container_name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container)
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: libpod-conmon-bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7.scope: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::metrics::qdr
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51840]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5)
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51840]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5)
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51840]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5)
Dec 02 08:00:09 np0005541914.localdomain podman[52321]: 2025-12-02 08:00:09.063699121 +0000 UTC m=+0.124949917 container cleanup 396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T22:49:32Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: libpod-conmon-396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27.scope: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51840]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51840]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set.
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51840]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. Use the same parameter in nova::glance
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}8dd3769945b86c38433504b97f7851a931eb3c94b667298d10a9796a3d020595'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51840]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used.
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: libpod-e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243.scope: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: libpod-e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243.scope: Consumed 2.740s CPU time.
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885'
Dec 02 08:00:09 np0005541914.localdomain podman[51625]: 2025-12-02 08:00:09.295922885 +0000 UTC m=+4.366917194 container died e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, config_id=tripleo_puppet_step1, container_name=container-puppet-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc.)
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62'
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Notice: Applied catalog in 0.29 seconds
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Application:
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:    Initial environment: production
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:    Converged environment: production
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:          Run mode: user
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Changes:
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:             Total: 43
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Events:
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:           Success: 43
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:             Total: 43
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Resources:
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:           Skipped: 14
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:           Changed: 38
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:       Out of sync: 38
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:             Total: 82
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Time:
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:    Concat fragment: 0.00
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:       Concat file: 0.00
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:              File: 0.17
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:    Transaction evaluation: 0.27
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:    Catalog application: 0.29
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:    Config retrieval: 0.49
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:          Last run: 1764662409
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:             Total: 0.29
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]: Version:
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:            Config: 1764662408
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51811]:            Puppet: 7.10.0
Dec 02 08:00:09 np0005541914.localdomain podman[52475]: 2025-12-02 08:00:09.400697599 +0000 UTC m=+0.094132137 container cleanup e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, container_name=container-puppet-iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.buildah.version=1.41.4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64)
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: libpod-conmon-e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243.scope: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::iscsid
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 02 08:00:09 np0005541914.localdomain podman[52504]: 2025-12-02 08:00:09.433691783 +0000 UTC m=+0.077398179 container create 7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., container_name=container-puppet-ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:00:09 np0005541914.localdomain podman[52482]: 2025-12-02 08:00:09.458568195 +0000 UTC m=+0.134931824 container create a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=container-puppet-rsyslog, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_puppet_step1, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.4, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:00:09 np0005541914.localdomain podman[52504]: 2025-12-02 08:00:09.385331591 +0000 UTC m=+0.029037997 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: Started libpod-conmon-7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e.scope.
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: Started libpod-conmon-a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a.scope.
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:00:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40ebd622fb49c1d984ae69be39f1f1d5d9bbd0185c9e75888b797dd6f2afb7e/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d40ebd622fb49c1d984ae69be39f1f1d5d9bbd0185c9e75888b797dd6f2afb7e/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:00:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ac3d5ef6cd74f750bad6e1bed4e64701dec5212d5cf52ac16ce138246b77afa/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:09 np0005541914.localdomain podman[52504]: 2025-12-02 08:00:09.511873794 +0000 UTC m=+0.155580180 container init 7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.12, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, container_name=container-puppet-ovn_controller, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git)
Dec 02 08:00:09 np0005541914.localdomain podman[52482]: 2025-12-02 08:00:09.425133278 +0000 UTC m=+0.101496897 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 02 08:00:09 np0005541914.localdomain podman[52504]: 2025-12-02 08:00:09.52513894 +0000 UTC m=+0.168845326 container start 7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, tcib_managed=true, release=1761123044, url=https://www.redhat.com, container_name=container-puppet-ovn_controller, config_id=tripleo_puppet_step1, batch=17.1_20251118.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc.)
Dec 02 08:00:09 np0005541914.localdomain podman[52504]: 2025-12-02 08:00:09.525872792 +0000 UTC m=+0.169579198 container attach 7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, container_name=container-puppet-ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, url=https://www.redhat.com, release=1761123044)
Dec 02 08:00:09 np0005541914.localdomain podman[52482]: 2025-12-02 08:00:09.605852936 +0000 UTC m=+0.282216575 container init a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, build-date=2025-11-18T22:49:49Z, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, container_name=container-puppet-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 02 08:00:09 np0005541914.localdomain podman[52482]: 2025-12-02 08:00:09.616177824 +0000 UTC m=+0.292541463 container start a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_puppet_step1, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-rsyslog, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, io.openshift.expose-services=, build-date=2025-11-18T22:49:49Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Dec 02 08:00:09 np0005541914.localdomain podman[52482]: 2025-12-02 08:00:09.616433302 +0000 UTC m=+0.292796941 container attach a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, vcs-type=git, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, tcib_managed=true, container_name=container-puppet-rsyslog, managed_by=tripleo_ansible)
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4cbd426914bbc0b3c94f281248297da1bdd998807cad604e4ab2f39851a1899c-merged.mount: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd3d81b6875a7afa051ebbb8eff5f66052aad4117ff91b78c5efda542bbd94a7-userdata-shm.mount: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-400c7ba0962a9736ae4730e3c3204c67b2bad8d9266c2a49e5c729fb35c892ee-merged.mount: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-1cd4674896f37ed03c180aa0ab9f93ced388cfe5185ce6c19dc1fe143ce7985a-merged.mount: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-396ec4554c260ef3fa39f162dfc5651ee3f4236329d79627a71b9bb9a2dedf27-userdata-shm.mount: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e194edf06982fbba8af1e423158f4762cfae96575a27c854af2fae8dbb53e243-userdata-shm.mount: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: libpod-85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7.scope: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: libpod-85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7.scope: Consumed 2.865s CPU time.
Dec 02 08:00:09 np0005541914.localdomain podman[51663]: 2025-12-02 08:00:09.71127296 +0000 UTC m=+4.730441644 container died 85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=container-puppet-collectd, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, vcs-type=git)
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7-userdata-shm.mount: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-1f51912cd7ca4d93a076413ed4727a62a427f09f722d7bf72e350182571c8db0-merged.mount: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain podman[52635]: 2025-12-02 08:00:09.798287224 +0000 UTC m=+0.078526712 container cleanup 85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_puppet_step1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-collectd, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-collectd, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible)
Dec 02 08:00:09 np0005541914.localdomain systemd[1]: libpod-conmon-85b95abaafde60f63af4623ac20048fabe216f5d7494a8d686d11540d0ec48f7.scope: Deactivated successfully.
Dec 02 08:00:09 np0005541914.localdomain python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 02 08:00:09 np0005541914.localdomain puppet-user[51840]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 1.32 seconds
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}5ba64817af7f9555281205611eb52d45214b5127a0e5ce894ff9b319c0723a16'
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe'
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Warning: Empty environment setting 'TLS_PASSWORD'
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]:    (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182)
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}8c1883a65300cc327d1cb9c34702b30b2083e07e3f42b734ab7685f1cc6449ef'
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[52333]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 08:00:10 np0005541914.localdomain puppet-user[52333]:    (file: /etc/puppet/hiera.yaml)
Dec 02 08:00:10 np0005541914.localdomain puppet-user[52333]: Warning: Undefined variable '::deploy_config_name';
Dec 02 08:00:10 np0005541914.localdomain puppet-user[52333]:    (file & line not available)
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[52333]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 08:00:10 np0005541914.localdomain puppet-user[52333]:    (file & line not available)
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created
Dec 02 08:00:10 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.40 seconds
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]:    (file: /etc/puppet/hiera.yaml)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]: Warning: Undefined variable '::deploy_config_name';
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]:    (file & line not available)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]:    (file & line not available)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:    (file: /etc/puppet/hiera.yaml)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Warning: Undefined variable '::deploy_config_name';
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:    (file & line not available)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:    (file & line not available)
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.25 seconds
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.25 seconds
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain ovs-vsctl[52889]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain ovs-vsctl[52891]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain ovs-vsctl[52893]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.108
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Notice: Applied catalog in 0.50 seconds
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Application:
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:    Initial environment: production
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:    Converged environment: production
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:          Run mode: user
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Changes:
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:             Total: 31
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Events:
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:           Success: 31
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:             Total: 31
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Resources:
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:           Skipped: 22
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:           Changed: 31
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:       Out of sync: 31
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:             Total: 151
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Time:
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:           Package: 0.02
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:    Ceilometer config: 0.39
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:    Config retrieval: 0.48
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:    Transaction evaluation: 0.49
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:    Catalog application: 0.50
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:          Last run: 1764662411
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:         Resources: 0.00
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:             Total: 0.50
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]: Version:
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:            Config: 1764662410
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52333]:            Puppet: 7.10.0
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain ovs-vsctl[52896]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005541914.localdomain
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005541914.novalocal' to 'np0005541914.localdomain'
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain ovs-vsctl[52898]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2'
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b'
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain ovs-vsctl[52900]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}301188e6f63ae57a8369f32fa91f83fd08a8e66eaf1f4f68c45e1e21c89fcefc'
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}3f62d179f65be7c16842a28abf994d6a58e30b2328fb95c74da2c0a9b9529a22'
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Notice: Applied catalog in 0.12 seconds
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Application:
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:    Initial environment: production
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:    Converged environment: production
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:          Run mode: user
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Changes:
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:             Total: 3
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Events:
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:           Success: 3
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:             Total: 3
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Resources:
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:           Skipped: 11
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:           Changed: 3
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:       Out of sync: 3
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:             Total: 25
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Time:
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:       Concat file: 0.00
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:    Concat fragment: 0.00
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:              File: 0.02
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:    Transaction evaluation: 0.12
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:    Catalog application: 0.12
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:    Config retrieval: 0.30
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:          Last run: 1764662411
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:             Total: 0.12
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]: Version:
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:            Config: 1764662411
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52636]:            Puppet: 7.10.0
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain ovs-vsctl[52902]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain ovs-vsctl[52904]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true
Dec 02 08:00:11 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created
Dec 02 08:00:11 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain ovs-vsctl[52916]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain ovs-vsctl[52920]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain ovs-vsctl[52922]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:6e:12:41
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain ovs-vsctl[52928]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain ovs-vsctl[52932]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain ovs-vsctl[52934]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]: Notice: Applied catalog in 0.43 seconds
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]: Application:
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:    Initial environment: production
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:    Converged environment: production
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:          Run mode: user
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]: Changes:
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:             Total: 14
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]: Events:
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:           Success: 14
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:             Total: 14
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]: Resources:
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:           Skipped: 12
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:           Changed: 14
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:       Out of sync: 14
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:             Total: 29
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]: Time:
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:              Exec: 0.02
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:    Config retrieval: 0.29
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:         Vs config: 0.37
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:    Transaction evaluation: 0.42
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:    Catalog application: 0.43
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:          Last run: 1764662412
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:             Total: 0.43
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]: Version:
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:            Config: 1764662411
Dec 02 08:00:12 np0005541914.localdomain puppet-user[52561]:            Puppet: 7.10.0
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain systemd[1]: libpod-acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7.scope: Deactivated successfully.
Dec 02 08:00:12 np0005541914.localdomain systemd[1]: libpod-acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7.scope: Consumed 3.163s CPU time.
Dec 02 08:00:12 np0005541914.localdomain systemd[1]: libpod-a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a.scope: Deactivated successfully.
Dec 02 08:00:12 np0005541914.localdomain systemd[1]: libpod-a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a.scope: Consumed 2.390s CPU time.
Dec 02 08:00:12 np0005541914.localdomain podman[52224]: 2025-12-02 08:00:12.238035763 +0000 UTC m=+3.678448275 container died acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-central, config_id=tripleo_puppet_step1, build-date=2025-11-19T00:11:59Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-central-container, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, vendor=Red Hat, Inc., container_name=container-puppet-ceilometer, tcib_managed=true)
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain podman[52482]: 2025-12-02 08:00:12.290996512 +0000 UTC m=+2.967360161 container died a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:49Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-rsyslog, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, distribution-scope=public, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created
Dec 02 08:00:12 np0005541914.localdomain systemd[1]: libpod-7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e.scope: Deactivated successfully.
Dec 02 08:00:12 np0005541914.localdomain systemd[1]: libpod-7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e.scope: Consumed 2.876s CPU time.
Dec 02 08:00:12 np0005541914.localdomain podman[52504]: 2025-12-02 08:00:12.608095517 +0000 UTC m=+3.251801943 container died 7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_puppet_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=container-puppet-ovn_controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, architecture=x86_64, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:00:12 np0005541914.localdomain systemd[1]: tmp-crun.4jjSGK.mount: Deactivated successfully.
Dec 02 08:00:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7-userdata-shm.mount: Deactivated successfully.
Dec 02 08:00:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3d2cbcd6205ebc71bef7b0378e46c50958788e3d833a076a9d36ebe402a8a467-merged.mount: Deactivated successfully.
Dec 02 08:00:13 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully
Dec 02 08:00:13 np0005541914.localdomain podman[53001]: 2025-12-02 08:00:13.353692029 +0000 UTC m=+1.107345539 container cleanup acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, description=Red Hat OpenStack Platform 17.1 ceilometer-central, name=rhosp17/openstack-ceilometer-central, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-central-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=container-puppet-ceilometer, build-date=2025-11-19T00:11:59Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:00:13 np0005541914.localdomain python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::ceilometer::agent::polling
                                                         include tripleo::profile::base::ceilometer::agent::polling
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Dec 02 08:00:13 np0005541914.localdomain systemd[1]: libpod-conmon-acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7.scope: Deactivated successfully.
Dec 02 08:00:13 np0005541914.localdomain podman[52607]: 2025-12-02 08:00:09.683277755 +0000 UTC m=+0.025958695 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Dec 02 08:00:13 np0005541914.localdomain podman[53002]: 2025-12-02 08:00:13.377963133 +0000 UTC m=+1.129790118 container cleanup a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, container_name=container-puppet-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_id=tripleo_puppet_step1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=)
Dec 02 08:00:13 np0005541914.localdomain podman[53057]: 2025-12-02 08:00:13.396060473 +0000 UTC m=+0.777008800 container cleanup 7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, container_name=container-puppet-ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_puppet_step1, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:00:13 np0005541914.localdomain systemd[1]: libpod-conmon-a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a.scope: Deactivated successfully.
Dec 02 08:00:13 np0005541914.localdomain systemd[1]: libpod-conmon-7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e.scope: Deactivated successfully.
Dec 02 08:00:13 np0005541914.localdomain python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 02 08:00:13 np0005541914.localdomain python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::neutron::agents::ovn
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 02 08:00:13 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully
Dec 02 08:00:13 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created
Dec 02 08:00:13 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created
Dec 02 08:00:13 np0005541914.localdomain podman[53211]: 2025-12-02 08:00:13.636176083 +0000 UTC m=+0.068296087 container create 9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-server-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, architecture=x86_64, build-date=2025-11-19T00:23:27Z, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, container_name=container-puppet-neutron, config_id=tripleo_puppet_step1, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server)
Dec 02 08:00:13 np0005541914.localdomain systemd[1]: Started libpod-conmon-9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32.scope.
Dec 02 08:00:13 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:00:13 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73f9890a30d4cca7075aebf2d1c79838b39a1c605ffe5291a19916efb9ec9b29/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:13 np0005541914.localdomain podman[53211]: 2025-12-02 08:00:13.693508263 +0000 UTC m=+0.125628267 container init 9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, release=1761123044, name=rhosp17/openstack-neutron-server, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:23:27Z, tcib_managed=true, config_id=tripleo_puppet_step1, com.redhat.component=openstack-neutron-server-container, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=container-puppet-neutron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1)
Dec 02 08:00:13 np0005541914.localdomain podman[53211]: 2025-12-02 08:00:13.600722715 +0000 UTC m=+0.032842719 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Dec 02 08:00:13 np0005541914.localdomain podman[53211]: 2025-12-02 08:00:13.702836121 +0000 UTC m=+0.134956155 container start 9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:23:27Z, vendor=Red Hat, Inc., architecture=x86_64, container_name=container-puppet-neutron, config_id=tripleo_puppet_step1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, summary=Red Hat OpenStack Platform 17.1 neutron-server, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-server-container, description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, url=https://www.redhat.com)
Dec 02 08:00:13 np0005541914.localdomain podman[53211]: 2025-12-02 08:00:13.703077238 +0000 UTC m=+0.135197262 container attach 9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, distribution-scope=public, config_id=tripleo_puppet_step1, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, container_name=container-puppet-neutron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, url=https://www.redhat.com, build-date=2025-11-19T00:23:27Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, com.redhat.component=openstack-neutron-server-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Dec 02 08:00:13 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created
Dec 02 08:00:13 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created
Dec 02 08:00:13 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created
Dec 02 08:00:13 np0005541914.localdomain systemd[1]: tmp-crun.kOmGFi.mount: Deactivated successfully.
Dec 02 08:00:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-6ac3d5ef6cd74f750bad6e1bed4e64701dec5212d5cf52ac16ce138246b77afa-merged.mount: Deactivated successfully.
Dec 02 08:00:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a-userdata-shm.mount: Deactivated successfully.
Dec 02 08:00:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d40ebd622fb49c1d984ae69be39f1f1d5d9bbd0185c9e75888b797dd6f2afb7e-merged.mount: Deactivated successfully.
Dec 02 08:00:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e-userdata-shm.mount: Deactivated successfully.
Dec 02 08:00:13 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}66a7ab6cc1a19ea5002a5aaa2cfb2f196778c89c859d0afac926fe3fac9c75a4'
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Notice: Applied catalog in 4.65 seconds
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Application:
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:    Initial environment: production
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:    Converged environment: production
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:          Run mode: user
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Changes:
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:             Total: 183
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Events:
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:           Success: 183
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:             Total: 183
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Resources:
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:           Changed: 183
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:       Out of sync: 183
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:           Skipped: 57
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:             Total: 487
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Time:
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:       Concat file: 0.00
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:    Concat fragment: 0.00
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:            Anchor: 0.00
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:         File line: 0.00
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:    Virtlogd config: 0.00
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:    Virtstoraged config: 0.01
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:    Virtqemud config: 0.01
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:           Package: 0.02
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:    Virtsecretd config: 0.02
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:    Virtproxyd config: 0.03
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:              File: 0.03
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:              Exec: 0.04
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:    Virtnodedevd config: 0.05
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:            Augeas: 1.14
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:    Config retrieval: 1.56
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:          Last run: 1764662414
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:       Nova config: 3.10
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:    Transaction evaluation: 4.64
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:    Catalog application: 4.65
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:         Resources: 0.00
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:             Total: 4.65
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]: Version:
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:            Config: 1764662408
Dec 02 08:00:14 np0005541914.localdomain puppet-user[51840]:            Puppet: 7.10.0
Dec 02 08:00:15 np0005541914.localdomain puppet-user[53241]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass
Dec 02 08:00:15 np0005541914.localdomain puppet-user[53241]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 08:00:15 np0005541914.localdomain puppet-user[53241]:    (file: /etc/puppet/hiera.yaml)
Dec 02 08:00:15 np0005541914.localdomain puppet-user[53241]: Warning: Undefined variable '::deploy_config_name';
Dec 02 08:00:15 np0005541914.localdomain puppet-user[53241]:    (file & line not available)
Dec 02 08:00:15 np0005541914.localdomain puppet-user[53241]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 08:00:15 np0005541914.localdomain puppet-user[53241]:    (file & line not available)
Dec 02 08:00:15 np0005541914.localdomain systemd[1]: libpod-0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b.scope: Deactivated successfully.
Dec 02 08:00:15 np0005541914.localdomain systemd[1]: libpod-0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b.scope: Consumed 8.665s CPU time.
Dec 02 08:00:15 np0005541914.localdomain puppet-user[53241]: Warning: Unknown variable: 'dhcp_agents_per_net'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37)
Dec 02 08:00:15 np0005541914.localdomain podman[53353]: 2025-12-02 08:00:15.641337593 +0000 UTC m=+0.034603014 container died 0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_puppet_step1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, container_name=container-puppet-nova_libvirt, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 02 08:00:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b-userdata-shm.mount: Deactivated successfully.
Dec 02 08:00:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-b31a729f52d6f9ece82ff86db83ec0c0420ae47f49a38ed5b1f2bb83a229399e-merged.mount: Deactivated successfully.
Dec 02 08:00:15 np0005541914.localdomain podman[53353]: 2025-12-02 08:00:15.761696622 +0000 UTC m=+0.154962063 container cleanup 0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-nova_libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-nova-libvirt-container)
Dec 02 08:00:15 np0005541914.localdomain systemd[1]: libpod-conmon-0d054a117c7c46e13ca1c41c72142c6e4f9c31e859e3ab54e5194094c2c4096b.scope: Deactivated successfully.
Dec 02 08:00:15 np0005541914.localdomain python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages
                                                         # TODO(emilien): figure how to deal with libvirt profile.
                                                         # We'll probably treat it like we do with Neutron plugins.
                                                         # Until then, just include it in the default nova-compute role.
                                                         include tripleo::profile::base::nova::compute::libvirt
                                                         
                                                         include tripleo::profile::base::nova::libvirt
                                                         
                                                         include tripleo::profile::base::nova::compute::libvirt_guests
                                                         
                                                         include tripleo::profile::base::sshd
                                                         include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.66 seconds
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain sshd[53388]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Notice: Applied catalog in 0.47 seconds
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Application:
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:    Initial environment: production
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:    Converged environment: production
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:          Run mode: user
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Changes:
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:             Total: 33
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Events:
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:           Success: 33
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:             Total: 33
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Resources:
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:           Skipped: 21
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:           Changed: 33
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:       Out of sync: 33
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:             Total: 155
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Time:
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:         Resources: 0.00
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:    Ovn metadata agent config: 0.02
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:    Neutron config: 0.39
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:    Transaction evaluation: 0.46
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:    Catalog application: 0.47
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:    Config retrieval: 0.73
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:          Last run: 1764662416
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:             Total: 0.47
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]: Version:
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:            Config: 1764662415
Dec 02 08:00:16 np0005541914.localdomain puppet-user[53241]:            Puppet: 7.10.0
Dec 02 08:00:17 np0005541914.localdomain systemd[1]: libpod-9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32.scope: Deactivated successfully.
Dec 02 08:00:17 np0005541914.localdomain systemd[1]: libpod-9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32.scope: Consumed 3.534s CPU time.
Dec 02 08:00:17 np0005541914.localdomain podman[53211]: 2025-12-02 08:00:17.246419943 +0000 UTC m=+3.678539967 container died 9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, summary=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:23:27Z, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-server-container, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp17/openstack-neutron-server, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-server, container_name=container-puppet-neutron, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Dec 02 08:00:17 np0005541914.localdomain systemd[1]: tmp-crun.3F0vT0.mount: Deactivated successfully.
Dec 02 08:00:17 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32-userdata-shm.mount: Deactivated successfully.
Dec 02 08:00:17 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-73f9890a30d4cca7075aebf2d1c79838b39a1c605ffe5291a19916efb9ec9b29-merged.mount: Deactivated successfully.
Dec 02 08:00:17 np0005541914.localdomain podman[53424]: 2025-12-02 08:00:17.396634472 +0000 UTC m=+0.139869492 container cleanup 9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:23:27Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=container-puppet-neutron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-neutron-server, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, com.redhat.component=openstack-neutron-server-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Dec 02 08:00:17 np0005541914.localdomain systemd[1]: libpod-conmon-9a96d3f913d1b4dde6250bc3d5b2f8cf117698d47c8dab6e4724b5c4e6a31a32.scope: Deactivated successfully.
Dec 02 08:00:17 np0005541914.localdomain python3[51455]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005541914 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::neutron::ovn_metadata
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005541914', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Dec 02 08:00:17 np0005541914.localdomain sudo[51453]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:18 np0005541914.localdomain sshd[53388]: Invalid user anaconda from 182.253.156.173 port 53176
Dec 02 08:00:18 np0005541914.localdomain sudo[53475]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ueqsiagakdrcftrtgnyreusgfrytgvxi ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:18 np0005541914.localdomain sudo[53475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:18 np0005541914.localdomain sshd[53388]: Received disconnect from 182.253.156.173 port 53176:11: Bye Bye [preauth]
Dec 02 08:00:18 np0005541914.localdomain sshd[53388]: Disconnected from invalid user anaconda 182.253.156.173 port 53176 [preauth]
Dec 02 08:00:18 np0005541914.localdomain python3[53477]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:00:18 np0005541914.localdomain sudo[53475]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:18 np0005541914.localdomain sudo[53491]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-awbdmzfcvvtertlqzjmupjgmpummrfub ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:18 np0005541914.localdomain sudo[53491]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:18 np0005541914.localdomain sudo[53491]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:19 np0005541914.localdomain sudo[53507]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qwhywqgwozaroluwsvzhtbczgjctinob ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:19 np0005541914.localdomain sudo[53507]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:19 np0005541914.localdomain python3[53509]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:00:19 np0005541914.localdomain sudo[53507]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:19 np0005541914.localdomain sudo[53557]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-auxoptczqiiewljkqdbovemylpflmavx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:19 np0005541914.localdomain sudo[53557]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:19 np0005541914.localdomain python3[53559]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:00:19 np0005541914.localdomain sudo[53557]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:19 np0005541914.localdomain sudo[53600]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-elxtjqfolrbsneegikycrhtlhcalvnhp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:19 np0005541914.localdomain sudo[53600]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:20 np0005541914.localdomain python3[53602]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662419.4198186-83439-90547271906757/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:00:20 np0005541914.localdomain sudo[53600]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:20 np0005541914.localdomain sudo[53662]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sydpxlfjwwkmvarmlgomjdqmyeawralx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:20 np0005541914.localdomain sudo[53662]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:20 np0005541914.localdomain python3[53664]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:00:20 np0005541914.localdomain sudo[53662]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:20 np0005541914.localdomain sudo[53705]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oskfqlabrybfhvatxaojwvitprevntfo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:20 np0005541914.localdomain sudo[53705]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:20 np0005541914.localdomain python3[53707]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662420.2006798-83439-201456353606193/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:00:20 np0005541914.localdomain sudo[53705]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:21 np0005541914.localdomain sudo[53767]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-csrjvnjcaigqzhjkvxvdvwhiqlkuuxph ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:21 np0005541914.localdomain sudo[53767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:21 np0005541914.localdomain python3[53769]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:00:21 np0005541914.localdomain sudo[53767]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:21 np0005541914.localdomain sudo[53810]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hripcjhqwuxltnotoxzqpuyksqcagfvo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:21 np0005541914.localdomain sudo[53810]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:21 np0005541914.localdomain python3[53812]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662421.1028492-83526-103879694803810/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:00:21 np0005541914.localdomain sudo[53810]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:22 np0005541914.localdomain sudo[53872]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fmzgijjargogsgxdldgmgrktuilflsla ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:22 np0005541914.localdomain sudo[53872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:22 np0005541914.localdomain python3[53874]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:00:22 np0005541914.localdomain sudo[53872]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:22 np0005541914.localdomain sudo[53915]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfwvnxixdemlxchmqqkqxsopgyulpevd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:22 np0005541914.localdomain sudo[53915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:22 np0005541914.localdomain python3[53917]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662421.9341533-83564-244804915133567/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:00:22 np0005541914.localdomain sudo[53915]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:22 np0005541914.localdomain sudo[53945]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wszbfebfhyqlnvzaifopfamfsdvxoxrg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:22 np0005541914.localdomain sudo[53945]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:23 np0005541914.localdomain python3[53947]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:00:23 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:00:23 np0005541914.localdomain systemd-rc-local-generator[53969]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:00:23 np0005541914.localdomain systemd-sysv-generator[53973]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:00:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:00:23 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:00:23 np0005541914.localdomain systemd-sysv-generator[54012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:00:23 np0005541914.localdomain systemd-rc-local-generator[54009]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:00:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:00:23 np0005541914.localdomain systemd[1]: Starting TripleO Container Shutdown...
Dec 02 08:00:23 np0005541914.localdomain systemd[1]: Finished TripleO Container Shutdown.
Dec 02 08:00:23 np0005541914.localdomain sudo[53945]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:23 np0005541914.localdomain sudo[54068]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ytovzshdcrgcsmxrawerwdfupuzrbreo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:23 np0005541914.localdomain sudo[54068]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:24 np0005541914.localdomain python3[54070]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:00:24 np0005541914.localdomain sudo[54068]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:24 np0005541914.localdomain sudo[54111]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-laxlstycuyzlinudlsjalgtkapnelnro ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:24 np0005541914.localdomain sudo[54111]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:24 np0005541914.localdomain python3[54113]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662423.8682857-83617-259391825803792/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:00:24 np0005541914.localdomain sudo[54111]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:24 np0005541914.localdomain sudo[54173]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qybqyjqdyjxsfxphdwsnaojfwcwsbudo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:24 np0005541914.localdomain sudo[54173]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:25 np0005541914.localdomain python3[54175]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:00:25 np0005541914.localdomain sudo[54173]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:25 np0005541914.localdomain sudo[54216]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ithounmjoyycttahxyvpbffemcjlixkl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:25 np0005541914.localdomain sudo[54216]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:25 np0005541914.localdomain python3[54218]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662424.7594256-83644-131110726709512/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:00:25 np0005541914.localdomain sudo[54216]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:25 np0005541914.localdomain sudo[54246]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-molxcyjfcfvwbbhihgdlbysihcjocjpw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:25 np0005541914.localdomain sudo[54246]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:25 np0005541914.localdomain python3[54248]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:00:25 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:00:26 np0005541914.localdomain systemd-rc-local-generator[54271]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:00:26 np0005541914.localdomain systemd-sysv-generator[54274]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:00:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:00:26 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:00:26 np0005541914.localdomain systemd-rc-local-generator[54310]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:00:26 np0005541914.localdomain systemd-sysv-generator[54315]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:00:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:00:26 np0005541914.localdomain systemd[1]: Starting Create netns directory...
Dec 02 08:00:26 np0005541914.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 02 08:00:26 np0005541914.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 02 08:00:26 np0005541914.localdomain systemd[1]: Finished Create netns directory.
Dec 02 08:00:26 np0005541914.localdomain sudo[54246]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:26 np0005541914.localdomain sudo[54338]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bhfqghzotqwppxighjijronxphpaadhj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:26 np0005541914.localdomain sudo[54338]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: b56066700c0c3079c35d037ee6698236
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: d31718fcd17fdeee6489534105191c7a
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: d89676d7ec0a7c13ef9894fdb26c6e3a
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: 51230b537c6b56095225b7a0a6b952d0
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: 51230b537c6b56095225b7a0a6b952d0
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: 51230b537c6b56095225b7a0a6b952d0
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: 51230b537c6b56095225b7a0a6b952d0
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: 51230b537c6b56095225b7a0a6b952d0
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: 51230b537c6b56095225b7a0a6b952d0
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: 96606bb2d91ec59ed336cbd6010f1851
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 885e9e62222ac12bce952717b40ccfc4
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 885e9e62222ac12bce952717b40ccfc4
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: 51230b537c6b56095225b7a0a6b952d0
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: 51230b537c6b56095225b7a0a6b952d0
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: 6b6de39672ef4d892f2e8f81f38c430b
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: d89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0
Dec 02 08:00:27 np0005541914.localdomain python3[54340]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: 51230b537c6b56095225b7a0a6b952d0
Dec 02 08:00:27 np0005541914.localdomain sudo[54338]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:27 np0005541914.localdomain sudo[54354]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fqctuivritnfmqyjihiywfowbqucyonv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:27 np0005541914.localdomain sudo[54354]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:27 np0005541914.localdomain sudo[54354]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:28 np0005541914.localdomain sudo[54395]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nyijtvddxwtsvoogmsferilmdhwszpln ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:28 np0005541914.localdomain sudo[54395]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:28 np0005541914.localdomain python3[54397]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Dec 02 08:00:28 np0005541914.localdomain podman[54434]: 2025-12-02 08:00:28.956520266 +0000 UTC m=+0.093469778 container create b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr_init_logs, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 02 08:00:29 np0005541914.localdomain systemd[1]: Started libpod-conmon-b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2.scope.
Dec 02 08:00:29 np0005541914.localdomain podman[54434]: 2025-12-02 08:00:28.910223895 +0000 UTC m=+0.047173417 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 02 08:00:29 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:00:29 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d735ed10a550a807437a0617701eca41c00b16c522094f4bdfdfee4840a918b/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:29 np0005541914.localdomain podman[54434]: 2025-12-02 08:00:29.042925873 +0000 UTC m=+0.179875365 container init b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, container_name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12)
Dec 02 08:00:29 np0005541914.localdomain podman[54434]: 2025-12-02 08:00:29.053847988 +0000 UTC m=+0.190797480 container start b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, container_name=metrics_qdr_init_logs, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:00:29 np0005541914.localdomain podman[54434]: 2025-12-02 08:00:29.054142627 +0000 UTC m=+0.191092249 container attach b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, vcs-type=git, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr_init_logs, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, version=17.1.12, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:00:29 np0005541914.localdomain systemd[1]: libpod-b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2.scope: Deactivated successfully.
Dec 02 08:00:29 np0005541914.localdomain podman[54434]: 2025-12-02 08:00:29.060864357 +0000 UTC m=+0.197813889 container died b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=metrics_qdr_init_logs, config_id=tripleo_step1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:00:29 np0005541914.localdomain podman[54453]: 2025-12-02 08:00:29.157323464 +0000 UTC m=+0.082148591 container cleanup b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, config_id=tripleo_step1, container_name=metrics_qdr_init_logs, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z)
Dec 02 08:00:29 np0005541914.localdomain systemd[1]: libpod-conmon-b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2.scope: Deactivated successfully.
Dec 02 08:00:29 np0005541914.localdomain python3[54397]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd
Dec 02 08:00:29 np0005541914.localdomain podman[54525]: 2025-12-02 08:00:29.637096509 +0000 UTC m=+0.083706577 container create 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.12, container_name=metrics_qdr, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:00:29 np0005541914.localdomain systemd[1]: Started libpod-conmon-67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.scope.
Dec 02 08:00:29 np0005541914.localdomain podman[54525]: 2025-12-02 08:00:29.592053046 +0000 UTC m=+0.038663134 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 02 08:00:29 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:00:29 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d22fb86a8cbaa2935fad3e910e4610328c0a9c2837bb75cb2a0cd28ff52849/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:29 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d22fb86a8cbaa2935fad3e910e4610328c0a9c2837bb75cb2a0cd28ff52849/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff)
Dec 02 08:00:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:00:29 np0005541914.localdomain podman[54525]: 2025-12-02 08:00:29.736184464 +0000 UTC m=+0.182794562 container init 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container)
Dec 02 08:00:29 np0005541914.localdomain sudo[54545]: qdrouterd : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:00:29 np0005541914.localdomain sudo[54545]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42465)
Dec 02 08:00:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:00:29 np0005541914.localdomain podman[54525]: 2025-12-02 08:00:29.776249458 +0000 UTC m=+0.222859556 container start 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1)
Dec 02 08:00:29 np0005541914.localdomain python3[54397]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=b56066700c0c3079c35d037ee6698236 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Dec 02 08:00:29 np0005541914.localdomain sudo[54545]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:29 np0005541914.localdomain podman[54547]: 2025-12-02 08:00:29.880086465 +0000 UTC m=+0.092584001 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step1, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:00:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5d735ed10a550a807437a0617701eca41c00b16c522094f4bdfdfee4840a918b-merged.mount: Deactivated successfully.
Dec 02 08:00:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b88074af85686fa52b8478fd68fe7ff9ad7b2b644023de1ef040f078d2cd54b2-userdata-shm.mount: Deactivated successfully.
Dec 02 08:00:30 np0005541914.localdomain sudo[54395]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:30 np0005541914.localdomain podman[54547]: 2025-12-02 08:00:30.123902935 +0000 UTC m=+0.336400521 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, tcib_managed=true, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container)
Dec 02 08:00:30 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:00:30 np0005541914.localdomain sudo[54617]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-icllomgcyzbxyhxkxsigbddnobmzakfe ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:30 np0005541914.localdomain sudo[54617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:30 np0005541914.localdomain python3[54619]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:00:30 np0005541914.localdomain sudo[54617]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:30 np0005541914.localdomain sudo[54633]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fvesgtpcqvmptnmiampmwhuknvjfqlst ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:30 np0005541914.localdomain sudo[54633]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:30 np0005541914.localdomain python3[54635]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:00:30 np0005541914.localdomain sudo[54633]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:30 np0005541914.localdomain sshd[54668]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:00:31 np0005541914.localdomain sudo[54696]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kazxrdhymwdiqupmsindhbngbwatescd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:31 np0005541914.localdomain sudo[54696]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:31 np0005541914.localdomain sshd[54668]: Invalid user sol from 45.148.10.240 port 38648
Dec 02 08:00:31 np0005541914.localdomain python3[54698]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662430.779021-83879-85923993452796/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:00:31 np0005541914.localdomain sshd[54668]: Connection closed by invalid user sol 45.148.10.240 port 38648 [preauth]
Dec 02 08:00:31 np0005541914.localdomain sudo[54696]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:31 np0005541914.localdomain sudo[54712]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djrqijoosrfjwcmddmlnpwkvmxrexhqz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:31 np0005541914.localdomain sudo[54712]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:31 np0005541914.localdomain python3[54714]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 08:00:31 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:00:31 np0005541914.localdomain systemd-rc-local-generator[54738]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:00:31 np0005541914.localdomain systemd-sysv-generator[54743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:00:31 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:00:32 np0005541914.localdomain sudo[54712]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:32 np0005541914.localdomain sudo[54764]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pbswbkytpayefypefveshbizmcotikdw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:00:32 np0005541914.localdomain sudo[54764]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:32 np0005541914.localdomain python3[54766]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:00:32 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:00:32 np0005541914.localdomain systemd-sysv-generator[54795]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:00:32 np0005541914.localdomain systemd-rc-local-generator[54791]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:00:32 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:00:32 np0005541914.localdomain systemd[1]: Starting metrics_qdr container...
Dec 02 08:00:33 np0005541914.localdomain systemd[1]: Started metrics_qdr container.
Dec 02 08:00:33 np0005541914.localdomain sudo[54764]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:33 np0005541914.localdomain sudo[54844]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gbfmoxlapznurxtezzzokoqzowyguurm ; /usr/bin/python3
Dec 02 08:00:33 np0005541914.localdomain sudo[54844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:33 np0005541914.localdomain python3[54846]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:00:33 np0005541914.localdomain sudo[54844]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:33 np0005541914.localdomain sudo[54892]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylpahdeqwkfgfbgqxplbwzfiryxuttyc ; /usr/bin/python3
Dec 02 08:00:33 np0005541914.localdomain sudo[54892]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:34 np0005541914.localdomain sudo[54892]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:34 np0005541914.localdomain sudo[54935]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqhrohixmsjtforyvmcdohkgpacvpmum ; /usr/bin/python3
Dec 02 08:00:34 np0005541914.localdomain sudo[54935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:34 np0005541914.localdomain sudo[54935]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:34 np0005541914.localdomain sudo[54965]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lhiuztskiwcniamnifjtjpnltfkodnwm ; /usr/bin/python3
Dec 02 08:00:34 np0005541914.localdomain sudo[54965]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:34 np0005541914.localdomain python3[54967]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005541914 step=1 update_config_hash_only=False
Dec 02 08:00:34 np0005541914.localdomain sudo[54965]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:35 np0005541914.localdomain sudo[54981]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kizptseidpjkirdoiekjnwbhncmgdywd ; /usr/bin/python3
Dec 02 08:00:35 np0005541914.localdomain sudo[54981]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:35 np0005541914.localdomain python3[54983]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:00:35 np0005541914.localdomain sudo[54981]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:35 np0005541914.localdomain sudo[54997]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fjkkdtacgixcvzibgwyjgvyvsglmaulp ; /usr/bin/python3
Dec 02 08:00:35 np0005541914.localdomain sudo[54997]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:00:35 np0005541914.localdomain python3[54999]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 02 08:00:35 np0005541914.localdomain sudo[54997]: pam_unix(sudo:session): session closed for user root
Dec 02 08:00:56 np0005541914.localdomain sshd[55000]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:00:58 np0005541914.localdomain sshd[55000]: Received disconnect from 103.52.115.25 port 44360:11: Bye Bye [preauth]
Dec 02 08:00:58 np0005541914.localdomain sshd[55000]: Disconnected from authenticating user root 103.52.115.25 port 44360 [preauth]
Dec 02 08:01:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:01:01 np0005541914.localdomain podman[55002]: 2025-12-02 08:01:01.057546085 +0000 UTC m=+0.066874935 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Dec 02 08:01:01 np0005541914.localdomain podman[55002]: 2025-12-02 08:01:01.233229474 +0000 UTC m=+0.242558294 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step1, release=1761123044)
Dec 02 08:01:01 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:01:01 np0005541914.localdomain CROND[55032]: (root) CMD (run-parts /etc/cron.hourly)
Dec 02 08:01:01 np0005541914.localdomain run-parts[55035]: (/etc/cron.hourly) starting 0anacron
Dec 02 08:01:01 np0005541914.localdomain run-parts[55041]: (/etc/cron.hourly) finished 0anacron
Dec 02 08:01:01 np0005541914.localdomain CROND[55031]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 02 08:01:06 np0005541914.localdomain sudo[55042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:01:06 np0005541914.localdomain sudo[55042]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:01:06 np0005541914.localdomain sudo[55042]: pam_unix(sudo:session): session closed for user root
Dec 02 08:01:06 np0005541914.localdomain sudo[55057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:01:06 np0005541914.localdomain sudo[55057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:01:07 np0005541914.localdomain sudo[55057]: pam_unix(sudo:session): session closed for user root
Dec 02 08:01:08 np0005541914.localdomain sudo[55103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:01:08 np0005541914.localdomain sudo[55103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:01:08 np0005541914.localdomain sudo[55103]: pam_unix(sudo:session): session closed for user root
Dec 02 08:01:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:01:32 np0005541914.localdomain systemd[1]: tmp-crun.A5RDPF.mount: Deactivated successfully.
Dec 02 08:01:32 np0005541914.localdomain podman[55118]: 2025-12-02 08:01:32.09164362 +0000 UTC m=+0.099711945 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, container_name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec 02 08:01:32 np0005541914.localdomain podman[55118]: 2025-12-02 08:01:32.291847303 +0000 UTC m=+0.299915638 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, container_name=metrics_qdr, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team)
Dec 02 08:01:32 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:01:32 np0005541914.localdomain sshd[55147]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:01:34 np0005541914.localdomain sshd[55147]: Invalid user init from 182.253.156.173 port 46496
Dec 02 08:01:34 np0005541914.localdomain sshd[55147]: Received disconnect from 182.253.156.173 port 46496:11: Bye Bye [preauth]
Dec 02 08:01:34 np0005541914.localdomain sshd[55147]: Disconnected from invalid user init 182.253.156.173 port 46496 [preauth]
Dec 02 08:02:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:02:03 np0005541914.localdomain systemd[1]: tmp-crun.auX6cG.mount: Deactivated successfully.
Dec 02 08:02:03 np0005541914.localdomain podman[55149]: 2025-12-02 08:02:03.085884915 +0000 UTC m=+0.086581022 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, release=1761123044, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:02:03 np0005541914.localdomain podman[55149]: 2025-12-02 08:02:03.282480088 +0000 UTC m=+0.283176205 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, config_id=tripleo_step1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=)
Dec 02 08:02:03 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:02:08 np0005541914.localdomain sudo[55179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:02:08 np0005541914.localdomain sudo[55179]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:02:08 np0005541914.localdomain sudo[55179]: pam_unix(sudo:session): session closed for user root
Dec 02 08:02:08 np0005541914.localdomain sudo[55194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:02:08 np0005541914.localdomain sudo[55194]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:02:09 np0005541914.localdomain sudo[55194]: pam_unix(sudo:session): session closed for user root
Dec 02 08:02:09 np0005541914.localdomain sudo[55242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:02:09 np0005541914.localdomain sudo[55242]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:02:09 np0005541914.localdomain sudo[55242]: pam_unix(sudo:session): session closed for user root
Dec 02 08:02:14 np0005541914.localdomain sshd[55257]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:02:15 np0005541914.localdomain sshd[55257]: Received disconnect from 103.52.115.25 port 44750:11: Bye Bye [preauth]
Dec 02 08:02:15 np0005541914.localdomain sshd[55257]: Disconnected from authenticating user root 103.52.115.25 port 44750 [preauth]
Dec 02 08:02:31 np0005541914.localdomain sshd[55259]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:02:32 np0005541914.localdomain sshd[55259]: Invalid user sol from 45.148.10.240 port 48864
Dec 02 08:02:32 np0005541914.localdomain sshd[55259]: Connection closed by invalid user sol 45.148.10.240 port 48864 [preauth]
Dec 02 08:02:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:02:34 np0005541914.localdomain podman[55261]: 2025-12-02 08:02:34.077372809 +0000 UTC m=+0.084588810 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container)
Dec 02 08:02:34 np0005541914.localdomain podman[55261]: 2025-12-02 08:02:34.260969195 +0000 UTC m=+0.268185216 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:02:34 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:02:44 np0005541914.localdomain sshd[55289]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:02:46 np0005541914.localdomain sshd[55289]: Invalid user devuser from 182.253.156.173 port 47250
Dec 02 08:02:46 np0005541914.localdomain sshd[55289]: Received disconnect from 182.253.156.173 port 47250:11: Bye Bye [preauth]
Dec 02 08:02:46 np0005541914.localdomain sshd[55289]: Disconnected from invalid user devuser 182.253.156.173 port 47250 [preauth]
Dec 02 08:03:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:03:05 np0005541914.localdomain systemd[1]: tmp-crun.XYiGHe.mount: Deactivated successfully.
Dec 02 08:03:05 np0005541914.localdomain podman[55291]: 2025-12-02 08:03:05.060005467 +0000 UTC m=+0.067909264 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:03:05 np0005541914.localdomain podman[55291]: 2025-12-02 08:03:05.243313065 +0000 UTC m=+0.251216932 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, version=17.1.12, release=1761123044)
Dec 02 08:03:05 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:03:10 np0005541914.localdomain sudo[55320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:03:10 np0005541914.localdomain sudo[55320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:03:10 np0005541914.localdomain sudo[55320]: pam_unix(sudo:session): session closed for user root
Dec 02 08:03:10 np0005541914.localdomain sudo[55335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:03:10 np0005541914.localdomain sudo[55335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:03:10 np0005541914.localdomain sudo[55335]: pam_unix(sudo:session): session closed for user root
Dec 02 08:03:11 np0005541914.localdomain sudo[55381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:03:11 np0005541914.localdomain sudo[55381]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:03:11 np0005541914.localdomain sudo[55381]: pam_unix(sudo:session): session closed for user root
Dec 02 08:03:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:03:36 np0005541914.localdomain systemd[1]: tmp-crun.lJy87M.mount: Deactivated successfully.
Dec 02 08:03:36 np0005541914.localdomain podman[55396]: 2025-12-02 08:03:36.071561256 +0000 UTC m=+0.078605167 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044)
Dec 02 08:03:36 np0005541914.localdomain podman[55396]: 2025-12-02 08:03:36.295216025 +0000 UTC m=+0.302259936 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, vcs-type=git)
Dec 02 08:03:36 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:03:40 np0005541914.localdomain sshd[55427]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:03:42 np0005541914.localdomain sshd[55427]: Received disconnect from 103.52.115.25 port 36932:11: Bye Bye [preauth]
Dec 02 08:03:42 np0005541914.localdomain sshd[55427]: Disconnected from authenticating user root 103.52.115.25 port 36932 [preauth]
Dec 02 08:03:58 np0005541914.localdomain sshd[55429]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:03:58 np0005541914.localdomain sshd[55429]: Connection reset by 182.253.156.173 port 50568 [preauth]
Dec 02 08:04:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:04:07 np0005541914.localdomain podman[55431]: 2025-12-02 08:04:07.136158816 +0000 UTC m=+0.138389271 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, tcib_managed=true, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=metrics_qdr, io.buildah.version=1.41.4)
Dec 02 08:04:07 np0005541914.localdomain podman[55431]: 2025-12-02 08:04:07.351126771 +0000 UTC m=+0.353357246 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=)
Dec 02 08:04:07 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:04:11 np0005541914.localdomain sudo[55460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:04:11 np0005541914.localdomain sudo[55460]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:04:11 np0005541914.localdomain sudo[55460]: pam_unix(sudo:session): session closed for user root
Dec 02 08:04:11 np0005541914.localdomain sudo[55475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:04:11 np0005541914.localdomain sudo[55475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:04:12 np0005541914.localdomain sudo[55475]: pam_unix(sudo:session): session closed for user root
Dec 02 08:04:13 np0005541914.localdomain sudo[55521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:04:13 np0005541914.localdomain sudo[55521]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:04:13 np0005541914.localdomain sudo[55521]: pam_unix(sudo:session): session closed for user root
Dec 02 08:04:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:04:38 np0005541914.localdomain podman[55536]: 2025-12-02 08:04:38.056589928 +0000 UTC m=+0.064053277 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=)
Dec 02 08:04:38 np0005541914.localdomain podman[55536]: 2025-12-02 08:04:38.250913016 +0000 UTC m=+0.258376355 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, release=1761123044, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1)
Dec 02 08:04:38 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:04:39 np0005541914.localdomain sshd[55564]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:04:40 np0005541914.localdomain sshd[55564]: Invalid user sol from 45.148.10.240 port 51632
Dec 02 08:04:40 np0005541914.localdomain sshd[55564]: Connection closed by invalid user sol 45.148.10.240 port 51632 [preauth]
Dec 02 08:05:04 np0005541914.localdomain sshd[55566]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:05:05 np0005541914.localdomain sshd[55566]: Invalid user tmp from 103.52.115.25 port 34440
Dec 02 08:05:05 np0005541914.localdomain sshd[55566]: Received disconnect from 103.52.115.25 port 34440:11: Bye Bye [preauth]
Dec 02 08:05:05 np0005541914.localdomain sshd[55566]: Disconnected from invalid user tmp 103.52.115.25 port 34440 [preauth]
Dec 02 08:05:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:05:09 np0005541914.localdomain podman[55568]: 2025-12-02 08:05:09.069021553 +0000 UTC m=+0.071895807 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 02 08:05:09 np0005541914.localdomain podman[55568]: 2025-12-02 08:05:09.301946189 +0000 UTC m=+0.304820413 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, version=17.1.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc.)
Dec 02 08:05:09 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:05:13 np0005541914.localdomain sudo[55596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:05:13 np0005541914.localdomain sudo[55596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:05:13 np0005541914.localdomain sudo[55596]: pam_unix(sudo:session): session closed for user root
Dec 02 08:05:13 np0005541914.localdomain sudo[55611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:05:13 np0005541914.localdomain sudo[55611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:05:13 np0005541914.localdomain sudo[55611]: pam_unix(sudo:session): session closed for user root
Dec 02 08:05:14 np0005541914.localdomain sudo[55658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:05:14 np0005541914.localdomain sudo[55658]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:05:14 np0005541914.localdomain sudo[55658]: pam_unix(sudo:session): session closed for user root
Dec 02 08:05:27 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 21 pg[2.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [4,5,3] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:28 np0005541914.localdomain sshd[55673]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:05:28 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 22 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [4,5,3] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:31 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 23 pg[3.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [5,4,0] r=1 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:32 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 25 pg[4.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [3,4,5] r=1 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:34 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 27 pg[5.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=27) [2,3,4] r=2 lpr=27 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:38 np0005541914.localdomain sshd[55673]: Connection closed by 71.6.199.65 port 33752 [preauth]
Dec 02 08:05:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:05:40 np0005541914.localdomain podman[55675]: 2025-12-02 08:05:40.08923907 +0000 UTC m=+0.089076253 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4)
Dec 02 08:05:40 np0005541914.localdomain podman[55675]: 2025-12-02 08:05:40.306138817 +0000 UTC m=+0.305975960 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:05:40 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:05:41 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 33 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=11.269608498s) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active pruub 1117.107177734s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,3], acting [4,5,3] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:41 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 33 pg[3.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=13.772205353s) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active pruub 1119.609741211s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,0], acting [5,4,0] -> [5,4,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:41 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 33 pg[2.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=11.269608498s) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown pruub 1117.107177734s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:41 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 33 pg[3.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=13.769854546s) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1119.609741211s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.19( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.18( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.19( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.17( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.16( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.16( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.18( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.17( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.15( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.14( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.14( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.15( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.13( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.12( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.12( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.13( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.11( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.10( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.10( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.11( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.e( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.f( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.d( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.a( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.b( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.1( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.6( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.7( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.7( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.c( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.2( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.3( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.6( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.3( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.2( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.4( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.4( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.5( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.5( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.8( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.9( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.9( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.1b( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.8( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.1d( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.1a( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.1c( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.1e( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[3.1f( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [5,4,0] r=1 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.0( empty local-lis/les=33/34 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:42 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [4,5,3] r=0 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:43 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 35 pg[5.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=15.478153229s) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 1123.410644531s@ mbc={}] start_peering_interval up [2,3,4] -> [2,3,4], acting [2,3,4] -> [2,3,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:43 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 35 pg[4.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=35 pruub=13.107388496s) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 active pruub 1121.039916992s@ mbc={}] start_peering_interval up [3,4,5] -> [3,4,5], acting [3,4,5] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:43 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 35 pg[5.0( empty local-lis/les=27/28 n=0 ec=27/27 lis/c=27/27 les/c/f=28/28/0 sis=35 pruub=15.476300240s) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1123.410644531s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:43 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 35 pg[4.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=35 pruub=13.105083466s) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1121.039916992s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.0 deep-scrub starts
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.0 deep-scrub ok
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.18( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.18( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.19( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1b( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.19( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1a( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1b( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1d( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1c( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1a( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1c( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1d( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.f( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.e( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.f( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.3( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.e( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.2( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.2( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.3( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.5( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.4( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.5( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.7( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.6( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.6( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.c( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.7( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.d( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.d( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.c( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.a( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.b( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.b( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.4( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.8( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.9( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.9( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.8( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.a( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.17( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.17( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.16( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.14( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.16( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.15( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.14( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.13( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.13( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.12( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.12( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.10( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.15( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.11( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.10( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1e( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[4.1f( empty local-lis/les=25/26 n=0 ec=35/25 lis/c=25/25 les/c/f=26/26/0 sis=35) [3,4,5] r=1 lpr=35 pi=[25,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1f( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.1e( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:44 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 36 pg[5.11( empty local-lis/les=27/28 n=0 ec=35/27 lis/c=27/27 les/c/f=28/28/0 sis=35) [2,3,4] r=2 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:46 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 02 08:05:46 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 02 08:05:48 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.1a deep-scrub starts
Dec 02 08:05:48 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.1a deep-scrub ok
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.d( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.f( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.10( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.14( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.9( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.13( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.1b( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.11( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,2,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.1c( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.16( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.16( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,3,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.19( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643611908s) [0,1,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913696289s@ mbc={}] start_peering_interval up [5,4,0] -> [0,1,2], acting [5,4,0] -> [0,1,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775737762s) [1,2,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045898438s@ mbc={}] start_peering_interval up [2,3,4] -> [1,2,0], acting [2,3,4] -> [1,2,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1f( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774971962s) [4,5,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045288086s@ mbc={}] start_peering_interval up [2,3,4] -> [4,5,3], acting [2,3,4] -> [4,5,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.19( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643533707s) [0,1,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913696289s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1f( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774971962s) [4,5,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.045288086s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.11( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775689125s) [1,2,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.045898438s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.14( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643563271s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913818359s@ mbc={}] start_peering_interval up [5,4,0] -> [1,2,0], acting [5,4,0] -> [1,2,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.14( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643545151s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913818359s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.15( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643477440s) [2,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914062500s@ mbc={}] start_peering_interval up [5,4,0] -> [2,1,0], acting [5,4,0] -> [2,1,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.12( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643494606s) [0,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914062500s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.15( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643445015s) [2,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914062500s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.12( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643472672s) [0,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914062500s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774338722s) [0,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045654297s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,2], acting [2,3,4] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.17( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643100739s) [0,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914062500s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.13( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642590523s) [1,3,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913818359s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,2], acting [5,4,0] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.10( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643815041s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915161133s@ mbc={}] start_peering_interval up [5,4,0] -> [1,5,3], acting [5,4,0] -> [1,5,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.17( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642649651s) [0,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914062500s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.10( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.643723488s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915161133s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.13( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642518997s) [1,3,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913818359s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772107124s) [1,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.043823242s@ mbc={}] start_peering_interval up [2,3,4] -> [1,5,0], acting [2,3,4] -> [1,5,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642672539s) [2,4,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914306641s@ mbc={}] start_peering_interval up [5,4,0] -> [2,4,0], acting [5,4,0] -> [2,4,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1e( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774074554s) [0,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.045654297s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642640114s) [2,4,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914306641s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.9( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772087097s) [1,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.043823242s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642460823s) [1,5,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914306641s@ mbc={}] start_peering_interval up [5,4,0] -> [1,5,0], acting [5,4,0] -> [1,5,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642405510s) [1,5,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914306641s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773132324s) [0,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045166016s@ mbc={}] start_peering_interval up [2,3,4] -> [0,2,4], acting [2,3,4] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642447472s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914428711s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.a( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773102760s) [0,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.045166016s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642099380s) [1,2,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914306641s@ mbc={}] start_peering_interval up [5,4,0] -> [1,2,3], acting [5,4,0] -> [1,2,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642024994s) [1,2,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914306641s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.767205238s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.039550781s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.7( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.767205238s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.039550781s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642556190s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914916992s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771138191s) [2,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.043579102s@ mbc={}] start_peering_interval up [2,3,4] -> [2,0,1], acting [2,3,4] -> [2,0,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.8( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771058083s) [2,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.043579102s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642556190s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914916992s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.642447472s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914428711s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.6( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641952515s) [0,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914672852s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766492844s) [0,4,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.039184570s@ mbc={}] start_peering_interval up [2,3,4] -> [0,4,5], acting [2,3,4] -> [0,4,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.6( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641885757s) [0,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914672852s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.3( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641731262s) [4,0,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914672852s@ mbc={}] start_peering_interval up [5,4,0] -> [4,0,5], acting [5,4,0] -> [4,0,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.5( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766381264s) [0,4,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.039184570s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.3( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641731262s) [4,0,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914672852s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772113800s) [0,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045166016s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,2], acting [2,3,4] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.3( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772085190s) [0,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.045166016s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.5( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641571999s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914672852s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.5( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641571999s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914672852s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641440392s) [0,4,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914428711s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,2], acting [5,4,0] -> [0,4,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771500587s) [4,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.044677734s@ mbc={}] start_peering_interval up [2,3,4] -> [4,0,2], acting [2,3,4] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.2( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771500587s) [4,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.044677734s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.8( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641579628s) [2,0,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914916992s@ mbc={}] start_peering_interval up [5,4,0] -> [2,0,4], acting [5,4,0] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.8( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641543388s) [2,0,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914916992s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641202927s) [0,4,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914428711s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766919136s) [1,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.040405273s@ mbc={}] start_peering_interval up [2,3,4] -> [1,0,2], acting [2,3,4] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1b( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766897202s) [1,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.040405273s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.770991325s) [2,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.044555664s@ mbc={}] start_peering_interval up [2,3,4] -> [2,4,3], acting [2,3,4] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1a( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.770950317s) [2,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.044555664s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641399384s) [0,1,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915039062s@ mbc={}] start_peering_interval up [5,4,0] -> [0,1,5], acting [5,4,0] -> [0,1,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.640690804s) [1,3,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914428711s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,2], acting [5,4,0] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.641380310s) [0,1,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915039062s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.640630722s) [1,3,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914428711s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772137642s) [0,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046020508s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,5], acting [2,3,4] -> [0,1,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764846802s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.038696289s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,3], acting [2,3,4] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.18( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.764846802s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.038696289s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.19( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772120476s) [0,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.046020508s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.640861511s) [3,2,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915161133s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,4], acting [5,4,0] -> [3,2,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.640952110s) [0,4,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915283203s@ mbc={}] start_peering_interval up [4,5,3] -> [0,4,2], acting [4,5,3] -> [0,4,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.640828133s) [3,2,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915161133s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.640813828s) [0,4,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915283203s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639932632s) [3,5,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915039062s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639659882s) [3,5,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915039062s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775296211s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.050903320s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.18( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775296211s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.050903320s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775214195s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.051025391s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639222145s) [2,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915161133s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,0], acting [4,5,3] -> [2,1,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639042854s) [2,4,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914916992s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775214195s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.051025391s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.638940811s) [2,4,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914916992s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639085770s) [2,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915161133s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639106750s) [5,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915283203s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,3], acting [5,4,0] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.639085770s) [5,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915283203s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.780861855s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057128906s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.780861855s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.057128906s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.637595177s) [5,3,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914184570s@ mbc={}] start_peering_interval up [5,4,0] -> [5,3,4], acting [5,4,0] -> [5,3,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.19( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774245262s) [2,3,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.050903320s@ mbc={}] start_peering_interval up [3,4,5] -> [2,3,1], acting [3,4,5] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.637558937s) [5,3,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914184570s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.19( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774185181s) [2,3,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.050903320s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774166107s) [2,1,3] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.050903320s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,3], acting [3,4,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.770162582s) [4,2,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046997070s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,0], acting [2,3,4] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774127007s) [2,1,3] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.050903320s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1c( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.770162582s) [4,2,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.046997070s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630984306s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.908081055s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773820877s) [2,3,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.050903320s@ mbc={}] start_peering_interval up [3,4,5] -> [2,3,4], acting [3,4,5] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631194115s) [5,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.908081055s@ mbc={}] start_peering_interval up [4,5,3] -> [5,4,3], acting [4,5,3] -> [5,4,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636497498s) [5,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913452148s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,3], acting [5,4,0] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773624420s) [2,3,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.050903320s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.1b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636285782s) [5,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913452148s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.761549950s) [3,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.038940430s@ mbc={}] start_peering_interval up [2,3,4] -> [3,1,5], acting [2,3,4] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630871773s) [5,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.908081055s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1d( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.761529922s) [3,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.038940430s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630474091s) [3,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.907958984s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,5], acting [4,5,3] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630527496s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.908081055s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.16( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636249542s) [1,3,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913818359s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,5], acting [5,4,0] -> [1,3,5], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630374908s) [3,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.907958984s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773282051s) [3,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.051025391s@ mbc={}] start_peering_interval up [3,4,5] -> [3,2,4], acting [3,4,5] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773192406s) [3,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.051025391s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.16( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636229515s) [1,3,5] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913818359s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.768943787s) [2,0,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046752930s@ mbc={}] start_peering_interval up [2,3,4] -> [2,0,4], acting [2,3,4] -> [2,0,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769277573s) [1,3,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.047119141s@ mbc={}] start_peering_interval up [2,3,4] -> [1,3,2], acting [2,3,4] -> [1,3,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.9( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636951447s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914916992s@ mbc={}] start_peering_interval up [5,4,0] -> [5,1,3], acting [5,4,0] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.16( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.769010544s) [1,3,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.047119141s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.9( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.636927605s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914916992s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.e( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.768909454s) [2,0,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.046752930s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.767914772s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046630859s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,3], acting [2,3,4] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772156715s) [4,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.050903320s@ mbc={}] start_peering_interval up [3,4,5] -> [4,5,0], acting [3,4,5] -> [4,5,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.1a( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.4( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.635618210s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914550781s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,1], acting [5,4,0] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634724617s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913574219s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772156715s) [4,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.050903320s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.4( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.635580063s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914550781s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634698868s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913574219s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634529114s) [2,0,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913452148s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.f( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.767914772s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.046630859s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.3( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772084236s) [2,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.051147461s@ mbc={}] start_peering_interval up [3,4,5] -> [2,4,3], acting [3,4,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.3( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772066116s) [2,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.051147461s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766921043s) [2,4,0] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046020508s@ mbc={}] start_peering_interval up [2,3,4] -> [2,4,0], acting [2,3,4] -> [2,4,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634247780s) [2,0,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913452148s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.d( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.766852379s) [2,4,0] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.046020508s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771631241s) [2,1,3] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.051147461s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,3], acting [3,4,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.2( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771612167s) [2,1,3] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.051147461s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.8( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634372711s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914062500s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,5], acting [4,5,3] -> [4,3,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.2( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634899139s) [3,5,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914550781s@ mbc={}] start_peering_interval up [5,4,0] -> [3,5,1], acting [5,4,0] -> [3,5,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771496773s) [1,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.051147461s@ mbc={}] start_peering_interval up [3,4,5] -> [1,5,0], acting [3,4,5] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.2( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634829521s) [3,5,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914550781s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634372711s) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914062500s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.5( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771477699s) [1,5,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.051147461s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633814812s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913574219s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,1], acting [4,5,3] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633731842s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913574219s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633767128s) [1,0,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913696289s@ mbc={}] start_peering_interval up [4,5,3] -> [1,0,2], acting [4,5,3] -> [1,0,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633722305s) [1,0,2] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913696289s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.776829720s) [0,5,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.056884766s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,1], acting [3,4,5] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.4( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.776655197s) [0,5,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.056884766s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.765070915s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045654297s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.776633263s) [2,1,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057128906s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,0], acting [3,4,5] -> [2,1,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.7( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634313583s) [3,5,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914916992s@ mbc={}] start_peering_interval up [5,4,0] -> [3,5,4], acting [5,4,0] -> [3,5,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.7( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.634293556s) [3,5,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914916992s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.776563644s) [2,1,0] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057128906s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.5( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.2( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,0,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632801056s) [3,2,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913452148s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633387566s) [4,2,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914184570s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,3], acting [4,5,3] -> [4,2,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632761002s) [3,2,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913452148s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.633387566s) [4,2,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914184570s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632960320s) [3,5,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913818359s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632933617s) [3,5,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913818359s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.763193130s) [5,3,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.044189453s@ mbc={}] start_peering_interval up [2,3,4] -> [5,3,4], acting [2,3,4] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775778770s) [0,5,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057006836s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,4], acting [3,4,5] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.4( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.763002396s) [5,3,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.044189453s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.1( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.765070915s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.045654297s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.7( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775720596s) [0,5,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057006836s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.16( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.6( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775480270s) [5,3,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057128906s@ mbc={}] start_peering_interval up [3,4,5] -> [5,3,4], acting [3,4,5] -> [5,3,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.6( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.775454521s) [5,3,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057128906s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.17( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631974220s) [2,3,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914184570s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632916451s) [3,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915039062s@ mbc={}] start_peering_interval up [5,4,0] -> [3,4,5], acting [5,4,0] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631913185s) [2,3,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914184570s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.756907463s) [3,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.039062500s@ mbc={}] start_peering_interval up [2,3,4] -> [3,1,2], acting [2,3,4] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631725311s) [5,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914062500s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,0], acting [4,5,3] -> [5,1,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.632796288s) [3,4,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915039062s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774639130s) [4,3,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057006836s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,2], acting [3,4,5] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631699562s) [5,1,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914062500s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.6( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.756800652s) [3,1,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.039062500s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.c( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774639130s) [4,3,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.057006836s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774272919s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.056884766s@ mbc={}] start_peering_interval up [3,4,5] -> [4,2,3], acting [3,4,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.d( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774272919s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.056884766s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.757686615s) [3,4,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.040405273s@ mbc={}] start_peering_interval up [2,3,4] -> [3,4,2], acting [2,3,4] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760393143s) [5,0,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.043212891s@ mbc={}] start_peering_interval up [2,3,4] -> [5,0,4], acting [2,3,4] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631577492s) [2,0,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914428711s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.b( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760374069s) [5,0,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.043212891s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774301529s) [1,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057006836s@ mbc={}] start_peering_interval up [3,4,5] -> [1,0,2], acting [3,4,5] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.a( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774095535s) [1,0,2] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057006836s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.c( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.757373810s) [3,4,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.040405273s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.631469727s) [2,0,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914428711s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630979538s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914306641s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630958557s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914306641s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630856514s) [3,2,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914184570s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630836487s) [3,2,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914184570s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773843765s) [0,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057250977s@ mbc={}] start_peering_interval up [3,4,5] -> [0,1,5], acting [3,4,5] -> [0,1,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.b( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773828506s) [0,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057250977s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.762653351s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046020508s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774485588s) [5,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.058227539s@ mbc={}] start_peering_interval up [3,4,5] -> [5,4,3], acting [3,4,5] -> [5,4,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.15( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.762653351s) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.046020508s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.8( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774389267s) [5,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.058227539s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774335861s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.058227539s@ mbc={}] start_peering_interval up [3,4,5] -> [5,0,1], acting [3,4,5] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.9( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.774293900s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.058227539s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630635262s) [2,4,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914428711s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.16( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773723602s) [0,4,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057861328s@ mbc={}] start_peering_interval up [3,4,5] -> [0,4,5], acting [3,4,5] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.16( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773691177s) [0,4,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057861328s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.17( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773790359s) [3,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.058105469s@ mbc={}] start_peering_interval up [3,4,5] -> [3,1,5], acting [3,4,5] -> [3,1,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.761653900s) [3,5,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.046020508s@ mbc={}] start_peering_interval up [2,3,4] -> [3,5,4], acting [2,3,4] -> [3,5,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.630414963s) [2,4,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914428711s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.17( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.761591911s) [3,5,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.046020508s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629897118s) [4,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914428711s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,2], acting [4,5,3] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.a( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629917145s) [2,0,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914550781s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,4], acting [4,5,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629857063s) [5,3,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914550781s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,1], acting [4,5,3] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629890442s) [2,0,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914550781s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629816055s) [5,3,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914550781s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629897118s) [4,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914428711s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.17( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773769379s) [3,1,5] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.058105469s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760544777s) [4,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045532227s@ mbc={}] start_peering_interval up [2,3,4] -> [4,5,0], acting [2,3,4] -> [4,5,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.773128510s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057983398s@ mbc={}] start_peering_interval up [3,4,5] -> [5,0,1], acting [3,4,5] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.10( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760544777s) [4,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.045532227s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.14( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772986412s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057983398s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629462242s) [2,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914672852s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,3], acting [4,5,3] -> [2,4,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629435539s) [2,4,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914672852s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.15( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772093773s) [5,3,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057373047s@ mbc={}] start_peering_interval up [3,4,5] -> [5,3,1], acting [3,4,5] -> [5,3,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.15( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772067070s) [5,3,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057373047s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772439003s) [0,5,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057861328s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,4], acting [3,4,5] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.12( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772418022s) [0,5,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057861328s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760279655s) [3,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045898438s@ mbc={}] start_peering_interval up [2,3,4] -> [3,2,4], acting [2,3,4] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760244370s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.045898438s@ mbc={}] start_peering_interval up [2,3,4] -> [5,0,1], acting [2,3,4] -> [5,0,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.13( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760226250s) [5,0,1] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.045898438s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.14( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.760129929s) [3,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.045898438s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629001617s) [4,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914794922s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,0], acting [4,5,3] -> [4,2,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772271156s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.058105469s@ mbc={}] start_peering_interval up [3,4,5] -> [4,2,3], acting [3,4,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.629001617s) [4,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1122.914794922s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.13( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.772271156s) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown pruub 1125.058105469s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628746986s) [5,0,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914672852s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,4], acting [4,5,3] -> [5,0,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.758749962s) [5,1,3] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.044799805s@ mbc={}] start_peering_interval up [2,3,4] -> [5,1,3], acting [2,3,4] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628642082s) [5,0,4] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914672852s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628845215s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914916992s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[5.12( empty local-lis/les=35/36 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.758734703s) [5,1,3] r=-1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.044799805s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771629333s) [3,4,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.058105469s@ mbc={}] start_peering_interval up [3,4,5] -> [3,4,2], acting [3,4,5] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771441460s) [3,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057983398s@ mbc={}] start_peering_interval up [3,4,5] -> [3,2,4], acting [3,4,5] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.11( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771579742s) [3,4,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.058105469s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628723145s) [1,2,0] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914916992s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.10( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771390915s) [3,2,4] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057983398s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628582954s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915039062s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628530502s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.915161133s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628270149s) [5,1,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915039062s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771138191s) [0,4,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.057983398s@ mbc={}] start_peering_interval up [3,4,5] -> [0,4,5], acting [3,4,5] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.18( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.626810074s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.913696289s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,1], acting [5,4,0] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.628399849s) [1,5,3] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.915161133s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[3.18( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.626704216s) [3,2,1] r=-1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.913696289s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.627779961s) [3,4,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active pruub 1122.914794922s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,2], acting [4,5,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.770967484s) [2,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active pruub 1125.058105469s@ mbc={}] start_peering_interval up [3,4,5] -> [2,4,3], acting [3,4,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[2.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37 pruub=9.627736092s) [3,4,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.914794922s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1f( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.770787239s) [2,4,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.058105469s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[4.1e( empty local-lis/les=35/36 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37 pruub=11.771070480s) [0,4,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1125.057983398s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.19( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.1f( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [0,1,5] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.1d( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,1,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.4( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,5,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.3( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.4( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.6( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,1,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.4( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.8( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [2,0,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.2( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,5,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.19( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [2,3,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.19( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [0,1,2] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.c( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,0,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 37 pg[6.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [0,4,2] r=1 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.1e( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,2] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.1c( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,1,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.b( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [0,1,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.18( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [3,2,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.2( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [2,1,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.17( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [3,1,5] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.5( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,0,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.1( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [2,1,0] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.a( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,3,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.15( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [2,1,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.1d( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [2,1,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[3.9( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,1,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.b( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,1,0] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.d( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,1,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.9( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,0,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.14( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,0,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.12( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,3,1] r=2 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[4.15( empty local-lis/les=0/0 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,3,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.13( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,0,1] r=2 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[5.12( empty local-lis/les=0/0 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [5,1,3] r=1 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.d( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 37 pg[2.18( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [5,1,3] r=1 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[2.2( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,0,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.18( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[4.a( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[5.1b( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[2.8( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[5.11( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,2,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.2( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,0,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[2.16( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[2.7( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,2,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[5.16( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,3,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.c( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,2] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.f( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.13( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,2,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[2.14( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,2,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[2.11( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,3,2] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.1c( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,2,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.1a( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.18( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.1b( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[2.1a( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[4.5( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,0] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[4.e( empty local-lis/les=37/38 n=0 ec=35/25 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.1( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,0,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.7( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[2.3( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [4,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[5.9( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [1,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[2.17( empty local-lis/les=37/38 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,5,3] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=37) [1,3,5] r=0 lpr=37 pi=[33,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.10( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,5,0] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.1f( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,5,3] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 38 pg[5.15( empty local-lis/les=37/38 n=0 ec=35/27 lis/c=35/35 les/c/f=36/36/0 sis=37) [4,3,5] r=0 lpr=37 pi=[35,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:51 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 39 pg[7.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1,5,3] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:05:51 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 02 08:05:52 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1,5,3] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:05:55 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 02 08:05:55 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 02 08:05:55 np0005541914.localdomain sudo[55706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:05:55 np0005541914.localdomain sudo[55706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:05:55 np0005541914.localdomain sudo[55706]: pam_unix(sudo:session): session closed for user root
Dec 02 08:05:55 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 02 08:05:56 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 02 08:05:56 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 02 08:05:56 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 02 08:05:57 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 02 08:05:57 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 02 08:05:57 np0005541914.localdomain sudo[55721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:05:57 np0005541914.localdomain sudo[55721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:05:57 np0005541914.localdomain sudo[55721]: pam_unix(sudo:session): session closed for user root
Dec 02 08:05:58 np0005541914.localdomain sudo[55736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:05:58 np0005541914.localdomain sudo[55736]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:05:58 np0005541914.localdomain sudo[55736]: pam_unix(sudo:session): session closed for user root
Dec 02 08:05:58 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 02 08:05:58 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 02 08:05:59 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 02 08:05:59 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 02 08:06:00 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 02 08:06:02 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 02 08:06:02 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 02 08:06:04 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 02 08:06:04 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 02 08:06:08 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Dec 02 08:06:08 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Dec 02 08:06:09 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Dec 02 08:06:09 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Dec 02 08:06:10 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 02 08:06:10 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 02 08:06:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:06:11 np0005541914.localdomain systemd[1]: tmp-crun.rVPRaY.mount: Deactivated successfully.
Dec 02 08:06:11 np0005541914.localdomain podman[55751]: 2025-12-02 08:06:11.057818244 +0000 UTC m=+0.062168358 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true)
Dec 02 08:06:11 np0005541914.localdomain podman[55751]: 2025-12-02 08:06:11.228879956 +0000 UTC m=+0.233230080 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, distribution-scope=public, release=1761123044, container_name=metrics_qdr, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Dec 02 08:06:11 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:06:11 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 02 08:06:11 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 02 08:06:13 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 02 08:06:13 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 02 08:06:14 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 02 08:06:14 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 02 08:06:15 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 02 08:06:15 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 02 08:06:15 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 02 08:06:15 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 02 08:06:16 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 02 08:06:16 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 02 08:06:19 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 02 08:06:19 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 02 08:06:20 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Dec 02 08:06:20 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Dec 02 08:06:21 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.16 deep-scrub starts
Dec 02 08:06:21 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.16 deep-scrub ok
Dec 02 08:06:22 np0005541914.localdomain sshd[55780]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:06:23 np0005541914.localdomain sshd[55780]: Invalid user tv from 103.52.115.25 port 33618
Dec 02 08:06:23 np0005541914.localdomain sshd[55780]: Received disconnect from 103.52.115.25 port 33618:11: Bye Bye [preauth]
Dec 02 08:06:23 np0005541914.localdomain sshd[55780]: Disconnected from invalid user tv 103.52.115.25 port 33618 [preauth]
Dec 02 08:06:24 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 02 08:06:24 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 02 08:06:25 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 02 08:06:25 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 02 08:06:26 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.13 deep-scrub starts
Dec 02 08:06:26 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.13 deep-scrub ok
Dec 02 08:06:26 np0005541914.localdomain sudo[55795]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iyxfxiighrzmbvzoqexgxuplmioyyuvu ; /usr/bin/python3
Dec 02 08:06:26 np0005541914.localdomain sudo[55795]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:27 np0005541914.localdomain python3[55797]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:06:27 np0005541914.localdomain sudo[55795]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:28 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 02 08:06:28 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 02 08:06:28 np0005541914.localdomain sudo[55811]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-toukoexuvgoncgrcaaklkerkxznceyaz ; /usr/bin/python3
Dec 02 08:06:28 np0005541914.localdomain sudo[55811]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:28 np0005541914.localdomain python3[55813]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:06:28 np0005541914.localdomain sudo[55811]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:29 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 02 08:06:30 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 02 08:06:30 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 02 08:06:30 np0005541914.localdomain sudo[55827]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oeqvrpfcuirtfoflzbxmbvmabobobjbi ; /usr/bin/python3
Dec 02 08:06:30 np0005541914.localdomain sudo[55827]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:30 np0005541914.localdomain python3[55829]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:06:30 np0005541914.localdomain sudo[55827]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:31 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 02 08:06:31 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 02 08:06:32 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 02 08:06:32 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 02 08:06:33 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.1a deep-scrub starts
Dec 02 08:06:33 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 2.1a deep-scrub ok
Dec 02 08:06:33 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 02 08:06:33 np0005541914.localdomain sudo[55875]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xbddmygunjmgtntpnznyqlzaoelvscan ; /usr/bin/python3
Dec 02 08:06:33 np0005541914.localdomain sudo[55875]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:33 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 02 08:06:34 np0005541914.localdomain python3[55877]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:06:34 np0005541914.localdomain sudo[55875]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:34 np0005541914.localdomain sudo[55918]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iqmkderprvultzdstbkajyaqppeiwbui ; /usr/bin/python3
Dec 02 08:06:34 np0005541914.localdomain sudo[55918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:34 np0005541914.localdomain python3[55920]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662793.7121615-91322-6419718888736/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=55e6802793866e8195bd7dc6c06395cc4184e741 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:06:34 np0005541914.localdomain sudo[55918]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:34 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 02 08:06:34 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 02 08:06:35 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 02 08:06:35 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 02 08:06:37 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 02 08:06:37 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 02 08:06:38 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 02 08:06:38 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 02 08:06:38 np0005541914.localdomain sudo[55980]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xjaofhxkmuxpyjsruknixzfyrplabota ; /usr/bin/python3
Dec 02 08:06:38 np0005541914.localdomain sudo[55980]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:39 np0005541914.localdomain python3[55982]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:06:39 np0005541914.localdomain sudo[55980]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:39 np0005541914.localdomain sudo[56023]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqkasfrcpbihwrkwyxwufatjvyzawplg ; /usr/bin/python3
Dec 02 08:06:39 np0005541914.localdomain sudo[56023]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:39 np0005541914.localdomain python3[56025]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662798.8278913-91322-73826038523956/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=32e95cb48a0c881d4099e3645e940da5c77bc88c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:06:39 np0005541914.localdomain sudo[56023]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:40 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 02 08:06:40 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 02 08:06:41 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 02 08:06:41 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 02 08:06:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:06:42 np0005541914.localdomain systemd[1]: tmp-crun.1iQotj.mount: Deactivated successfully.
Dec 02 08:06:42 np0005541914.localdomain podman[56040]: 2025-12-02 08:06:42.080009591 +0000 UTC m=+0.087045189 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, release=1761123044, version=17.1.12, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Dec 02 08:06:42 np0005541914.localdomain podman[56040]: 2025-12-02 08:06:42.29776343 +0000 UTC m=+0.304798988 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 02 08:06:42 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:06:42 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 02 08:06:42 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 02 08:06:43 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 02 08:06:43 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 02 08:06:44 np0005541914.localdomain sudo[56115]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htrydjfxtydnsqcnjwklhamkpzuobwff ; /usr/bin/python3
Dec 02 08:06:44 np0005541914.localdomain sudo[56115]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:44 np0005541914.localdomain python3[56117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:06:44 np0005541914.localdomain sudo[56115]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:44 np0005541914.localdomain sudo[56158]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yxffkklihkxnjmhyloujwhdiewqtesvb ; /usr/bin/python3
Dec 02 08:06:44 np0005541914.localdomain sudo[56158]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:44 np0005541914.localdomain python3[56160]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662803.8778832-91322-249950130745560/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=ed42d7e7572ec51630a216299b8e7374862502cf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:06:44 np0005541914.localdomain sudo[56158]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:45 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 45 pg[7.0( v 42'39 (0'0,42'39] local-lis/les=39/40 n=22 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=11.660870552s) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 42'38 mlcod 42'38 active pruub 1185.561279297s@ mbc={}] start_peering_interval up [1,5,3] -> [1,5,3], acting [1,5,3] -> [1,5,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:45 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 45 pg[7.0( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=11.660870552s) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 42'38 mlcod 0'0 unknown pruub 1185.561279297s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:45 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 45 pg[6.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=8.961356163s) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 active pruub 1178.329833984s@ mbc={}] start_peering_interval up [0,4,2] -> [0,4,2], acting [0,4,2] -> [0,4,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:45 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 45 pg[6.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=45 pruub=8.957962990s) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1178.329833984s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.d( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.c( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.9( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.b( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.6( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.7( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.1( v 42'39 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1b( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.19( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1e( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1f( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.d( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.18( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.c( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.7( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1a( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.6( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.4( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.a( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.8( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.2( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.f( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.3( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.e( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.5( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=39/40 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.3( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.2( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.5( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.4( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.e( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.f( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.8( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.9( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.a( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.b( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.14( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.17( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.15( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.16( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.10( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.11( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.13( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1c( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.1d( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 46 pg[6.12( empty local-lis/les=37/38 n=0 ec=45/37 lis/c=37/37 les/c/f=38/38/0 sis=45) [0,4,2] r=1 lpr=45 pi=[37,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.0( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 42'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.1( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.2( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.4( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.8( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 46 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=39/39 les/c/f=40/40/0 sis=45) [1,5,3] r=0 lpr=45 pi=[39,45)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 02 08:06:46 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 02 08:06:47 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts
Dec 02 08:06:47 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok
Dec 02 08:06:48 np0005541914.localdomain sudo[56220]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jbfsocafguwuhdbpwvrlasngndyfriyc ; /usr/bin/python3
Dec 02 08:06:48 np0005541914.localdomain sudo[56220]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:48 np0005541914.localdomain python3[56222]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:06:48 np0005541914.localdomain sudo[56220]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:48 np0005541914.localdomain sudo[56265]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fuvtosdeivezrrqkrvyoaexfuneazhyu ; /usr/bin/python3
Dec 02 08:06:48 np0005541914.localdomain sudo[56265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:49 np0005541914.localdomain python3[56267]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662808.4753876-91678-138402771932315/source _original_basename=tmp3eiuoydn follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:06:49 np0005541914.localdomain sudo[56265]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.b( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.9( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.772628784s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.978027344s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.772575378s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.978027344s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.771993637s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.977416992s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.771928787s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.977416992s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.771564484s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.977172852s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.771483421s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.977172852s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.5( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.3( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.770465851s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.976928711s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.7( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.1( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.769610405s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.976318359s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.770363808s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.976928711s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.1( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.769525528s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.976318359s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.1( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[7.d( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.769467354s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.976684570s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.770059586s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.977416992s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.769384384s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.976684570s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.770017624s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.977416992s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.770520210s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.978149414s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.770452499s) [4,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.978149414s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.780364037s) [5,1,0] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442993164s@ mbc={}] start_peering_interval up [0,4,2] -> [5,1,0], acting [0,4,2] -> [5,1,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.780282974s) [5,1,0] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.442993164s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.18( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779023170s) [0,1,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443237305s@ mbc={}] start_peering_interval up [0,4,2] -> [0,1,2], acting [0,4,2] -> [0,1,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.18( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778983116s) [0,1,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443237305s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.780378342s) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444580078s@ mbc={}] start_peering_interval up [0,4,2] -> [4,2,0], acting [0,4,2] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.19( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778674126s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442993164s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.780378342s) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.444580078s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.780510902s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444824219s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.780442238s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.444824219s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1e( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778202057s) [5,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442626953s@ mbc={}] start_peering_interval up [0,4,2] -> [5,1,3], acting [0,4,2] -> [5,1,3], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.19( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778571129s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.442993164s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1e( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778161049s) [5,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.442626953s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.d( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779086113s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443603516s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.d( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779058456s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443603516s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778654099s) [2,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443481445s@ mbc={}] start_peering_interval up [0,4,2] -> [2,1,3], acting [0,4,2] -> [2,1,3], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778604507s) [2,1,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443481445s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.c( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779118538s) [3,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443847656s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,5], acting [0,4,2] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.7( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779795647s) [4,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444702148s@ mbc={}] start_peering_interval up [0,4,2] -> [4,3,2], acting [0,4,2] -> [4,3,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.7( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779795647s) [4,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.444702148s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.6( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778046608s) [3,4,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442993164s@ mbc={}] start_peering_interval up [0,4,2] -> [3,4,5], acting [0,4,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.c( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778909683s) [3,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443847656s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.6( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778006554s) [3,4,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.442993164s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.3( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778153419s) [4,5,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443237305s@ mbc={}] start_peering_interval up [0,4,2] -> [4,5,0], acting [0,4,2] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.19( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.3( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778153419s) [4,5,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.443237305s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.2( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778456688s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443847656s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.5( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776854515s) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442260742s@ mbc={}] start_peering_interval up [0,4,2] -> [4,2,0], acting [0,4,2] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.4( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779340744s) [3,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444824219s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,5], acting [0,4,2] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.5( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776854515s) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.442260742s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.4( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779304504s) [3,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.444824219s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.e( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779032707s) [4,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444580078s@ mbc={}] start_peering_interval up [0,4,2] -> [4,3,2], acting [0,4,2] -> [4,3,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.e( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.779032707s) [4,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.444580078s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776348114s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442138672s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.f( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776301384s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.442138672s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.9( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777090073s) [0,2,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442871094s@ mbc={}] start_peering_interval up [0,4,2] -> [0,2,4], acting [0,4,2] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.8( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777280807s) [1,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443115234s@ mbc={}] start_peering_interval up [0,4,2] -> [1,2,3], acting [0,4,2] -> [1,2,3], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.9( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777069092s) [0,2,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.442871094s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.8( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777251244s) [1,2,3] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443115234s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776212692s) [4,0,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442138672s@ mbc={}] start_peering_interval up [0,4,2] -> [4,0,2], acting [0,4,2] -> [4,0,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.a( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776212692s) [4,0,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.442138672s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.14( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778293610s) [3,4,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444335938s@ mbc={}] start_peering_interval up [0,4,2] -> [3,4,5], acting [0,4,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.14( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778268814s) [3,4,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.444335938s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776980400s) [3,1,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443115234s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,2], acting [0,4,2] -> [3,1,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.15( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778181076s) [4,5,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444458008s@ mbc={}] start_peering_interval up [0,4,2] -> [4,5,0], acting [0,4,2] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.16( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778771400s) [0,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444946289s@ mbc={}] start_peering_interval up [0,4,2] -> [0,1,5], acting [0,4,2] -> [0,1,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.2( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778391838s) [1,3,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443847656s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.16( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778753281s) [0,1,5] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.444946289s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.15( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778181076s) [4,5,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown pruub 1186.444458008s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.b( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776881218s) [3,1,2] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443115234s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.17( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778571129s) [5,0,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444824219s@ mbc={}] start_peering_interval up [0,4,2] -> [5,0,1], acting [0,4,2] -> [5,0,1], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.10( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777273178s) [0,2,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443725586s@ mbc={}] start_peering_interval up [0,4,2] -> [0,2,4], acting [0,4,2] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.10( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777256966s) [0,2,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443725586s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.17( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.778537750s) [5,0,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.444824219s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.11( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777244568s) [3,5,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443847656s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,4], acting [0,4,2] -> [3,5,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.13( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776783943s) [3,2,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.443481445s@ mbc={}] start_peering_interval up [0,4,2] -> [3,2,1], acting [0,4,2] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.12( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777533531s) [5,4,0] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.444091797s@ mbc={}] start_peering_interval up [0,4,2] -> [5,4,0], acting [0,4,2] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.13( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776764870s) [3,2,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443481445s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1c( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.774728775s) [5,3,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.441406250s@ mbc={}] start_peering_interval up [0,4,2] -> [5,3,4], acting [0,4,2] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1d( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776144981s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active pruub 1186.442871094s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1c( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.774652481s) [5,3,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.441406250s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.1d( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.776121140s) [3,5,1] r=-1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.442871094s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.12( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777493477s) [5,4,0] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.444091797s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 47 pg[6.11( empty local-lis/les=45/46 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47 pruub=12.777163506s) [3,5,4] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.443847656s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:49 np0005541914.localdomain sshd[56282]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:06:50 np0005541914.localdomain sshd[56282]: Invalid user sol from 45.148.10.240 port 42250
Dec 02 08:06:50 np0005541914.localdomain sudo[56329]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewqcfpoujlilzccmkbtdmppnxufekrhq ; /usr/bin/python3
Dec 02 08:06:50 np0005541914.localdomain sudo[56329]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:50 np0005541914.localdomain sshd[56282]: Connection closed by invalid user sol 45.148.10.240 port 42250 [preauth]
Dec 02 08:06:50 np0005541914.localdomain python3[56331]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:06:50 np0005541914.localdomain sudo[56329]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:50 np0005541914.localdomain sudo[56372]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vvilxxeqvsxquzmitvyciugtbuyqukmz ; /usr/bin/python3
Dec 02 08:06:50 np0005541914.localdomain sudo[56372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.1( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [2,1,3] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.1e( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,1,3] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.1b( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,1,0] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.17( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [5,0,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.18( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,1,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.16( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [0,1,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,5] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.13( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,2,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.1d( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,1,2] r=1 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 47 pg[6.1f( empty local-lis/les=0/0 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [3,5,1] r=2 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.15( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,5,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 48 pg[6.19( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 48 pg[6.2( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 48 pg[6.8( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,2,3] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 48 pg[6.d( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [1,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.e( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.f( v 42'39 lc 42'1 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(1+2)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.b( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.3( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,5,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.5( v 42'39 lc 42'11 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.3( v 42'39 lc 0'0 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.1( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.5( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.7( v 42'39 lc 42'21 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.7( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,3,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.1a( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,0] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[6.a( empty local-lis/les=47/48 n=0 ec=45/37 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,0,2] r=0 lpr=47 pi=[45,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 48 pg[7.d( v 42'39 lc 42'13 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=47) [4,2,3] r=0 lpr=47 pi=[45,47)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:06:50 np0005541914.localdomain python3[56374]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662809.9636912-91763-237988596539909/source _original_basename=tmpk44yopjf follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:06:50 np0005541914.localdomain sudo[56372]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:50 np0005541914.localdomain sudo[56402]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qpujvfnkextzcdzsuxwyjjuayumfpbmi ; /usr/bin/python3
Dec 02 08:06:50 np0005541914.localdomain sudo[56402]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:51 np0005541914.localdomain python3[56404]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None
Dec 02 08:06:51 np0005541914.localdomain crontab[56405]: (root) LIST (root)
Dec 02 08:06:51 np0005541914.localdomain crontab[56406]: (root) REPLACE (root)
Dec 02 08:06:51 np0005541914.localdomain sudo[56402]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:51 np0005541914.localdomain sudo[56420]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mkcymaqhgbpumelloejrihoxmnadzmyp ; /usr/bin/python3
Dec 02 08:06:51 np0005541914.localdomain sudo[56420]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:51 np0005541914.localdomain python3[56422]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:06:51 np0005541914.localdomain sudo[56420]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:51 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.709970474s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.976318359s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:51 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.709904671s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.976318359s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:51 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.710413933s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.976806641s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:51 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.710338593s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.976806641s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:51 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.2( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.710016251s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.976684570s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:51 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.2( v 42'39 (0'0,42'39] local-lis/les=45/46 n=2 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.709935188s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.976684570s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:51 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.710141182s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1190.977416992s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:51 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 49 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=10.710083961s) [3,5,1] r=2 lpr=49 pi=[45,49)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1190.977416992s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:51 np0005541914.localdomain sudo[56470]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hzuvictbdsivknkhjytxpjhcgvvovikf ; /usr/bin/python3
Dec 02 08:06:51 np0005541914.localdomain sudo[56470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:52 np0005541914.localdomain sudo[56470]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:52 np0005541914.localdomain sudo[56488]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rwzuusavpxouhrridcewbsufcuvccgbs ; /usr/bin/python3
Dec 02 08:06:52 np0005541914.localdomain sudo[56488]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:52 np0005541914.localdomain sudo[56488]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:52 np0005541914.localdomain sudo[56592]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cupbwknqmgtkyoddpazzlaecegjuqjiq ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662812.470347-91852-144939483418995/async_wrapper.py 582142990844 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662812.470347-91852-144939483418995/AnsiballZ_command.py _
Dec 02 08:06:52 np0005541914.localdomain sudo[56592]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 02 08:06:53 np0005541914.localdomain ansible-async_wrapper.py[56594]: Invoked with 582142990844 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764662812.470347-91852-144939483418995/AnsiballZ_command.py _
Dec 02 08:06:53 np0005541914.localdomain ansible-async_wrapper.py[56597]: Starting module and watcher
Dec 02 08:06:53 np0005541914.localdomain ansible-async_wrapper.py[56597]: Start watching 56598 (3600)
Dec 02 08:06:53 np0005541914.localdomain ansible-async_wrapper.py[56598]: Start module (56598)
Dec 02 08:06:53 np0005541914.localdomain ansible-async_wrapper.py[56594]: Return async_wrapper task started.
Dec 02 08:06:53 np0005541914.localdomain sudo[56592]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:53 np0005541914.localdomain sudo[56613]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zanmankwzsjffgzpwmqxvjqnhqjjuggq ; /usr/bin/python3
Dec 02 08:06:53 np0005541914.localdomain sudo[56613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:06:53 np0005541914.localdomain python3[56618]: ansible-ansible.legacy.async_status Invoked with jid=582142990844.56594 mode=status _async_dir=/tmp/.ansible_async
Dec 02 08:06:53 np0005541914.localdomain sudo[56613]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:56 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:    (file: /etc/puppet/hiera.yaml)
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]: Warning: Undefined variable '::deploy_config_name';
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:    (file & line not available)
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:    (file & line not available)
Dec 02 08:06:56 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.12 seconds
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]: Notice: Applied catalog in 0.03 seconds
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]: Application:
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:    Initial environment: production
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:    Converged environment: production
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:          Run mode: user
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]: Changes:
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]: Events:
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]: Resources:
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:             Total: 10
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]: Time:
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:          Schedule: 0.00
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:              File: 0.00
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:              Exec: 0.01
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:            Augeas: 0.01
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:    Transaction evaluation: 0.03
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:    Catalog application: 0.03
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:    Config retrieval: 0.20
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:          Last run: 1764662816
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:        Filebucket: 0.00
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:             Total: 0.04
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]: Version:
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:            Config: 1764662816
Dec 02 08:06:56 np0005541914.localdomain puppet-user[56617]:            Puppet: 7.10.0
Dec 02 08:06:56 np0005541914.localdomain ansible-async_wrapper.py[56598]: Module complete (56598)
Dec 02 08:06:58 np0005541914.localdomain ansible-async_wrapper.py[56597]: Done in kid B.
Dec 02 08:06:58 np0005541914.localdomain sudo[56730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:06:58 np0005541914.localdomain sudo[56730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:06:58 np0005541914.localdomain sudo[56730]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:58 np0005541914.localdomain sudo[56745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 08:06:58 np0005541914.localdomain sudo[56745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:06:59 np0005541914.localdomain sudo[56745]: pam_unix(sudo:session): session closed for user root
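The check-host run above is the cephadm binary that the Ceph orchestrator ships to /var/lib/ceph/<fsid>/ and executes with sudo as the ceph-admin user. A minimal sketch of repeating that exact invocation by hand, assuming the fsid, script digest, and --timeout value copied verbatim from the COMMAND= field (they will differ on any other cluster):

    #!/usr/bin/env python3
    # Hedged sketch: replay the cephadm check-host call logged above.
    # Paths and the timeout are taken from the COMMAND= field; check-host
    # only inspects the host (container engine, systemd, time sync,
    # hostname), it does not modify anything. Root is required, hence sudo.
    import subprocess

    FSID = "c7c8e171-a193-56fb-95fa-8879fcfa7074"
    CEPHADM = (
        f"/var/lib/ceph/{FSID}/"
        "cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3"
    )

    result = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "check-host"],
        capture_output=True, text=True,
    )
    print(result.returncode)
    print(result.stdout or result.stderr)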
Dec 02 08:06:59 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51 pruub=15.040720940s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 active pruub 1198.737792969s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:59 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.3( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51 pruub=15.040622711s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1198.737792969s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:59 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51 pruub=15.033008575s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 active pruub 1198.730224609s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:59 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/49/0 sis=51 pruub=15.032918930s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1198.730224609s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:59 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.041465759s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 active pruub 1198.738403320s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:59 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=51 pruub=15.040790558s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1198.738403320s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:59 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=15.032548904s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 active pruub 1198.730224609s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:06:59 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 51 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=51 pruub=15.032447815s) [3,4,2] r=1 lpr=51 pi=[47,51)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1198.730224609s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:06:59 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 02 08:06:59 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 02 08:06:59 np0005541914.localdomain sudo[56781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:06:59 np0005541914.localdomain sudo[56781]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:06:59 np0005541914.localdomain sudo[56781]: pam_unix(sudo:session): session closed for user root
Dec 02 08:06:59 np0005541914.localdomain sudo[56796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:06:59 np0005541914.localdomain sudo[56796]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:07:00 np0005541914.localdomain sudo[56796]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:00 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 02 08:07:00 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 02 08:07:01 np0005541914.localdomain sudo[56843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:07:01 np0005541914.localdomain sudo[56843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:07:01 np0005541914.localdomain sudo[56843]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:01 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 02 08:07:01 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 02 08:07:01 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 02 08:07:01 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 53 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=8.619408607s) [0,1,2] r=1 lpr=53 pi=[45,53)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1198.976440430s@ mbc={}] start_peering_interval up [1,5,3] -> [0,1,2], acting [1,5,3] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:01 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 53 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=45/46 n=1 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=8.619290352s) [0,1,2] r=1 lpr=53 pi=[45,53)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1198.976440430s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:01 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 53 pg[7.4( v 42'39 (0'0,42'39] local-lis/les=45/46 n=4 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=8.619512558s) [0,1,2] r=1 lpr=53 pi=[45,53)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1198.977294922s@ mbc={}] start_peering_interval up [1,5,3] -> [0,1,2], acting [1,5,3] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:01 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 53 pg[7.4( v 42'39 (0'0,42'39] local-lis/les=45/46 n=4 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=8.619264603s) [0,1,2] r=1 lpr=53 pi=[45,53)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1198.977294922s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:01 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 02 08:07:02 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 02 08:07:02 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 02 08:07:02 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Dec 02 08:07:02 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Dec 02 08:07:03 np0005541914.localdomain sudo[56871]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kztvmzcrrgbjnfolgamibrfjyxflresu ; /usr/bin/python3
Dec 02 08:07:03 np0005541914.localdomain sudo[56871]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:03 np0005541914.localdomain python3[56873]: ansible-ansible.legacy.async_status Invoked with jid=582142990844.56594 mode=status _async_dir=/tmp/.ansible_async
Dec 02 08:07:03 np0005541914.localdomain sudo[56871]: pam_unix(sudo:session): session closed for user root
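The async_status call above checks on the long-running job started earlier by ansible-async_wrapper.py; the wrapper records the job's state in a JSON file under _async_dir named after the jid. A rough sketch of the same status check done by hand, assuming the jid and directory shown in the log line (and assuming, as async_status itself does, that a result containing the module output counts as finished):

    #!/usr/bin/env python3
    # Hedged sketch: read the Ansible async status file directly instead of
    # going through the async_status module. The jid and directory are the
    # ones logged above; the file layout ("started"/"finished" flags, then
    # the wrapped module's result) is an assumption based on async_wrapper's
    # behaviour, not something shown in this log.
    import json
    from pathlib import Path

    ASYNC_DIR = Path("/tmp/.ansible_async")
    JID = "582142990844.56594"

    data = json.loads((ASYNC_DIR / JID).read_text())

    if data.get("finished") or "rc" in data:
        print("job finished:", {k: data.get(k) for k in ("rc", "changed")})
    else:
        print("job still running:", data)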
Dec 02 08:07:04 np0005541914.localdomain sudo[56887]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mseqqpskdbaafnqsyaaesyzmufpaugnc ; /usr/bin/python3
Dec 02 08:07:04 np0005541914.localdomain sudo[56887]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:04 np0005541914.localdomain python3[56889]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 08:07:04 np0005541914.localdomain sudo[56887]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:04 np0005541914.localdomain sudo[56903]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rvioxpxmjsixthxdaauisgpulxkxxdqu ; /usr/bin/python3
Dec 02 08:07:04 np0005541914.localdomain sudo[56903]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:04 np0005541914.localdomain python3[56905]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:07:04 np0005541914.localdomain sudo[56903]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:05 np0005541914.localdomain sudo[56953]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eyhrdroupnnxrmpfbcogsedfzfhzvmlg ; /usr/bin/python3
Dec 02 08:07:05 np0005541914.localdomain sudo[56953]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:05 np0005541914.localdomain python3[56955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:07:05 np0005541914.localdomain sudo[56953]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:05 np0005541914.localdomain sudo[56971]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nssyhawribaatypdakyqdwozekhrnvnz ; /usr/bin/python3
Dec 02 08:07:05 np0005541914.localdomain sudo[56971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:05 np0005541914.localdomain python3[56973]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp1qy3p7o6 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 08:07:05 np0005541914.localdomain sudo[56971]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:05 np0005541914.localdomain sudo[57001]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tipdgbzhaghsybsmkzvjpaeidksgymcb ; /usr/bin/python3
Dec 02 08:07:05 np0005541914.localdomain sudo[57001]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:05 np0005541914.localdomain python3[57003]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:07:05 np0005541914.localdomain sudo[57001]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:06 np0005541914.localdomain sudo[57017]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wobtdaeneunolcvgcxgmosavepbmzwit ; /usr/bin/python3
Dec 02 08:07:06 np0005541914.localdomain sudo[57017]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:06 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Dec 02 08:07:06 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Dec 02 08:07:06 np0005541914.localdomain sudo[57017]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:07 np0005541914.localdomain sudo[57105]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rwbtynenkqrnphjferdbunhvhshdfgek ; /usr/bin/python3
Dec 02 08:07:07 np0005541914.localdomain sudo[57105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:07 np0005541914.localdomain python3[57107]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 02 08:07:07 np0005541914.localdomain sudo[57105]: pam_unix(sudo:session): session closed for user root
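ansible.posix.synchronize is a thin front end for rsync; with archive=True, compress=True, delete=False and mode=push between two paths on the same host, the task above reduces to an rsync -az of /opt/puppetlabs/ into /var/lib/container-puppet/puppetlabs/. A hedged local equivalent:

    #!/usr/bin/env python3
    # Hedged sketch: the local-to-local rsync behind the synchronize task
    # logged above. archive=True maps to -a, compress=True to -z; because
    # delete=False nothing is removed from the destination. The trailing
    # slash on the source copies its contents rather than the directory
    # itself. Both paths are copied from the log line.
    import subprocess

    subprocess.run(
        ["rsync", "-az",
         "/opt/puppetlabs/",
         "/var/lib/container-puppet/puppetlabs/"],
        check=True,
    )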
Dec 02 08:07:07 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 02 08:07:07 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 02 08:07:07 np0005541914.localdomain sudo[57124]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-onaonbcijpdxhblqxoahojxduykzwzqp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:07 np0005541914.localdomain sudo[57124]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:08 np0005541914.localdomain python3[57126]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:07:08 np0005541914.localdomain sudo[57124]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:08 np0005541914.localdomain sudo[57140]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrrumwruzbzwkbmkfrqchcjzaodtpxxe ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:08 np0005541914.localdomain sudo[57140]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:08 np0005541914.localdomain sudo[57140]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:08 np0005541914.localdomain sudo[57156]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pogtcmfrprfokudzzilqgpdmcdvhnuyp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:08 np0005541914.localdomain sudo[57156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:09 np0005541914.localdomain python3[57158]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:07:09 np0005541914.localdomain sudo[57156]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:09 np0005541914.localdomain sudo[57206]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-juvmmcnqjzstrfpugqgmghdjherxfjke ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:09 np0005541914.localdomain sudo[57206]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:09 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 02 08:07:09 np0005541914.localdomain python3[57208]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:07:09 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 55 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=12.940701485s) [2,0,4] r=2 lpr=55 pi=[47,55)/1 crt=42'39 mlcod 0'0 active pruub 1206.738037109s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [2,0,4], acting [4,2,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:09 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 55 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=12.946455002s) [2,0,4] r=2 lpr=55 pi=[47,55)/1 crt=42'39 mlcod 0'0 active pruub 1206.744140625s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [2,0,4], acting [4,2,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:09 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 55 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=47/48 n=1 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=12.946278572s) [2,0,4] r=2 lpr=55 pi=[47,55)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1206.744140625s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:09 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 55 pg[7.5( v 42'39 (0'0,42'39] local-lis/les=47/48 n=2 ec=45/39 lis/c=47/47 les/c/f=48/50/0 sis=55 pruub=12.940092087s) [2,0,4] r=2 lpr=55 pi=[47,55)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1206.738037109s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:09 np0005541914.localdomain sudo[57206]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:09 np0005541914.localdomain sudo[57224]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ichqmohseyzysryklhubicfjbilgpvlc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:09 np0005541914.localdomain sudo[57224]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:09 np0005541914.localdomain python3[57226]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:07:09 np0005541914.localdomain sudo[57224]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:10 np0005541914.localdomain sudo[57286]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jgvmljgrkdocjpwfmlmyaitfxwymdbnx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:10 np0005541914.localdomain sudo[57286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:10 np0005541914.localdomain python3[57288]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:07:10 np0005541914.localdomain sudo[57286]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:10 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 02 08:07:10 np0005541914.localdomain sudo[57304]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vvuhxrzrrrznwbtzzqcdrvueomwgfiql ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:10 np0005541914.localdomain sudo[57304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:10 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 02 08:07:10 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Dec 02 08:07:10 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Dec 02 08:07:10 np0005541914.localdomain python3[57306]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:07:10 np0005541914.localdomain sudo[57304]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:11 np0005541914.localdomain sudo[57366]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vzpcysfkjrlwnostrefsbgibnqkgehhx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:11 np0005541914.localdomain sudo[57366]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:11 np0005541914.localdomain python3[57368]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:07:11 np0005541914.localdomain sudo[57366]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 08:07:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Cumulative writes: 4209 writes, 19K keys, 4209 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4209 writes, 411 syncs, 10.24 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 810 writes, 2804 keys, 810 commit groups, 1.0 writes per commit group, ingest: 1.41 MB, 0.00 MB/s
                                                          Interval WAL: 810 writes, 209 syncs, 3.88 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf57610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 2.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf57610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 2.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf57610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 2.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 02 08:07:11 np0005541914.localdomain sudo[57384]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhdookhozfgewznnfbkegqsjhzugiesf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:11 np0005541914.localdomain sudo[57384]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:11 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 02 08:07:11 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 02 08:07:11 np0005541914.localdomain python3[57386]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:07:11 np0005541914.localdomain sudo[57384]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:11 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 57 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=49/50 n=2 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.983466148s) [0,4,5] r=-1 lpr=57 pi=[49,57)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1213.342407227s@ mbc={}] start_peering_interval up [3,5,1] -> [0,4,5], acting [3,5,1] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:11 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 57 pg[7.6( v 42'39 (0'0,42'39] local-lis/les=49/50 n=2 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.983395576s) [0,4,5] r=-1 lpr=57 pi=[49,57)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1213.342407227s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:11 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 57 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.984255791s) [0,4,5] r=-1 lpr=57 pi=[49,57)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1213.344238281s@ mbc={}] start_peering_interval up [3,5,1] -> [0,4,5], acting [3,5,1] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:11 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 57 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=12.984129906s) [0,4,5] r=-1 lpr=57 pi=[49,57)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1213.344238281s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:11 np0005541914.localdomain sudo[57446]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqonuvquozvkpjyhdysgserwysybmdoo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:11 np0005541914.localdomain sudo[57446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:12 np0005541914.localdomain python3[57448]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:07:12 np0005541914.localdomain sudo[57446]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:12 np0005541914.localdomain sudo[57464]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwwhivvjvetszheoaxwsryhjhpemkbia ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:12 np0005541914.localdomain sudo[57464]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:12 np0005541914.localdomain python3[57466]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:07:12 np0005541914.localdomain sudo[57464]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:12 np0005541914.localdomain sudo[57494]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ncdwdeawqwbaifgizhfovhseqtmontug ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:12 np0005541914.localdomain sudo[57494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:07:12 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 57 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,4,5] r=1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:12 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 57 pg[7.6( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=57) [0,4,5] r=1 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:12 np0005541914.localdomain systemd[1]: tmp-crun.KRKQ2q.mount: Deactivated successfully.
Dec 02 08:07:12 np0005541914.localdomain podman[57497]: 2025-12-02 08:07:12.749297329 +0000 UTC m=+0.110723734 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, tcib_managed=true)
Dec 02 08:07:12 np0005541914.localdomain python3[57496]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:07:12 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:07:12 np0005541914.localdomain systemd-sysv-generator[57556]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:07:12 np0005541914.localdomain systemd-rc-local-generator[57549]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:07:12 np0005541914.localdomain podman[57497]: 2025-12-02 08:07:12.953644573 +0000 UTC m=+0.315070968 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1)
Dec 02 08:07:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:07:13 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:07:13 np0005541914.localdomain sudo[57494]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:13 np0005541914.localdomain sudo[57610]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qpejocyvxyqrqmbqulufgjkyskzhwbws ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:13 np0005541914.localdomain sudo[57610]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:13 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 02 08:07:13 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 02 08:07:13 np0005541914.localdomain python3[57612]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:07:13 np0005541914.localdomain sudo[57610]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:13 np0005541914.localdomain sudo[57628]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hvvhjrmaebsbngabigiqbxomhvmwqtqf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:13 np0005541914.localdomain sudo[57628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:13 np0005541914.localdomain python3[57630]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:07:13 np0005541914.localdomain sudo[57628]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:14 np0005541914.localdomain sudo[57690]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dashogfpvswwhplhiecufazufrczgtrt ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:14 np0005541914.localdomain sudo[57690]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:14 np0005541914.localdomain python3[57692]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:07:14 np0005541914.localdomain sudo[57690]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:14 np0005541914.localdomain sudo[57708]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kluwginjxwkuoyepcuxhtekikhandoew ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:14 np0005541914.localdomain sudo[57708]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:14 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 02 08:07:14 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 02 08:07:14 np0005541914.localdomain python3[57710]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:07:14 np0005541914.localdomain sudo[57708]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:14 np0005541914.localdomain sudo[57738]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ppfdemuhoiqgmfoafmepqqybpuumzsgt ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:14 np0005541914.localdomain sudo[57738]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:15 np0005541914.localdomain python3[57740]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:07:15 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:07:15 np0005541914.localdomain systemd-rc-local-generator[57765]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:07:15 np0005541914.localdomain systemd-sysv-generator[57768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:07:15 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:07:15 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.1a scrub starts
Dec 02 08:07:15 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.1a scrub ok
Dec 02 08:07:15 np0005541914.localdomain systemd[1]: Starting Create netns directory...
Dec 02 08:07:15 np0005541914.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 02 08:07:15 np0005541914.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 02 08:07:15 np0005541914.localdomain systemd[1]: Finished Create netns directory.
Dec 02 08:07:15 np0005541914.localdomain sudo[57738]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 08:07:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Cumulative writes: 5093 writes, 22K keys, 5093 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5093 writes, 477 syncs, 10.68 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 1842 writes, 6555 keys, 1842 commit groups, 1.0 writes per commit group, ingest: 2.41 MB, 0.00 MB/s
                                                          Interval WAL: 1842 writes, 336 syncs, 5.48 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x562050321610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x562050321610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x562050321610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.2 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.8e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
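The dump above is the tail of a periodic RocksDB compaction-statistics report, written into the journal as one multi-line message and broken out per column-family shard (p-0 through p-2, O-0 through O-2, L and P); the BinnedLRUCache block-cache lines suggest it most likely comes from a Ceph BlueStore/RocksDB backend. As an illustrative aid only, a minimal Python sketch that pulls the uptime and cumulative-compaction figures out of such a dump could look like the following; the shard labels and line layout are taken from the text above, and the parser itself is an assumption, not part of any logged software.

    # Illustrative only -- not part of the captured journal.
    # Parses "** Compaction Stats [shard] **" sections like the ones above.
    import re
    from collections import defaultdict

    def summarize_compaction_dump(text):
        """Return {shard: {'uptime_s': float, 'cum_write_gb': float}} from a stats dump."""
        shard = None
        out = defaultdict(dict)
        for line in text.splitlines():
            line = line.strip()
            m = re.match(r"\*\* Compaction Stats \[(.+?)\] \*\*", line)
            if m:
                shard = m.group(1)
                continue
            if shard is None:
                continue
            m = re.match(r"Uptime\(secs\): ([\d.]+) total", line)
            if m:
                out[shard]["uptime_s"] = float(m.group(1))
                continue
            m = re.match(r"Cumulative compaction: ([\d.]+) GB write", line)
            if m:
                out[shard]["cum_write_gb"] = float(m.group(1))
        return dict(out)

Fed the block above, every shard reports 0.00 GB of cumulative compaction over roughly 1200 seconds of uptime, i.e. the store has been essentially idle since it came up.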
Dec 02 08:07:15 np0005541914.localdomain sudo[57797]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-defzdjaknpvkieuaohsauuttknnhwgfm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:15 np0005541914.localdomain sudo[57797]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:16 np0005541914.localdomain python3[57799]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 02 08:07:16 np0005541914.localdomain sudo[57797]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:16 np0005541914.localdomain sudo[57813]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yyzcwafewvsflpxmopujkhwbygsczglj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:16 np0005541914.localdomain sudo[57813]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:16 np0005541914.localdomain sudo[57813]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:17 np0005541914.localdomain sudo[57855]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gqrigiskjkicuunvingsnesrowtyymen ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:07:17 np0005541914.localdomain sudo[57855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:17 np0005541914.localdomain python3[57857]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
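The ansible-tripleo_container_manage call above scans /var/lib/tripleo-config/container-startup-config/step_2 for *.json startup configs and launches the corresponding containers with a concurrency of 5; the podman create/init/start events that follow are the result of that fan-out. A hypothetical sketch of the pattern is below; the file layout and the start_container helper are assumptions for illustration, not the module's real internals.

    # Hypothetical sketch of the fan-out implied by the parameters logged above
    # (config_dir, config_patterns=*.json, concurrency=5). Not the module's real code.
    import glob
    import json
    import os
    from concurrent.futures import ThreadPoolExecutor

    def start_container(name, cfg):
        # Stand-in for the real container start; just reports what would run.
        print(f"would start {name} from image {cfg.get('image')}")

    def run_step(config_dir="/var/lib/tripleo-config/container-startup-config/step_2",
                 pattern="*.json", concurrency=5):
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            for path in sorted(glob.glob(os.path.join(config_dir, pattern))):
                with open(path) as fh:
                    # Assumed layout: {container_name: config_data, ...}
                    configs = json.load(fh)
                for name, cfg in configs.items():
                    pool.submit(start_container, name, cfg)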
Dec 02 08:07:17 np0005541914.localdomain podman[57932]: 2025-12-02 08:07:17.869194855 +0000 UTC m=+0.063726045 container create 1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_id=tripleo_step2, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute_init_log, build-date=2025-11-19T00:36:58Z, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:07:17 np0005541914.localdomain podman[57933]: 2025-12-02 08:07:17.901564979 +0000 UTC m=+0.091025273 container create 768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, vcs-type=git, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtqemud_init_logs, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_id=tripleo_step2, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 02 08:07:17 np0005541914.localdomain systemd[1]: Started libpod-conmon-1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac.scope.
Dec 02 08:07:17 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:07:17 np0005541914.localdomain systemd[1]: Started libpod-conmon-768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9.scope.
Dec 02 08:07:17 np0005541914.localdomain podman[57932]: 2025-12-02 08:07:17.832225779 +0000 UTC m=+0.026756999 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 02 08:07:17 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54218a875306d5e9e02be164dfc59f569c03cec4fa589e4979e72cb65e05c169/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:07:17 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:07:17 np0005541914.localdomain podman[57933]: 2025-12-02 08:07:17.85062954 +0000 UTC m=+0.040089824 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:07:17 np0005541914.localdomain podman[57932]: 2025-12-02 08:07:17.949452563 +0000 UTC m=+0.143983753 container init 1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_compute_init_log, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, url=https://www.redhat.com, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:07:17 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bcbe901bb45e8070f2f315648c2b8d8a4260ab9ddef9da25ac029ee28a25fc8/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Dec 02 08:07:17 np0005541914.localdomain podman[57932]: 2025-12-02 08:07:17.956898834 +0000 UTC m=+0.151430024 container start 1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute_init_log, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:07:17 np0005541914.localdomain python3[57857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova
Dec 02 08:07:17 np0005541914.localdomain systemd[1]: libpod-1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac.scope: Deactivated successfully.
Dec 02 08:07:18 np0005541914.localdomain podman[57968]: 2025-12-02 08:07:18.010700452 +0000 UTC m=+0.039625159 container died 1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_id=tripleo_step2, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=nova_compute_init_log, release=1761123044)
Dec 02 08:07:18 np0005541914.localdomain podman[57933]: 2025-12-02 08:07:18.059470533 +0000 UTC m=+0.248930807 container init 768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.buildah.version=1.41.4, container_name=nova_virtqemud_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:35:22Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:07:18 np0005541914.localdomain systemd[1]: libpod-768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9.scope: Deactivated successfully.
Dec 02 08:07:18 np0005541914.localdomain podman[57969]: 2025-12-02 08:07:18.147065408 +0000 UTC m=+0.171400914 container cleanup 1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute_init_log, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:07:18 np0005541914.localdomain systemd[1]: libpod-conmon-1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac.scope: Deactivated successfully.
Dec 02 08:07:18 np0005541914.localdomain podman[57933]: 2025-12-02 08:07:18.167531763 +0000 UTC m=+0.356992087 container start 768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtqemud_init_logs, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step2, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public)
Dec 02 08:07:18 np0005541914.localdomain python3[57857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm
Dec 02 08:07:18 np0005541914.localdomain podman[58005]: 2025-12-02 08:07:18.170285478 +0000 UTC m=+0.086811722 container died 768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_id=tripleo_step2, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtqemud_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, url=https://www.redhat.com, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64)
Dec 02 08:07:18 np0005541914.localdomain podman[58005]: 2025-12-02 08:07:18.199330048 +0000 UTC m=+0.115856252 container cleanup 768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step2)
Dec 02 08:07:18 np0005541914.localdomain systemd[1]: libpod-conmon-768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9.scope: Deactivated successfully.
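The two PODMAN-CONTAINER-DEBUG lines above show the full podman run commands derived from each container's config_data (nova_compute_init_log and nova_virtqemud_init_logs). A rough reconstruction of that mapping follows, covering only the options visible in this log and omitting label handling; it is an approximation for reading the log, not the module's actual code.

    # Approximate config_data -> "podman run ..." mapping, mirroring the
    # PODMAN-CONTAINER-DEBUG lines above (labels omitted, options simplified).
    def podman_run_args(name, cfg, log_dir="/var/log/containers/stdouts"):
        args = ["podman", "run", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid", "--detach=True"]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        args += ["--log-driver", "k8s-file",
                 "--log-opt", f"path={log_dir}/{name}.log"]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        args.append(f"--privileged={cfg.get('privileged', False)}")
        for opt in cfg.get("security_opt", []):
            args += ["--security-opt", opt]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        args += cfg.get("command", [])
        return args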
Dec 02 08:07:18 np0005541914.localdomain podman[58117]: 2025-12-02 08:07:18.55458889 +0000 UTC m=+0.079001970 container create 7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, container_name=create_virtlogd_wrapper, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, release=1761123044, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=)
Dec 02 08:07:18 np0005541914.localdomain podman[58128]: 2025-12-02 08:07:18.576635083 +0000 UTC m=+0.076516273 container create 52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=create_haproxy_wrapper, vendor=Red Hat, Inc., config_id=tripleo_step2, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Dec 02 08:07:18 np0005541914.localdomain systemd[1]: Started libpod-conmon-7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55.scope.
Dec 02 08:07:18 np0005541914.localdomain systemd[1]: Started libpod-conmon-52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7.scope.
Dec 02 08:07:18 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:07:18 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/297413164ba634cc6890ee6589cadf094aa7e1bc60468b5e2b171a73d85ccd70/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Dec 02 08:07:18 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:07:18 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e958529aaf3ea18edfde977fa21cc545be3514f2ed0637a72be1cc0091549c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 08:07:18 np0005541914.localdomain podman[58117]: 2025-12-02 08:07:18.514828558 +0000 UTC m=+0.039241728 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:07:18 np0005541914.localdomain podman[58117]: 2025-12-02 08:07:18.613979111 +0000 UTC m=+0.138392191 container init 7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step2, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=create_virtlogd_wrapper, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4)
Dec 02 08:07:18 np0005541914.localdomain podman[58128]: 2025-12-02 08:07:18.622103653 +0000 UTC m=+0.121984833 container init 52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, container_name=create_haproxy_wrapper, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Dec 02 08:07:18 np0005541914.localdomain podman[58128]: 2025-12-02 08:07:18.535138427 +0000 UTC m=+0.035019627 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 02 08:07:18 np0005541914.localdomain podman[58128]: 2025-12-02 08:07:18.65425236 +0000 UTC m=+0.154133540 container start 52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step2, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=create_haproxy_wrapper, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']})
Dec 02 08:07:18 np0005541914.localdomain podman[58128]: 2025-12-02 08:07:18.654863518 +0000 UTC m=+0.154744788 container attach 52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step2, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=create_haproxy_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-type=git)
Dec 02 08:07:18 np0005541914.localdomain podman[58117]: 2025-12-02 08:07:18.674478626 +0000 UTC m=+0.198891716 container start 7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, container_name=create_virtlogd_wrapper, config_id=tripleo_step2, name=rhosp17/openstack-nova-libvirt, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec 02 08:07:18 np0005541914.localdomain podman[58117]: 2025-12-02 08:07:18.674807706 +0000 UTC m=+0.199220856 container attach 7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, config_id=tripleo_step2, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=create_virtlogd_wrapper, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:07:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-9bcbe901bb45e8070f2f315648c2b8d8a4260ab9ddef9da25ac029ee28a25fc8-merged.mount: Deactivated successfully.
Dec 02 08:07:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-768ff5af5abe6761c3f4aebc2f0946ea01d845e25dc3aac5977c8065993110c9-userdata-shm.mount: Deactivated successfully.
Dec 02 08:07:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-54218a875306d5e9e02be164dfc59f569c03cec4fa589e4979e72cb65e05c169-merged.mount: Deactivated successfully.
Dec 02 08:07:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1a68b62bc3beeadf71f3e8310d8e1e46bd339574192211eef7414f249be59eac-userdata-shm.mount: Deactivated successfully.
Dec 02 08:07:19 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 02 08:07:19 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 02 08:07:19 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 59 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.809239388s) [1,5,3] r=-1 lpr=59 pi=[51,59)/1 luod=0'0 crt=42'39 mlcod 0'0 active pruub 1216.785766602s@ mbc={}] start_peering_interval up [3,4,2] -> [1,5,3], acting [3,4,2] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:19 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 59 pg[7.7( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.809184074s) [1,5,3] r=-1 lpr=59 pi=[51,59)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1216.785766602s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:19 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 59 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.812079430s) [1,5,3] r=-1 lpr=59 pi=[51,59)/1 luod=0'0 crt=42'39 mlcod 0'0 active pruub 1216.789062500s@ mbc={}] start_peering_interval up [3,4,2] -> [1,5,3], acting [3,4,2] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:19 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 59 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=12.812050819s) [1,5,3] r=-1 lpr=59 pi=[51,59)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1216.789062500s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:19 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 59 pg[7.7( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59) [1,5,3] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:07:19 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 59 pg[7.f( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59) [1,5,3] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:07:20 np0005541914.localdomain ovs-vsctl[58240]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Dec 02 08:07:20 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Dec 02 08:07:20 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Dec 02 08:07:20 np0005541914.localdomain systemd[1]: libpod-7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55.scope: Deactivated successfully.
Dec 02 08:07:20 np0005541914.localdomain systemd[1]: libpod-7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55.scope: Consumed 2.186s CPU time.
Dec 02 08:07:20 np0005541914.localdomain podman[58117]: 2025-12-02 08:07:20.814500589 +0000 UTC m=+2.338913759 container died 7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, com.redhat.component=openstack-nova-libvirt-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step2, container_name=create_virtlogd_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-nova-libvirt)
Dec 02 08:07:20 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 60 pg[7.7( v 42'39 lc 42'21 (0'0,42'39] local-lis/les=59/60 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59) [1,5,3] r=0 lpr=59 pi=[51,59)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:07:20 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 60 pg[7.f( v 42'39 lc 42'1 (0'0,42'39] local-lis/les=59/60 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=59) [1,5,3] r=0 lpr=59 pi=[51,59)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(1+2)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:07:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55-userdata-shm.mount: Deactivated successfully.
Dec 02 08:07:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-297413164ba634cc6890ee6589cadf094aa7e1bc60468b5e2b171a73d85ccd70-merged.mount: Deactivated successfully.
Dec 02 08:07:20 np0005541914.localdomain podman[58373]: 2025-12-02 08:07:20.899789742 +0000 UTC m=+0.076079519 container cleanup 7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, build-date=2025-11-19T00:35:22Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step2, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=create_virtlogd_wrapper, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible)
Dec 02 08:07:20 np0005541914.localdomain systemd[1]: libpod-conmon-7d36d529cfda69929ba20af10bed8f3a76fb676437af457e32de26b87125dd55.scope: Deactivated successfully.
Dec 02 08:07:20 np0005541914.localdomain python3[57857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper
Dec 02 08:07:21 np0005541914.localdomain systemd[1]: libpod-52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7.scope: Deactivated successfully.
Dec 02 08:07:21 np0005541914.localdomain systemd[1]: libpod-52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7.scope: Consumed 2.170s CPU time.
Dec 02 08:07:21 np0005541914.localdomain podman[58128]: 2025-12-02 08:07:21.47432979 +0000 UTC m=+2.974210970 container died 52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step2, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, container_name=create_haproxy_wrapper, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:07:21 np0005541914.localdomain podman[58414]: 2025-12-02 08:07:21.535137865 +0000 UTC m=+0.051488106 container cleanup 52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=create_haproxy_wrapper, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:07:21 np0005541914.localdomain systemd[1]: libpod-conmon-52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7.scope: Deactivated successfully.
Dec 02 08:07:21 np0005541914.localdomain python3[57857]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers
Dec 02 08:07:21 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 02 08:07:21 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 02 08:07:21 np0005541914.localdomain sudo[57855]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:21 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 61 pg[7.8( v 42'39 (0'0,42'39] local-lis/les=45/46 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=12.421516418s) [3,4,5] r=-1 lpr=61 pi=[45,61)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1222.979248047s@ mbc={}] start_peering_interval up [1,5,3] -> [3,4,5], acting [1,5,3] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:21 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 61 pg[7.8( v 42'39 (0'0,42'39] local-lis/les=45/46 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=12.420804977s) [3,4,5] r=-1 lpr=61 pi=[45,61)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1222.979248047s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a1e958529aaf3ea18edfde977fa21cc545be3514f2ed0637a72be1cc0091549c-merged.mount: Deactivated successfully.
Dec 02 08:07:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-52525e1f46c0237e5557da21526ac2245465c71fd97b338531639879fe8a2bd7-userdata-shm.mount: Deactivated successfully.
Dec 02 08:07:21 np0005541914.localdomain sudo[58467]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltrepqteetnqgnnatpbvcmwipeynpayl ; /usr/bin/python3
Dec 02 08:07:21 np0005541914.localdomain sudo[58467]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:22 np0005541914.localdomain python3[58469]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:07:22 np0005541914.localdomain sudo[58467]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:22 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 02 08:07:22 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 02 08:07:22 np0005541914.localdomain sshd[58484]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:07:22 np0005541914.localdomain sudo[58517]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylkzybgyvoupeqqmzhhpcljegklqkjih ; /usr/bin/python3
Dec 02 08:07:22 np0005541914.localdomain sudo[58517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:22 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 61 pg[7.8( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=45/45 les/c/f=46/46/0 sis=61) [3,4,5] r=1 lpr=61 pi=[45,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:22 np0005541914.localdomain sudo[58517]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:23 np0005541914.localdomain sudo[58560]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ovudvxysxsmuikvmmlxzauzlqjqegbbc ; /usr/bin/python3
Dec 02 08:07:23 np0005541914.localdomain sudo[58560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:23 np0005541914.localdomain sudo[58560]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:23 np0005541914.localdomain sudo[58590]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ekoqwbqtmrrobefhldwtspaguaupotli ; /usr/bin/python3
Dec 02 08:07:23 np0005541914.localdomain sudo[58590]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:23 np0005541914.localdomain python3[58592]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005541914 step=2 update_config_hash_only=False
Dec 02 08:07:23 np0005541914.localdomain sudo[58590]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:23 np0005541914.localdomain sshd[58484]: Invalid user angel from 182.253.156.173 port 41782
Dec 02 08:07:24 np0005541914.localdomain sshd[58484]: Received disconnect from 182.253.156.173 port 41782:11: Bye Bye [preauth]
Dec 02 08:07:24 np0005541914.localdomain sshd[58484]: Disconnected from invalid user angel 182.253.156.173 port 41782 [preauth]
Dec 02 08:07:24 np0005541914.localdomain sudo[58606]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vjjcslnonuwqrtgtvwjidzzkxngywibz ; /usr/bin/python3
Dec 02 08:07:24 np0005541914.localdomain sudo[58606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:24 np0005541914.localdomain python3[58608]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:07:24 np0005541914.localdomain sudo[58606]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:24 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 02 08:07:24 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 02 08:07:24 np0005541914.localdomain sudo[58622]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odnklsambsuxomgqxoxovvrcelrfxnma ; /usr/bin/python3
Dec 02 08:07:24 np0005541914.localdomain sudo[58622]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:07:24 np0005541914.localdomain python3[58624]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 02 08:07:24 np0005541914.localdomain sudo[58622]: pam_unix(sudo:session): session closed for user root
Dec 02 08:07:27 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 02 08:07:27 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 02 08:07:29 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 02 08:07:29 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 02 08:07:29 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 63 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=47/48 n=0 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.938389778s) [0,2,4] r=2 lpr=63 pi=[47,63)/1 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1222.738525391s@ mbc={}] start_peering_interval up [4,2,3] -> [0,2,4], acting [4,2,3] -> [0,2,4], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:29 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 63 pg[7.9( v 42'39 (0'0,42'39] local-lis/les=47/48 n=0 ec=45/39 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=8.938322067s) [0,2,4] r=2 lpr=63 pi=[47,63)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1222.738525391s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:31 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 65 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=65 pruub=8.810658455s) [2,0,4] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1229.338745117s@ mbc={}] start_peering_interval up [3,5,1] -> [2,0,4], acting [3,5,1] -> [2,0,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:31 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 65 pg[7.a( v 42'39 (0'0,42'39] local-lis/les=49/50 n=1 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=65 pruub=8.810417175s) [2,0,4] r=-1 lpr=65 pi=[49,65)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1229.338745117s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:32 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 65 pg[7.a( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=49/49 les/c/f=50/50/0 sis=65) [2,0,4] r=2 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:33 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 02 08:07:33 np0005541914.localdomain ceph-osd[32707]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 02 08:07:34 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 67 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=14.406540871s) [3,1,2] r=-1 lpr=67 pi=[51,67)/1 luod=0'0 crt=42'39 mlcod 0'0 active pruub 1232.786865234s@ mbc={}] start_peering_interval up [3,4,2] -> [3,1,2], acting [3,4,2] -> [3,1,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:34 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 67 pg[7.b( v 42'39 (0'0,42'39] local-lis/les=51/52 n=1 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=14.406447411s) [3,1,2] r=-1 lpr=67 pi=[51,67)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1232.786865234s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:35 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 68 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=53/54 n=1 ec=45/39 lis/c=53/53 les/c/f=54/54/0 sis=68 pruub=15.075953484s) [1,3,2] r=0 lpr=68 pi=[53,68)/1 luod=0'0 crt=42'39 lcod 0'0 mlcod 0'0 active pruub 1239.598876953s@ mbc={}] start_peering_interval up [0,1,2] -> [1,3,2], acting [0,1,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:35 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 68 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=53/54 n=1 ec=45/39 lis/c=53/53 les/c/f=54/54/0 sis=68 pruub=15.075953484s) [1,3,2] r=0 lpr=68 pi=[53,68)/1 crt=42'39 lcod 0'0 mlcod 0'0 unknown pruub 1239.598876953s@ mbc={}] state<Start>: transitioning to Primary
Dec 02 08:07:35 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 67 pg[7.b( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=51/51 les/c/f=52/52/0 sis=67) [3,1,2] r=1 lpr=67 pi=[51,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:36 np0005541914.localdomain sshd[58625]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:07:37 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 69 pg[7.c( v 42'39 (0'0,42'39] local-lis/les=68/69 n=1 ec=45/39 lis/c=53/53 les/c/f=54/54/0 sis=68) [1,3,2] r=0 lpr=68 pi=[53,68)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:07:37 np0005541914.localdomain sshd[58625]: Invalid user t3rr0r from 103.52.115.25 port 56430
Dec 02 08:07:38 np0005541914.localdomain sshd[58625]: Received disconnect from 103.52.115.25 port 56430:11: Bye Bye [preauth]
Dec 02 08:07:38 np0005541914.localdomain sshd[58625]: Disconnected from invalid user t3rr0r 103.52.115.25 port 56430 [preauth]
Dec 02 08:07:38 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 70 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=55/56 n=1 ec=45/39 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=12.525090218s) [1,3,5] r=-1 lpr=70 pi=[55,70)/1 luod=0'0 crt=42'39 mlcod 0'0 active pruub 1234.853393555s@ mbc={}] start_peering_interval up [2,0,4] -> [1,3,5], acting [2,0,4] -> [1,3,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:38 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 70 pg[7.d( v 42'39 (0'0,42'39] local-lis/les=55/56 n=1 ec=45/39 lis/c=55/55 les/c/f=56/56/0 sis=70 pruub=12.525009155s) [1,3,5] r=-1 lpr=70 pi=[55,70)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1234.853393555s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:38 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 70 pg[7.d( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=55/55 les/c/f=56/56/0 sis=70) [1,3,5] r=0 lpr=70 pi=[55,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 02 08:07:39 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 71 pg[7.d( v 42'39 lc 42'13 (0'0,42'39] local-lis/les=70/71 n=1 ec=45/39 lis/c=55/55 les/c/f=56/56/0 sis=70) [1,3,5] r=0 lpr=70 pi=[55,70)/1 crt=42'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+3)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 02 08:07:42 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 02 08:07:42 np0005541914.localdomain ceph-osd[31770]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 02 08:07:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:07:44 np0005541914.localdomain podman[58628]: 2025-12-02 08:07:44.263835397 +0000 UTC m=+0.266400379 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, release=1761123044, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:07:44 np0005541914.localdomain podman[58628]: 2025-12-02 08:07:44.455986403 +0000 UTC m=+0.458551315 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z)
Dec 02 08:07:44 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:07:45 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 72 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=57/58 n=1 ec=45/39 lis/c=57/57 les/c/f=58/58/0 sis=72 pruub=15.034337997s) [3,5,1] r=-1 lpr=72 pi=[57,72)/1 luod=0'0 crt=42'39 mlcod 0'0 active pruub 1244.863769531s@ mbc={}] start_peering_interval up [0,4,5] -> [3,5,1], acting [0,4,5] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:45 np0005541914.localdomain ceph-osd[32707]: osd.4 pg_epoch: 72 pg[7.e( v 42'39 (0'0,42'39] local-lis/les=57/58 n=1 ec=45/39 lis/c=57/57 les/c/f=58/58/0 sis=72 pruub=15.034082413s) [3,5,1] r=-1 lpr=72 pi=[57,72)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1244.863769531s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:46 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 72 pg[7.e( empty local-lis/les=0/0 n=0 ec=45/39 lis/c=57/57 les/c/f=58/58/0 sis=72) [3,5,1] r=2 lpr=72 pi=[57,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Dec 02 08:07:47 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 74 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=59/60 n=3 ec=45/39 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=13.103336334s) [0,5,1] r=2 lpr=74 pi=[59,74)/1 crt=42'39 mlcod 0'0 active pruub 1249.551757812s@ mbc={255={}}] start_peering_interval up [1,5,3] -> [0,5,1], acting [1,5,3] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Dec 02 08:07:47 np0005541914.localdomain ceph-osd[31770]: osd.1 pg_epoch: 74 pg[7.f( v 42'39 (0'0,42'39] local-lis/les=59/60 n=3 ec=45/39 lis/c=59/59 les/c/f=60/60/0 sis=74 pruub=13.102553368s) [0,5,1] r=2 lpr=74 pi=[59,74)/1 crt=42'39 mlcod 0'0 unknown NOTIFY pruub 1249.551757812s@ mbc={}] state<Start>: transitioning to Stray
Dec 02 08:08:01 np0005541914.localdomain sudo[58656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:08:01 np0005541914.localdomain sudo[58656]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:08:01 np0005541914.localdomain sudo[58656]: pam_unix(sudo:session): session closed for user root
Dec 02 08:08:01 np0005541914.localdomain sudo[58671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:08:01 np0005541914.localdomain sudo[58671]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:08:01 np0005541914.localdomain sudo[58671]: pam_unix(sudo:session): session closed for user root
Dec 02 08:08:02 np0005541914.localdomain sudo[58718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:08:02 np0005541914.localdomain sudo[58718]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:08:02 np0005541914.localdomain sudo[58718]: pam_unix(sudo:session): session closed for user root
Dec 02 08:08:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:08:15 np0005541914.localdomain podman[58733]: 2025-12-02 08:08:15.07651386 +0000 UTC m=+0.080011530 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, tcib_managed=true, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr)
Dec 02 08:08:15 np0005541914.localdomain podman[58733]: 2025-12-02 08:08:15.278446179 +0000 UTC m=+0.281943849 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step1)
Dec 02 08:08:15 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:08:38 np0005541914.localdomain sshd[58763]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:08:40 np0005541914.localdomain sshd[58763]: Received disconnect from 182.253.156.173 port 35290:11: Bye Bye [preauth]
Dec 02 08:08:40 np0005541914.localdomain sshd[58763]: Disconnected from authenticating user root 182.253.156.173 port 35290 [preauth]
Dec 02 08:08:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:08:46 np0005541914.localdomain podman[58765]: 2025-12-02 08:08:46.064653741 +0000 UTC m=+0.075554038 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true)
Dec 02 08:08:46 np0005541914.localdomain podman[58765]: 2025-12-02 08:08:46.28716319 +0000 UTC m=+0.298063557 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team)
Dec 02 08:08:46 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:08:50 np0005541914.localdomain sshd[58796]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:08:51 np0005541914.localdomain sshd[58796]: Invalid user sol from 45.148.10.240 port 59666
Dec 02 08:08:51 np0005541914.localdomain sshd[58796]: Connection closed by invalid user sol 45.148.10.240 port 59666 [preauth]
Dec 02 08:08:58 np0005541914.localdomain sshd[58798]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:09:00 np0005541914.localdomain sshd[58798]: Invalid user elena from 103.52.115.25 port 45914
Dec 02 08:09:00 np0005541914.localdomain sshd[58798]: Received disconnect from 103.52.115.25 port 45914:11: Bye Bye [preauth]
Dec 02 08:09:00 np0005541914.localdomain sshd[58798]: Disconnected from invalid user elena 103.52.115.25 port 45914 [preauth]
Dec 02 08:09:02 np0005541914.localdomain sudo[58800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:09:02 np0005541914.localdomain sudo[58800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:09:02 np0005541914.localdomain sudo[58800]: pam_unix(sudo:session): session closed for user root
Dec 02 08:09:02 np0005541914.localdomain sudo[58815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 08:09:02 np0005541914.localdomain sudo[58815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:09:03 np0005541914.localdomain systemd[1]: tmp-crun.jwjeZo.mount: Deactivated successfully.
Dec 02 08:09:03 np0005541914.localdomain podman[58901]: 2025-12-02 08:09:03.420898484 +0000 UTC m=+0.085854573 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, RELEASE=main, name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 08:09:03 np0005541914.localdomain podman[58901]: 2025-12-02 08:09:03.55180466 +0000 UTC m=+0.216760699 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, distribution-scope=public, name=rhceph, GIT_BRANCH=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, CEPH_POINT_RELEASE=)
Dec 02 08:09:03 np0005541914.localdomain sudo[58815]: pam_unix(sudo:session): session closed for user root
Dec 02 08:09:03 np0005541914.localdomain sudo[58967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:09:03 np0005541914.localdomain sudo[58967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:09:03 np0005541914.localdomain sudo[58967]: pam_unix(sudo:session): session closed for user root
Dec 02 08:09:03 np0005541914.localdomain sudo[58982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:09:04 np0005541914.localdomain sudo[58982]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:09:04 np0005541914.localdomain sudo[58982]: pam_unix(sudo:session): session closed for user root
Dec 02 08:09:05 np0005541914.localdomain sudo[59029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:09:05 np0005541914.localdomain sudo[59029]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:09:05 np0005541914.localdomain sudo[59029]: pam_unix(sudo:session): session closed for user root
Dec 02 08:09:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:09:17 np0005541914.localdomain podman[59044]: 2025-12-02 08:09:17.128214043 +0000 UTC m=+0.129160022 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, distribution-scope=public, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64)
Dec 02 08:09:17 np0005541914.localdomain podman[59044]: 2025-12-02 08:09:17.356995861 +0000 UTC m=+0.357941910 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step1, architecture=x86_64, url=https://www.redhat.com, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:09:17 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:09:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:09:48 np0005541914.localdomain podman[59073]: 2025-12-02 08:09:48.112618056 +0000 UTC m=+0.113688244 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true)
Dec 02 08:09:48 np0005541914.localdomain podman[59073]: 2025-12-02 08:09:48.340761283 +0000 UTC m=+0.341831441 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible)
Dec 02 08:09:48 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:09:54 np0005541914.localdomain sshd[59102]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:09:56 np0005541914.localdomain sshd[59102]: Invalid user remote from 182.253.156.173 port 54700
Dec 02 08:09:56 np0005541914.localdomain sshd[59102]: Received disconnect from 182.253.156.173 port 54700:11: Bye Bye [preauth]
Dec 02 08:09:56 np0005541914.localdomain sshd[59102]: Disconnected from invalid user remote 182.253.156.173 port 54700 [preauth]
Dec 02 08:10:05 np0005541914.localdomain sudo[59104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:10:05 np0005541914.localdomain sudo[59104]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:10:05 np0005541914.localdomain sudo[59104]: pam_unix(sudo:session): session closed for user root
Dec 02 08:10:05 np0005541914.localdomain sudo[59119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:10:05 np0005541914.localdomain sudo[59119]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:10:06 np0005541914.localdomain sudo[59119]: pam_unix(sudo:session): session closed for user root
Dec 02 08:10:06 np0005541914.localdomain sudo[59166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:10:06 np0005541914.localdomain sudo[59166]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:10:06 np0005541914.localdomain sudo[59166]: pam_unix(sudo:session): session closed for user root
Dec 02 08:10:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:10:19 np0005541914.localdomain systemd[1]: tmp-crun.7bsfE0.mount: Deactivated successfully.
Dec 02 08:10:19 np0005541914.localdomain podman[59181]: 2025-12-02 08:10:19.077881411 +0000 UTC m=+0.082578969 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z)
Dec 02 08:10:19 np0005541914.localdomain podman[59181]: 2025-12-02 08:10:19.285835072 +0000 UTC m=+0.290532640 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1761123044)
Dec 02 08:10:19 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:10:20 np0005541914.localdomain sshd[59210]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:10:21 np0005541914.localdomain sshd[59210]: Invalid user roland from 103.52.115.25 port 34068
Dec 02 08:10:21 np0005541914.localdomain sshd[59210]: Received disconnect from 103.52.115.25 port 34068:11: Bye Bye [preauth]
Dec 02 08:10:21 np0005541914.localdomain sshd[59210]: Disconnected from invalid user roland 103.52.115.25 port 34068 [preauth]
Dec 02 08:10:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:10:50 np0005541914.localdomain systemd[1]: tmp-crun.Rx9uhK.mount: Deactivated successfully.
Dec 02 08:10:50 np0005541914.localdomain podman[59212]: 2025-12-02 08:10:50.079725835 +0000 UTC m=+0.086357865 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=metrics_qdr, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible)
Dec 02 08:10:50 np0005541914.localdomain podman[59212]: 2025-12-02 08:10:50.264023099 +0000 UTC m=+0.270655089 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:49:46Z, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:10:50 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:10:52 np0005541914.localdomain sshd[59242]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:10:52 np0005541914.localdomain sshd[59242]: Invalid user sol from 45.148.10.240 port 58510
Dec 02 08:10:52 np0005541914.localdomain sshd[59242]: Connection closed by invalid user sol 45.148.10.240 port 58510 [preauth]
Dec 02 08:11:06 np0005541914.localdomain sudo[59244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:11:07 np0005541914.localdomain sudo[59244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:11:07 np0005541914.localdomain sudo[59244]: pam_unix(sudo:session): session closed for user root
Dec 02 08:11:07 np0005541914.localdomain sudo[59259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:11:07 np0005541914.localdomain sudo[59259]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:11:07 np0005541914.localdomain sudo[59259]: pam_unix(sudo:session): session closed for user root
Dec 02 08:11:08 np0005541914.localdomain sudo[59306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:11:08 np0005541914.localdomain sudo[59306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:11:08 np0005541914.localdomain sudo[59306]: pam_unix(sudo:session): session closed for user root
Dec 02 08:11:16 np0005541914.localdomain sshd[59321]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:11:18 np0005541914.localdomain sshd[59321]: Invalid user user8 from 182.253.156.173 port 47660
Dec 02 08:11:18 np0005541914.localdomain sshd[59321]: Received disconnect from 182.253.156.173 port 47660:11: Bye Bye [preauth]
Dec 02 08:11:18 np0005541914.localdomain sshd[59321]: Disconnected from invalid user user8 182.253.156.173 port 47660 [preauth]
Dec 02 08:11:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:11:21 np0005541914.localdomain podman[59323]: 2025-12-02 08:11:21.067152186 +0000 UTC m=+0.074167661 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com)
Dec 02 08:11:21 np0005541914.localdomain podman[59323]: 2025-12-02 08:11:21.257157621 +0000 UTC m=+0.264173126 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vcs-type=git, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, version=17.1.12)
Dec 02 08:11:21 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:11:42 np0005541914.localdomain sshd[59351]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:11:43 np0005541914.localdomain sshd[59351]: Received disconnect from 103.52.115.25 port 41182:11: Bye Bye [preauth]
Dec 02 08:11:43 np0005541914.localdomain sshd[59351]: Disconnected from authenticating user root 103.52.115.25 port 41182 [preauth]
Dec 02 08:11:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:11:52 np0005541914.localdomain podman[59353]: 2025-12-02 08:11:52.074133162 +0000 UTC m=+0.081527952 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:11:52 np0005541914.localdomain podman[59353]: 2025-12-02 08:11:52.286527184 +0000 UTC m=+0.293921984 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:11:52 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:12:08 np0005541914.localdomain sudo[59382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:12:08 np0005541914.localdomain sudo[59382]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:12:08 np0005541914.localdomain sudo[59382]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:08 np0005541914.localdomain sudo[59397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:12:08 np0005541914.localdomain sudo[59397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:12:09 np0005541914.localdomain sudo[59397]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:10 np0005541914.localdomain sudo[59443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:12:10 np0005541914.localdomain sudo[59443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:12:10 np0005541914.localdomain sudo[59443]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:10 np0005541914.localdomain sudo[59503]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nqucuamdjdfntnlcwfzphogduayogryc ; /usr/bin/python3
Dec 02 08:12:10 np0005541914.localdomain sudo[59503]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:10 np0005541914.localdomain python3[59505]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:12:10 np0005541914.localdomain sudo[59503]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:11 np0005541914.localdomain sudo[59548]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bvyjpglboyniqkkenilqqhejfdawfwcf ; /usr/bin/python3
Dec 02 08:12:11 np0005541914.localdomain sudo[59548]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:11 np0005541914.localdomain python3[59550]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663130.5146744-98264-70616013773059/source _original_basename=tmp0rxr48ke follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:11 np0005541914.localdomain sudo[59548]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:12 np0005541914.localdomain sudo[59578]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewgcoyuaqviojgljtwistmkkbvimqflb ; /usr/bin/python3
Dec 02 08:12:12 np0005541914.localdomain sudo[59578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:12 np0005541914.localdomain python3[59580]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:12:12 np0005541914.localdomain sudo[59578]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:13 np0005541914.localdomain sudo[59628]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-knvqmgkkeryfyxwvamgqjjmzpjnhdvvp ; /usr/bin/python3
Dec 02 08:12:13 np0005541914.localdomain sudo[59628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:13 np0005541914.localdomain sudo[59628]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:13 np0005541914.localdomain sudo[59646]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bhckbegzypjmejcthhyyjjxzjhgnvyem ; /usr/bin/python3
Dec 02 08:12:13 np0005541914.localdomain sudo[59646]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:13 np0005541914.localdomain sudo[59646]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:14 np0005541914.localdomain sudo[59750]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hrtsqpshgnbjymjrrqpjdrgtzyhcwrms ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663133.6802697-98474-155704683432526/async_wrapper.py 925576933445 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663133.6802697-98474-155704683432526/AnsiballZ_command.py _
Dec 02 08:12:14 np0005541914.localdomain sudo[59750]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 02 08:12:14 np0005541914.localdomain ansible-async_wrapper.py[59752]: Invoked with 925576933445 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663133.6802697-98474-155704683432526/AnsiballZ_command.py _
Dec 02 08:12:14 np0005541914.localdomain ansible-async_wrapper.py[59755]: Starting module and watcher
Dec 02 08:12:14 np0005541914.localdomain ansible-async_wrapper.py[59755]: Start watching 59756 (3600)
Dec 02 08:12:14 np0005541914.localdomain ansible-async_wrapper.py[59756]: Start module (59756)
Dec 02 08:12:14 np0005541914.localdomain ansible-async_wrapper.py[59752]: Return async_wrapper task started.
Dec 02 08:12:14 np0005541914.localdomain sudo[59750]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:14 np0005541914.localdomain sudo[59771]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-crrihwjdoexsdjpiocgaqxxbfiqludhc ; /usr/bin/python3
Dec 02 08:12:14 np0005541914.localdomain sudo[59771]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:14 np0005541914.localdomain python3[59774]: ansible-ansible.legacy.async_status Invoked with jid=925576933445.59752 mode=status _async_dir=/tmp/.ansible_async
Dec 02 08:12:14 np0005541914.localdomain sudo[59771]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:17 np0005541914.localdomain puppet-user[59776]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 08:12:17 np0005541914.localdomain puppet-user[59776]:    (file: /etc/puppet/hiera.yaml)
Dec 02 08:12:17 np0005541914.localdomain puppet-user[59776]: Warning: Undefined variable '::deploy_config_name';
Dec 02 08:12:17 np0005541914.localdomain puppet-user[59776]:    (file & line not available)
Dec 02 08:12:17 np0005541914.localdomain puppet-user[59776]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 08:12:17 np0005541914.localdomain puppet-user[59776]:    (file & line not available)
Dec 02 08:12:17 np0005541914.localdomain puppet-user[59776]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.11 seconds
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]: Notice: Applied catalog in 0.04 seconds
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]: Application:
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:    Initial environment: production
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:    Converged environment: production
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:          Run mode: user
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]: Changes:
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]: Events:
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]: Resources:
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:             Total: 10
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]: Time:
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:          Schedule: 0.00
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:              File: 0.00
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:              Exec: 0.01
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:            Augeas: 0.01
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:    Transaction evaluation: 0.03
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:    Catalog application: 0.04
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:    Config retrieval: 0.14
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:          Last run: 1764663138
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:        Filebucket: 0.00
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:             Total: 0.04
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]: Version:
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:            Config: 1764663137
Dec 02 08:12:18 np0005541914.localdomain puppet-user[59776]:            Puppet: 7.10.0
Dec 02 08:12:18 np0005541914.localdomain ansible-async_wrapper.py[59756]: Module complete (59756)
Dec 02 08:12:19 np0005541914.localdomain ansible-async_wrapper.py[59755]: Done in kid B.
Dec 02 08:12:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:12:23 np0005541914.localdomain podman[59888]: 2025-12-02 08:12:23.08633501 +0000 UTC m=+0.087371315 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, distribution-scope=public, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:12:23 np0005541914.localdomain podman[59888]: 2025-12-02 08:12:23.293897328 +0000 UTC m=+0.294933653 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, container_name=metrics_qdr, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, name=rhosp17/openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:12:23 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:12:24 np0005541914.localdomain sudo[59931]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lucknmlgpefldcrqktswhntebmxftgyl ; /usr/bin/python3
Dec 02 08:12:24 np0005541914.localdomain sudo[59931]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:24 np0005541914.localdomain python3[59933]: ansible-ansible.legacy.async_status Invoked with jid=925576933445.59752 mode=status _async_dir=/tmp/.ansible_async
Dec 02 08:12:24 np0005541914.localdomain sudo[59931]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:25 np0005541914.localdomain sudo[59947]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aunriqkbvdwyxxankmmzpiqkxxcgxdnl ; /usr/bin/python3
Dec 02 08:12:25 np0005541914.localdomain sudo[59947]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:25 np0005541914.localdomain python3[59949]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 08:12:25 np0005541914.localdomain sudo[59947]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:26 np0005541914.localdomain sudo[59963]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-equiogseeyggqqvdvswzogomqvkkjngr ; /usr/bin/python3
Dec 02 08:12:26 np0005541914.localdomain sudo[59963]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:26 np0005541914.localdomain python3[59965]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:12:26 np0005541914.localdomain sudo[59963]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:26 np0005541914.localdomain sudo[60013]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-isvwtswuielndgzrnwzfqgtuaogldxzl ; /usr/bin/python3
Dec 02 08:12:26 np0005541914.localdomain sudo[60013]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:27 np0005541914.localdomain python3[60015]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:12:27 np0005541914.localdomain sudo[60013]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:27 np0005541914.localdomain sudo[60031]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjggylcwsryuyblvrzbwkeigvsxpgkmg ; /usr/bin/python3
Dec 02 08:12:27 np0005541914.localdomain sudo[60031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:27 np0005541914.localdomain python3[60033]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp4xfhx6hf recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 08:12:27 np0005541914.localdomain sudo[60031]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:27 np0005541914.localdomain sudo[60061]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ssccwqqcrsqzpqhcpcigdsfzkrtnruwl ; /usr/bin/python3
Dec 02 08:12:27 np0005541914.localdomain sudo[60061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:27 np0005541914.localdomain python3[60063]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:27 np0005541914.localdomain sudo[60061]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:27 np0005541914.localdomain sudo[60077]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-alsgmrlgbcjtixzuygwnaldcbtjwvbnz ; /usr/bin/python3
Dec 02 08:12:27 np0005541914.localdomain sudo[60077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:28 np0005541914.localdomain sudo[60077]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:28 np0005541914.localdomain sudo[60164]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kajpvdqafxtopeuiuocsunrmwhkcjupm ; /usr/bin/python3
Dec 02 08:12:28 np0005541914.localdomain sudo[60164]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:28 np0005541914.localdomain python3[60166]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 02 08:12:28 np0005541914.localdomain sudo[60164]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:29 np0005541914.localdomain sudo[60183]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jlmttjgoxrlgpouwpmqprsvomuqqgdsx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:29 np0005541914.localdomain sudo[60183]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:29 np0005541914.localdomain python3[60185]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:29 np0005541914.localdomain sudo[60183]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:30 np0005541914.localdomain sudo[60199]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xygejiaolryedffrjxaqifcfpczlcqop ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:30 np0005541914.localdomain sudo[60199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:30 np0005541914.localdomain sudo[60199]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:31 np0005541914.localdomain sudo[60216]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nzirgjrfjsikunfbjcltzqavzhmwmswj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:31 np0005541914.localdomain sudo[60216]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:31 np0005541914.localdomain python3[60218]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:12:31 np0005541914.localdomain sudo[60216]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:31 np0005541914.localdomain sudo[60266]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fswbfgeqffcmvtxwahutdocnsgapldbe ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:31 np0005541914.localdomain sudo[60266]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:31 np0005541914.localdomain python3[60268]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:12:31 np0005541914.localdomain sudo[60266]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:31 np0005541914.localdomain sudo[60284]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvekwcrlpooofbrvuqzkbnrlhvdtgosx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:31 np0005541914.localdomain sudo[60284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:32 np0005541914.localdomain python3[60286]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:32 np0005541914.localdomain sudo[60284]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:32 np0005541914.localdomain sudo[60346]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tgehpkwxyciuxtnazrpohhjghfopnrgu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:32 np0005541914.localdomain sudo[60346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:32 np0005541914.localdomain python3[60348]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:12:32 np0005541914.localdomain sudo[60346]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:32 np0005541914.localdomain sudo[60364]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tfzbzovwfljqvcsihtvrhqbikjudpnzj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:32 np0005541914.localdomain sudo[60364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:32 np0005541914.localdomain python3[60366]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:32 np0005541914.localdomain sudo[60364]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:33 np0005541914.localdomain sudo[60426]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-apylofxweewuexumemcrlucqzgwbuskf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:33 np0005541914.localdomain sudo[60426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:33 np0005541914.localdomain python3[60428]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:12:33 np0005541914.localdomain sudo[60426]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:33 np0005541914.localdomain sudo[60444]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bpvjkqyndndtmuejahpvwawnwfkrpzhc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:33 np0005541914.localdomain sudo[60444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:33 np0005541914.localdomain python3[60446]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:33 np0005541914.localdomain sudo[60444]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:34 np0005541914.localdomain sudo[60506]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xahcerchwwgawhzqpvwwxgysyknoyhym ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:34 np0005541914.localdomain sudo[60506]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:34 np0005541914.localdomain python3[60508]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:12:34 np0005541914.localdomain sudo[60506]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:34 np0005541914.localdomain sudo[60524]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tpkbvrgsxfenvhgnboibcvydlhpphgdb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:34 np0005541914.localdomain sudo[60524]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:34 np0005541914.localdomain sshd[60527]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:12:34 np0005541914.localdomain python3[60526]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:34 np0005541914.localdomain sudo[60524]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:34 np0005541914.localdomain sudo[60556]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-msrsxwuwimbyhwfadysmtbkvhfvonydc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:34 np0005541914.localdomain sudo[60556]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:35 np0005541914.localdomain python3[60558]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:12:35 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:12:35 np0005541914.localdomain systemd-sysv-generator[60588]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:12:35 np0005541914.localdomain systemd-rc-local-generator[60582]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:12:35 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:12:35 np0005541914.localdomain sudo[60556]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:35 np0005541914.localdomain sudo[60642]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iaulouuyygvuzkevzlgmfoewihzzmpdy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:35 np0005541914.localdomain sudo[60642]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:35 np0005541914.localdomain python3[60644]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:12:35 np0005541914.localdomain sudo[60642]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:35 np0005541914.localdomain sshd[60527]: Invalid user develop from 182.253.156.173 port 56134
Dec 02 08:12:36 np0005541914.localdomain sudo[60660]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xsjebpmqkiomjppkxkbinuxbbinpebgb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:36 np0005541914.localdomain sudo[60660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:36 np0005541914.localdomain python3[60662]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:36 np0005541914.localdomain sudo[60660]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:36 np0005541914.localdomain sshd[60527]: Received disconnect from 182.253.156.173 port 56134:11: Bye Bye [preauth]
Dec 02 08:12:36 np0005541914.localdomain sshd[60527]: Disconnected from invalid user develop 182.253.156.173 port 56134 [preauth]
Dec 02 08:12:36 np0005541914.localdomain sudo[60723]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bpavvpnrxdynhyosxnvqifwwzfjctgnu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:36 np0005541914.localdomain sudo[60723]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:36 np0005541914.localdomain python3[60725]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:12:36 np0005541914.localdomain sudo[60723]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:36 np0005541914.localdomain sudo[60741]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pwfphqrqvpfuryvlmskejsoqjtmhazaw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:36 np0005541914.localdomain sudo[60741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:37 np0005541914.localdomain python3[60743]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:37 np0005541914.localdomain sudo[60741]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:37 np0005541914.localdomain sudo[60771]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mmdmmvkbmnnssdgbqufawdcvubrxcloi ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:37 np0005541914.localdomain sudo[60771]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:37 np0005541914.localdomain python3[60773]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:12:37 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:12:37 np0005541914.localdomain systemd-rc-local-generator[60798]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:12:37 np0005541914.localdomain systemd-sysv-generator[60804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:12:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:12:37 np0005541914.localdomain systemd[1]: Starting Create netns directory...
Dec 02 08:12:37 np0005541914.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 02 08:12:37 np0005541914.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 02 08:12:37 np0005541914.localdomain systemd[1]: Finished Create netns directory.
Dec 02 08:12:37 np0005541914.localdomain sudo[60771]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:38 np0005541914.localdomain sudo[60829]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cgisuygijkmkqjhxfpezhqebaxgkdjtb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:38 np0005541914.localdomain sudo[60829]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:38 np0005541914.localdomain python3[60831]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 02 08:12:38 np0005541914.localdomain sudo[60829]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:38 np0005541914.localdomain sudo[60845]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cdhxdhmyqcdxipfzevgnojdmxpurkkez ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:38 np0005541914.localdomain sudo[60845]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:39 np0005541914.localdomain sudo[60845]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:40 np0005541914.localdomain sudo[60887]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-euysjmxswkdkpgcmaxdcdfkvuvisgnwh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:40 np0005541914.localdomain sudo[60887]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:40 np0005541914.localdomain python3[60889]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Dec 02 08:12:40 np0005541914.localdomain podman[61059]: 2025-12-02 08:12:40.905858877 +0000 UTC m=+0.057951329 container create 4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, container_name=ceilometer_init_log, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:12:40 np0005541914.localdomain podman[61058]: 2025-12-02 08:12:40.932968286 +0000 UTC m=+0.088052867 container create 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.12, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, tcib_managed=true)
Dec 02 08:12:40 np0005541914.localdomain systemd[1]: Started libpod-conmon-4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458.scope.
Dec 02 08:12:40 np0005541914.localdomain podman[61076]: 2025-12-02 08:12:40.950813448 +0000 UTC m=+0.092287033 container create 514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, release=1761123044, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, container_name=nova_statedir_owner, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:12:40 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:12:40 np0005541914.localdomain systemd[1]: Started libpod-conmon-2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.scope.
Dec 02 08:12:40 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9f966c4c02ca72bf571aaf0656247c88b73268323ddd77e58521b9ea3db73d1/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:40 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:12:40 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13e199db7335dd51d53d563216fcc1a3ed75eba14190a583a84b8f73b6c9d42/merged/scripts supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:40 np0005541914.localdomain podman[61081]: 2025-12-02 08:12:40.970885646 +0000 UTC m=+0.106286030 container create fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, container_name=nova_virtlogd_wrapper, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container)
Dec 02 08:12:40 np0005541914.localdomain podman[61059]: 2025-12-02 08:12:40.873288066 +0000 UTC m=+0.025380518 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 02 08:12:40 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c13e199db7335dd51d53d563216fcc1a3ed75eba14190a583a84b8f73b6c9d42/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:40 np0005541914.localdomain podman[61058]: 2025-12-02 08:12:40.879603115 +0000 UTC m=+0.034687716 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 02 08:12:40 np0005541914.localdomain podman[61059]: 2025-12-02 08:12:40.982395679 +0000 UTC m=+0.134488131 container init 4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, container_name=ceilometer_init_log, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 02 08:12:40 np0005541914.localdomain podman[61076]: 2025-12-02 08:12:40.888311805 +0000 UTC m=+0.029785430 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 02 08:12:40 np0005541914.localdomain podman[61059]: 2025-12-02 08:12:40.990001436 +0000 UTC m=+0.142093878 container start 4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:12:40 np0005541914.localdomain systemd[1]: Started libpod-conmon-514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326.scope.
Dec 02 08:12:40 np0005541914.localdomain python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer
Dec 02 08:12:40 np0005541914.localdomain systemd[1]: libpod-4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458.scope: Deactivated successfully.
Dec 02 08:12:40 np0005541914.localdomain systemd[1]: Started libpod-conmon-fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b.scope.
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:12:41 np0005541914.localdomain podman[61081]: 2025-12-02 08:12:40.903025303 +0000 UTC m=+0.038425677 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:12:41 np0005541914.localdomain podman[61077]: 2025-12-02 08:12:40.903904129 +0000 UTC m=+0.040812177 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f9fab5806f96664fad9b3e3421bfde63bb6a7412470abd2bfea5e9a57acc82/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f9fab5806f96664fad9b3e3421bfde63bb6a7412470abd2bfea5e9a57acc82/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47f9fab5806f96664fad9b3e3421bfde63bb6a7412470abd2bfea5e9a57acc82/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain podman[61076]: 2025-12-02 08:12:41.01296837 +0000 UTC m=+0.154442005 container init 514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, version=17.1.12, architecture=x86_64)
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:12:41 np0005541914.localdomain podman[61076]: 2025-12-02 08:12:41.019724022 +0000 UTC m=+0.161197647 container start 514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, name=rhosp17/openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, version=17.1.12, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=nova_statedir_owner, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:12:41 np0005541914.localdomain podman[61076]: 2025-12-02 08:12:41.020019191 +0000 UTC m=+0.161492816 container attach 514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, distribution-scope=public, vendor=Red Hat, Inc., config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:36:58Z, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_statedir_owner, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain podman[61077]: 2025-12-02 08:12:41.028665639 +0000 UTC m=+0.165573687 container create 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, config_id=tripleo_step3, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git)
Dec 02 08:12:41 np0005541914.localdomain podman[61140]: 2025-12-02 08:12:41.064580999 +0000 UTC m=+0.057882066 container died 4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, vcs-type=git, distribution-scope=public, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step3, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_init_log, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:12:41 np0005541914.localdomain podman[61076]: 2025-12-02 08:12:41.067079254 +0000 UTC m=+0.208552879 container died 514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-19T00:36:58Z, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, container_name=nova_statedir_owner, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public)
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started libpod-conmon-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope.
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: libpod-514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326.scope: Deactivated successfully.
Dec 02 08:12:41 np0005541914.localdomain podman[61081]: 2025-12-02 08:12:41.079175424 +0000 UTC m=+0.214575778 container init fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=nova_virtlogd_wrapper, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:12:41 np0005541914.localdomain podman[61058]: 2025-12-02 08:12:41.101285884 +0000 UTC m=+0.256370545 container init 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team)
Dec 02 08:12:41 np0005541914.localdomain sudo[61190]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:12:41 np0005541914.localdomain sudo[61199]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:12:41 np0005541914.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Created slice User Slice of UID 0.
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 02 08:12:41 np0005541914.localdomain podman[61081]: 2025-12-02 08:12:41.140767701 +0000 UTC m=+0.276168055 container start fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, container_name=nova_virtlogd_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, tcib_managed=true)
Dec 02 08:12:41 np0005541914.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 02 08:12:41 np0005541914.localdomain podman[61176]: 2025-12-02 08:12:41.14478186 +0000 UTC m=+0.067578716 container cleanup 514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, container_name=nova_statedir_owner, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute)
Dec 02 08:12:41 np0005541914.localdomain python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: libpod-conmon-514e5582c5cb38fa58bb91dfc6ec9e95c95e93e763addfc95d2b5cbe820f4326.scope: Deactivated successfully.
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Starting User Manager for UID 0...
Dec 02 08:12:41 np0005541914.localdomain python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:12:41 np0005541914.localdomain podman[61058]: 2025-12-02 08:12:41.240565976 +0000 UTC m=+0.395650557 container start 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 02 08:12:41 np0005541914.localdomain python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d31718fcd17fdeee6489534105191c7a --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Dec 02 08:12:41 np0005541914.localdomain podman[61201]: 2025-12-02 08:12:41.280542707 +0000 UTC m=+0.141323394 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, container_name=collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc.)
Dec 02 08:12:41 np0005541914.localdomain podman[61077]: 2025-12-02 08:12:41.291046561 +0000 UTC m=+0.427954599 container init 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git)
Dec 02 08:12:41 np0005541914.localdomain podman[61077]: 2025-12-02 08:12:41.297656898 +0000 UTC m=+0.434564936 container start 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, version=17.1.12, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-rsyslog, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com)
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Queued start job for default target Main User Target.
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Created slice User Application Slice.
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Started Daily Cleanup of User's Temporary Directories.
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Reached target Paths.
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Reached target Timers.
Dec 02 08:12:41 np0005541914.localdomain python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=96606bb2d91ec59ed336cbd6010f1851 --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Starting D-Bus User Message Bus Socket...
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Starting Create User's Volatile Files and Directories...
Dec 02 08:12:41 np0005541914.localdomain sudo[61293]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:12:41 np0005541914.localdomain sudo[61293]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Listening on D-Bus User Message Bus Socket.
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Finished Create User's Volatile Files and Directories.
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Reached target Sockets.
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Reached target Basic System.
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started User Manager for UID 0.
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Reached target Main User Target.
Dec 02 08:12:41 np0005541914.localdomain systemd[61217]: Startup finished in 117ms.
Dec 02 08:12:41 np0005541914.localdomain podman[61201]: 2025-12-02 08:12:41.312653484 +0000 UTC m=+0.173434121 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, version=17.1.12, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, release=1761123044)
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started Session c1 of User root.
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started Session c2 of User root.
Dec 02 08:12:41 np0005541914.localdomain sudo[61190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:12:41 np0005541914.localdomain sudo[61199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:12:41 np0005541914.localdomain podman[61201]: unhealthy
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Failed with result 'exit-code'.
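Note on the two entries above: the bare "unhealthy" line is podman's output from the collectd container's configured healthcheck ('test': '/openstack/healthcheck'), and the transient unit 2412a810….service that fails with status=1/FAILURE is the systemd wrapper around that healthcheck run, so the failure reflects the healthcheck's exit code rather than the collectd process itself. A minimal sketch for re-running the check by hand, assuming podman is on PATH and the container name collectd from the log above:

    import subprocess

    # Re-run the container's configured healthcheck. podman prints
    # "unhealthy" and exits non-zero when /openstack/healthcheck fails
    # inside the container, which is what produced the journal lines above.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "collectd"],
        capture_output=True, text=True,
    )
    print(result.stdout.strip() or "healthy", "exit code:", result.returncode)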
Dec 02 08:12:41 np0005541914.localdomain sudo[61293]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: libpod-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully.
Dec 02 08:12:41 np0005541914.localdomain sudo[61199]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: session-c2.scope: Deactivated successfully.
Dec 02 08:12:41 np0005541914.localdomain sudo[61190]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: session-c1.scope: Deactivated successfully.
Dec 02 08:12:41 np0005541914.localdomain podman[61315]: 2025-12-02 08:12:41.430285141 +0000 UTC m=+0.041813278 container died 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-type=git, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, container_name=rsyslog, name=rhosp17/openstack-rsyslog, config_id=tripleo_step3, tcib_managed=true, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4)
Dec 02 08:12:41 np0005541914.localdomain podman[61144]: 2025-12-02 08:12:41.443791084 +0000 UTC m=+0.437101562 container cleanup 4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step3, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_init_log, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public)
Dec 02 08:12:41 np0005541914.localdomain podman[61315]: 2025-12-02 08:12:41.447088732 +0000 UTC m=+0.058616839 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, container_name=rsyslog, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, tcib_managed=true, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044)
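The journal interleaves podman lifecycle events (exec_died, died, cleanup, create, init, start) from several containers. The same sequence can be pulled from podman's own event log; a small sketch, assuming podman is on PATH and its events backend has retained the entries:

    import subprocess

    # List recorded lifecycle events (create, init, start, died, cleanup)
    # for the rsyslog container; --stream=false makes podman exit after
    # printing what is already in its event log instead of following it.
    events = subprocess.run(
        ["podman", "events", "--stream=false", "--filter", "container=rsyslog"],
        capture_output=True, text=True,
    )
    print(events.stdout)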
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: libpod-conmon-4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458.scope: Deactivated successfully.
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: libpod-conmon-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully.
Dec 02 08:12:41 np0005541914.localdomain podman[61415]: 2025-12-02 08:12:41.624418859 +0000 UTC m=+0.053458035 container create c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started libpod-conmon-c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a.scope.
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14a2782138c084b8d1f9a2d1c3241237dbc098d9496c81144c959b54b35a260/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14a2782138c084b8d1f9a2d1c3241237dbc098d9496c81144c959b54b35a260/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14a2782138c084b8d1f9a2d1c3241237dbc098d9496c81144c959b54b35a260/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f14a2782138c084b8d1f9a2d1c3241237dbc098d9496c81144c959b54b35a260/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff)
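The repeated kernel lines about timestamps until 2038 indicate that the xfs filesystem backing these bind mounts was created without the bigtime feature, so its inode timestamps stop at 2038. One way to confirm, sketched under the assumptions that xfsprogs is installed and that /var is the xfs mount point holding /var/lib/containers (adjust the path if the layout differs):

    import subprocess

    # xfs_info reports bigtime=0 for filesystems whose timestamps are
    # limited to 2038 and bigtime=1 for the extended range; the path is
    # an assumption and should be the xfs mount point in question.
    info = subprocess.run(
        ["xfs_info", "/var"],
        capture_output=True, text=True,
    ).stdout
    if "bigtime=1" in info:
        print("bigtime enabled: no 2038 timestamp limit")
    else:
        print("bigtime disabled: timestamps limited to 2038")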
Dec 02 08:12:41 np0005541914.localdomain podman[61415]: 2025-12-02 08:12:41.676320576 +0000 UTC m=+0.105359762 container init c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com)
Dec 02 08:12:41 np0005541914.localdomain podman[61415]: 2025-12-02 08:12:41.681273854 +0000 UTC m=+0.110313050 container start c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 02 08:12:41 np0005541914.localdomain podman[61415]: 2025-12-02 08:12:41.603056992 +0000 UTC m=+0.032096178 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:12:41 np0005541914.localdomain podman[61471]: 2025-12-02 08:12:41.798315713 +0000 UTC m=+0.078521362 container create c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, container_name=nova_virtsecretd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-11-19T00:35:22Z, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64)
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started libpod-conmon-c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10.scope.
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:12:41 np0005541914.localdomain podman[61471]: 2025-12-02 08:12:41.754654321 +0000 UTC m=+0.034860000 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:41 np0005541914.localdomain podman[61471]: 2025-12-02 08:12:41.864200267 +0000 UTC m=+0.144405916 container init c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtsecretd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public)
Dec 02 08:12:41 np0005541914.localdomain podman[61471]: 2025-12-02 08:12:41.874730701 +0000 UTC m=+0.154936350 container start c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, container_name=nova_virtsecretd, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:12:41 np0005541914.localdomain python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
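The PODMAN-CONTAINER-DEBUG entry above records the complete podman run command that ansible-tripleo_container_manage assembled for nova_virtsecretd from the config_data shown in the container labels. A simplified, hypothetical illustration of that mapping (not the actual tripleo_ansible code, and with the volume list truncated):

    # Hypothetical, simplified mapping only; the real module handles many
    # more keys (environment, labels, log driver, security_opt, ...).
    config_data = {
        "image": "registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1",
        "net": "host",
        "pid": "host",
        "pids_limit": 65536,
        "privileged": True,
        "ulimit": ["nofile=131072", "nproc=126960"],
        "volumes": ["/etc/hosts:/etc/hosts:ro", "/dev/log:/dev/log"],  # truncated
    }

    cmd = ["podman", "run", "--name", "nova_virtsecretd", "--detach=True",
           "--network", config_data["net"], "--pid", config_data["pid"],
           "--pids-limit", str(config_data["pids_limit"]),
           "--privileged={}".format(config_data["privileged"])]
    for ulimit in config_data["ulimit"]:
        cmd += ["--ulimit", ulimit]
    for volume in config_data["volumes"]:
        cmd += ["--volume", volume]
    cmd.append(config_data["image"])
    print(" ".join(cmd))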
Dec 02 08:12:41 np0005541914.localdomain sudo[61491]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a9f966c4c02ca72bf571aaf0656247c88b73268323ddd77e58521b9ea3db73d1-merged.mount: Deactivated successfully.
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4dcb0adbd47065affb3904537f282a8b7da0bef27e4c6012a1f1e96596066458-userdata-shm.mount: Deactivated successfully.
Dec 02 08:12:41 np0005541914.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 02 08:12:41 np0005541914.localdomain systemd[1]: Started Session c3 of User root.
Dec 02 08:12:41 np0005541914.localdomain sudo[61491]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:12:42 np0005541914.localdomain sudo[61491]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:42 np0005541914.localdomain systemd[1]: session-c3.scope: Deactivated successfully.
Dec 02 08:12:42 np0005541914.localdomain podman[61606]: 2025-12-02 08:12:42.321320324 +0000 UTC m=+0.080162050 container create f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:12:42 np0005541914.localdomain systemd[1]: Started libpod-conmon-f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.scope.
Dec 02 08:12:42 np0005541914.localdomain podman[61619]: 2025-12-02 08:12:42.372784209 +0000 UTC m=+0.101291711 container create 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.12, config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtnodedevd, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 02 08:12:42 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:12:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f5c4d65539870ee2bafb1f7e39854f191dd3f1ae459b319446f5932294db9e/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63f5c4d65539870ee2bafb1f7e39854f191dd3f1ae459b319446f5932294db9e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:42 np0005541914.localdomain podman[61606]: 2025-12-02 08:12:42.289563008 +0000 UTC m=+0.048404754 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 02 08:12:42 np0005541914.localdomain systemd[1]: Started libpod-conmon-380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3.scope.
Dec 02 08:12:42 np0005541914.localdomain podman[61619]: 2025-12-02 08:12:42.319929483 +0000 UTC m=+0.048437015 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:12:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:12:42 np0005541914.localdomain podman[61606]: 2025-12-02 08:12:42.425612004 +0000 UTC m=+0.184453730 container init f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vendor=Red Hat, Inc.)
Dec 02 08:12:42 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:12:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:42 np0005541914.localdomain sudo[61648]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:12:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:12:42 np0005541914.localdomain podman[61619]: 2025-12-02 08:12:42.447170486 +0000 UTC m=+0.175677948 container init 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, release=1761123044, build-date=2025-11-19T00:35:22Z, batch=17.1_20251118.1, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtnodedevd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:12:42 np0005541914.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 02 08:12:42 np0005541914.localdomain podman[61619]: 2025-12-02 08:12:42.457974488 +0000 UTC m=+0.186481940 container start 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtnodedevd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3)
Dec 02 08:12:42 np0005541914.localdomain python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:12:42 np0005541914.localdomain systemd[1]: Started Session c4 of User root.
Dec 02 08:12:42 np0005541914.localdomain sudo[61648]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:12:42 np0005541914.localdomain podman[61606]: 2025-12-02 08:12:42.502910128 +0000 UTC m=+0.261751844 container start f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, release=1761123044, name=rhosp17/openstack-iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:12:42 np0005541914.localdomain python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=d89676d7ec0a7c13ef9894fdb26c6e3a --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Dec 02 08:12:42 np0005541914.localdomain sudo[61657]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:12:42 np0005541914.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 02 08:12:42 np0005541914.localdomain systemd[1]: Started Session c5 of User root.
Dec 02 08:12:42 np0005541914.localdomain sudo[61657]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:12:42 np0005541914.localdomain sudo[61648]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:42 np0005541914.localdomain podman[61649]: 2025-12-02 08:12:42.552889598 +0000 UTC m=+0.096416115 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, container_name=iscsid, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Dec 02 08:12:42 np0005541914.localdomain systemd[1]: session-c4.scope: Deactivated successfully.
Dec 02 08:12:42 np0005541914.localdomain kernel: Loading iSCSI transport class v2.0-870.
Dec 02 08:12:42 np0005541914.localdomain sudo[61657]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:42 np0005541914.localdomain systemd[1]: session-c5.scope: Deactivated successfully.
Dec 02 08:12:42 np0005541914.localdomain podman[61649]: 2025-12-02 08:12:42.62273794 +0000 UTC m=+0.166264427 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, container_name=iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:12:42 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:12:43 np0005541914.localdomain podman[61787]: 2025-12-02 08:12:43.058551852 +0000 UTC m=+0.092985923 container create f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, io.openshift.expose-services=, container_name=nova_virtstoraged, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z)
Dec 02 08:12:43 np0005541914.localdomain podman[61787]: 2025-12-02 08:12:43.009780838 +0000 UTC m=+0.044214939 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:12:43 np0005541914.localdomain systemd[1]: Started libpod-conmon-f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379.scope.
Dec 02 08:12:43 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain podman[61787]: 2025-12-02 08:12:43.164576333 +0000 UTC m=+0.199010394 container init f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, container_name=nova_virtstoraged, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container)
Dec 02 08:12:43 np0005541914.localdomain podman[61787]: 2025-12-02 08:12:43.178716584 +0000 UTC m=+0.213150645 container start f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:12:43 np0005541914.localdomain python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:12:43 np0005541914.localdomain sudo[61807]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:12:43 np0005541914.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 02 08:12:43 np0005541914.localdomain systemd[1]: Started Session c6 of User root.
Dec 02 08:12:43 np0005541914.localdomain sudo[61807]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:12:43 np0005541914.localdomain sudo[61807]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:43 np0005541914.localdomain systemd[1]: session-c6.scope: Deactivated successfully.
Dec 02 08:12:43 np0005541914.localdomain podman[61890]: 2025-12-02 08:12:43.595931352 +0000 UTC m=+0.070031559 container create cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, architecture=x86_64, container_name=nova_virtqemud, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, version=17.1.12)
Dec 02 08:12:43 np0005541914.localdomain systemd[1]: Started libpod-conmon-cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7.scope.
Dec 02 08:12:43 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:12:43 np0005541914.localdomain podman[61890]: 2025-12-02 08:12:43.553170618 +0000 UTC m=+0.027270815 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:43 np0005541914.localdomain podman[61890]: 2025-12-02 08:12:43.660962642 +0000 UTC m=+0.135062849 container init cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Dec 02 08:12:43 np0005541914.localdomain podman[61890]: 2025-12-02 08:12:43.671729032 +0000 UTC m=+0.145829249 container start cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, release=1761123044, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12)
Dec 02 08:12:43 np0005541914.localdomain python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:12:43 np0005541914.localdomain sudo[61908]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:12:43 np0005541914.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 02 08:12:43 np0005541914.localdomain systemd[1]: Started Session c7 of User root.
Dec 02 08:12:43 np0005541914.localdomain sudo[61908]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:12:43 np0005541914.localdomain sudo[61908]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:43 np0005541914.localdomain systemd[1]: session-c7.scope: Deactivated successfully.
Dec 02 08:12:44 np0005541914.localdomain podman[61991]: 2025-12-02 08:12:44.109046859 +0000 UTC m=+0.084012346 container create f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, url=https://www.redhat.com, container_name=nova_virtproxyd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step3, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, vcs-type=git)
Dec 02 08:12:44 np0005541914.localdomain systemd[1]: Started libpod-conmon-f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358.scope.
Dec 02 08:12:44 np0005541914.localdomain podman[61991]: 2025-12-02 08:12:44.061223613 +0000 UTC m=+0.036189120 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:12:44 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:12:44 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:44 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:44 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:44 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:44 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:44 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:44 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 08:12:44 np0005541914.localdomain podman[61991]: 2025-12-02 08:12:44.18690402 +0000 UTC m=+0.161869507 container init f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, release=1761123044, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20251118.1)
Dec 02 08:12:44 np0005541914.localdomain podman[61991]: 2025-12-02 08:12:44.197082554 +0000 UTC m=+0.172048041 container start f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:35:22Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, container_name=nova_virtproxyd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 02 08:12:44 np0005541914.localdomain python3[60889]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:12:44 np0005541914.localdomain sudo[62010]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:12:44 np0005541914.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 02 08:12:44 np0005541914.localdomain systemd[1]: Started Session c8 of User root.
Dec 02 08:12:44 np0005541914.localdomain sudo[62010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:12:44 np0005541914.localdomain sudo[62010]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:44 np0005541914.localdomain systemd[1]: session-c8.scope: Deactivated successfully.
Dec 02 08:12:44 np0005541914.localdomain sudo[60887]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:44 np0005541914.localdomain sudo[62070]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-esklvzotkangwihzajbfmoryekfdrcsg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:44 np0005541914.localdomain sudo[62070]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:44 np0005541914.localdomain python3[62072]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:44 np0005541914.localdomain sudo[62070]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:44 np0005541914.localdomain sudo[62086]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvfxqmcmcpgociplzygupattmqvzscbe ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:44 np0005541914.localdomain sudo[62086]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:45 np0005541914.localdomain python3[62088]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:45 np0005541914.localdomain sudo[62086]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:45 np0005541914.localdomain sudo[62102]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-akgnmcvosdzhvjtzfrkkdsgeryqkldkg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:45 np0005541914.localdomain sudo[62102]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:45 np0005541914.localdomain python3[62104]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:45 np0005541914.localdomain sudo[62102]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:45 np0005541914.localdomain sudo[62118]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twhndpzxsvsaanqvsrtddpymgslwehvl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:45 np0005541914.localdomain sudo[62118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:45 np0005541914.localdomain python3[62120]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:45 np0005541914.localdomain sudo[62118]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:45 np0005541914.localdomain sudo[62134]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aedmjdgmsbsymaynmovgcytbgkwisaki ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:45 np0005541914.localdomain sudo[62134]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:45 np0005541914.localdomain python3[62136]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:45 np0005541914.localdomain sudo[62134]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:45 np0005541914.localdomain sudo[62150]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhujboycjozcwtuzaegkrdwswtidpsyz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:45 np0005541914.localdomain sudo[62150]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:46 np0005541914.localdomain python3[62152]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:46 np0005541914.localdomain sudo[62150]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:46 np0005541914.localdomain sudo[62166]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxllatrbapwcrjsjolnxfxceeyscyppb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:46 np0005541914.localdomain sudo[62166]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:46 np0005541914.localdomain python3[62168]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:46 np0005541914.localdomain sudo[62166]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:46 np0005541914.localdomain sudo[62182]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zimuyqfxgzhxjfugbxgvsqwznwdcalhs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:46 np0005541914.localdomain sudo[62182]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:46 np0005541914.localdomain python3[62184]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:46 np0005541914.localdomain sudo[62182]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:46 np0005541914.localdomain sudo[62198]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jgzwfeqfyaitjenwwtyxtsxcctlncvkx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:46 np0005541914.localdomain sudo[62198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:46 np0005541914.localdomain python3[62200]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:46 np0005541914.localdomain sudo[62198]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:46 np0005541914.localdomain sudo[62215]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zobuyskiuhdvyvvolqepsvggzlabucbo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:46 np0005541914.localdomain sudo[62215]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:47 np0005541914.localdomain python3[62217]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:12:47 np0005541914.localdomain sudo[62215]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:47 np0005541914.localdomain sudo[62231]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vsupxqheztbzjfstzdsnrmuqohqdxrbr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:47 np0005541914.localdomain sudo[62231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:47 np0005541914.localdomain python3[62233]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:12:47 np0005541914.localdomain sudo[62231]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:47 np0005541914.localdomain sudo[62247]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xeeqfvbhpvdjwlxvzxcpszubwdkaiwnz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:47 np0005541914.localdomain sudo[62247]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:47 np0005541914.localdomain python3[62249]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:12:47 np0005541914.localdomain sudo[62247]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:47 np0005541914.localdomain sudo[62263]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjuwpvjfdrimhgrsnudctwtaecuinzjm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:47 np0005541914.localdomain sudo[62263]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:47 np0005541914.localdomain python3[62265]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:12:47 np0005541914.localdomain sudo[62263]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:48 np0005541914.localdomain sudo[62279]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mjfgnoallijeadclpjlykzogkzxdsiir ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:48 np0005541914.localdomain sudo[62279]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:48 np0005541914.localdomain python3[62281]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:12:48 np0005541914.localdomain sudo[62279]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:48 np0005541914.localdomain sudo[62295]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gljiorlrqekgynmxcffrdjkxgghjohcy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:48 np0005541914.localdomain sudo[62295]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:48 np0005541914.localdomain python3[62297]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:12:48 np0005541914.localdomain sudo[62295]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:48 np0005541914.localdomain sudo[62311]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sbvgwjaatjisnvvwfifwgeoxygxjcxsc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:48 np0005541914.localdomain sudo[62311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:48 np0005541914.localdomain python3[62313]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:12:48 np0005541914.localdomain sudo[62311]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:48 np0005541914.localdomain sudo[62327]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lvaprtkaorrixfsotatzfuzzjxywntfs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:48 np0005541914.localdomain sudo[62327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:48 np0005541914.localdomain python3[62329]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:12:48 np0005541914.localdomain sudo[62327]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:48 np0005541914.localdomain sudo[62343]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxsiuueybtlbhwulhqqgsinfiqmmfjdp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:48 np0005541914.localdomain sudo[62343]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:49 np0005541914.localdomain python3[62345]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:12:49 np0005541914.localdomain sudo[62343]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:49 np0005541914.localdomain sudo[62404]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-atdxzmvwhkcqjlsuxiottbjouklunbro ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:49 np0005541914.localdomain sudo[62404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:49 np0005541914.localdomain python3[62406]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:49 np0005541914.localdomain sudo[62404]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:50 np0005541914.localdomain sudo[62433]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fsxwcuebskaxgfkdwlaqimjtvwueebql ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:50 np0005541914.localdomain sudo[62433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:50 np0005541914.localdomain python3[62435]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:50 np0005541914.localdomain sudo[62433]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:50 np0005541914.localdomain sudo[62462]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtheksmfchpdlwdhjnvmhnyudxbntybp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:50 np0005541914.localdomain sudo[62462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:50 np0005541914.localdomain python3[62464]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:50 np0005541914.localdomain sudo[62462]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:51 np0005541914.localdomain sudo[62491]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gneritrbqdxslhhctzlycxksqkmkytgb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:51 np0005541914.localdomain sudo[62491]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:51 np0005541914.localdomain python3[62493]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:51 np0005541914.localdomain sudo[62491]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:51 np0005541914.localdomain sudo[62520]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qqfgkacvqtxwgmjgfdghxeqvvukrfsyh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:51 np0005541914.localdomain sudo[62520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:51 np0005541914.localdomain python3[62522]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:51 np0005541914.localdomain sudo[62520]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:52 np0005541914.localdomain sudo[62549]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ajnrolhrxbrvpjmpiostalgrwnfdprlh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:52 np0005541914.localdomain sudo[62549]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:52 np0005541914.localdomain python3[62551]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:52 np0005541914.localdomain sudo[62549]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:52 np0005541914.localdomain sudo[62578]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yxjdepxryxpclnounpbabbbtbwobanqc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:52 np0005541914.localdomain sudo[62578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:52 np0005541914.localdomain python3[62580]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:52 np0005541914.localdomain sudo[62578]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:53 np0005541914.localdomain sudo[62607]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjqeuogvbgpdlhdszxzywjncayzweihn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:53 np0005541914.localdomain sudo[62607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:53 np0005541914.localdomain python3[62609]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:53 np0005541914.localdomain sudo[62607]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:53 np0005541914.localdomain sudo[62636]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-itdxvsnecgazkoeuooudiqijibylcghb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:12:53 np0005541914.localdomain sudo[62636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:53 np0005541914.localdomain podman[62638]: 2025-12-02 08:12:53.947623032 +0000 UTC m=+0.092531031 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, vcs-type=git, build-date=2025-11-18T22:49:46Z)
Dec 02 08:12:53 np0005541914.localdomain python3[62639]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663169.18788-99752-254175805951974/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:12:53 np0005541914.localdomain sudo[62636]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:54 np0005541914.localdomain sudo[62680]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iesubvgwrbqdqbnlbugtjhqjpkbwwzmb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:54 np0005541914.localdomain sudo[62680]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:54 np0005541914.localdomain podman[62638]: 2025-12-02 08:12:54.199339095 +0000 UTC m=+0.344247114 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:12:54 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:12:54 np0005541914.localdomain python3[62682]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 08:12:54 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:12:54 np0005541914.localdomain systemd-rc-local-generator[62703]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:12:54 np0005541914.localdomain systemd-sysv-generator[62707]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:12:54 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:12:54 np0005541914.localdomain systemd[1]: Stopping User Manager for UID 0...
Dec 02 08:12:54 np0005541914.localdomain systemd[61217]: Activating special unit Exit the Session...
Dec 02 08:12:54 np0005541914.localdomain systemd[61217]: Stopped target Main User Target.
Dec 02 08:12:54 np0005541914.localdomain systemd[61217]: Stopped target Basic System.
Dec 02 08:12:54 np0005541914.localdomain systemd[61217]: Stopped target Paths.
Dec 02 08:12:54 np0005541914.localdomain systemd[61217]: Stopped target Sockets.
Dec 02 08:12:54 np0005541914.localdomain systemd[61217]: Stopped target Timers.
Dec 02 08:12:54 np0005541914.localdomain systemd[61217]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 02 08:12:54 np0005541914.localdomain systemd[61217]: Closed D-Bus User Message Bus Socket.
Dec 02 08:12:54 np0005541914.localdomain systemd[61217]: Stopped Create User's Volatile Files and Directories.
Dec 02 08:12:54 np0005541914.localdomain systemd[61217]: Removed slice User Application Slice.
Dec 02 08:12:54 np0005541914.localdomain systemd[61217]: Reached target Shutdown.
Dec 02 08:12:54 np0005541914.localdomain systemd[61217]: Finished Exit the Session.
Dec 02 08:12:54 np0005541914.localdomain systemd[61217]: Reached target Exit the Session.
Dec 02 08:12:54 np0005541914.localdomain systemd[1]: user@0.service: Deactivated successfully.
Dec 02 08:12:54 np0005541914.localdomain systemd[1]: Stopped User Manager for UID 0.
Dec 02 08:12:54 np0005541914.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 02 08:12:54 np0005541914.localdomain sudo[62680]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:54 np0005541914.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 02 08:12:54 np0005541914.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 02 08:12:54 np0005541914.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 02 08:12:54 np0005541914.localdomain systemd[1]: Removed slice User Slice of UID 0.
Dec 02 08:12:54 np0005541914.localdomain sudo[62734]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ceukixdbeiyxzqviuotdbwbalsbpmzwd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:54 np0005541914.localdomain sudo[62734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:55 np0005541914.localdomain python3[62736]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:12:55 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:12:55 np0005541914.localdomain systemd-sysv-generator[62768]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:12:55 np0005541914.localdomain systemd-rc-local-generator[62765]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:12:55 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:12:55 np0005541914.localdomain systemd[1]: Starting collectd container...
Dec 02 08:12:55 np0005541914.localdomain systemd[1]: Started collectd container.
Dec 02 08:12:55 np0005541914.localdomain sudo[62734]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:55 np0005541914.localdomain sudo[62799]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jghyavjznomasloearehvusvvfokzahn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:55 np0005541914.localdomain sudo[62799]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:56 np0005541914.localdomain python3[62801]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:12:56 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:12:56 np0005541914.localdomain systemd-rc-local-generator[62825]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:12:56 np0005541914.localdomain systemd-sysv-generator[62831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:12:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:12:56 np0005541914.localdomain systemd[1]: Starting iscsid container...
Dec 02 08:12:56 np0005541914.localdomain systemd[1]: Started iscsid container.
Dec 02 08:12:56 np0005541914.localdomain sudo[62799]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:57 np0005541914.localdomain sudo[62865]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iplwkipgicoxcuyqleqierdshgfiqeoc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:57 np0005541914.localdomain sudo[62865]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:57 np0005541914.localdomain python3[62867]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:12:57 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:12:57 np0005541914.localdomain systemd-rc-local-generator[62896]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:12:57 np0005541914.localdomain systemd-sysv-generator[62900]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:12:57 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:12:57 np0005541914.localdomain systemd[1]: Starting nova_virtlogd_wrapper container...
Dec 02 08:12:57 np0005541914.localdomain systemd[1]: Started nova_virtlogd_wrapper container.
Dec 02 08:12:57 np0005541914.localdomain sudo[62865]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:57 np0005541914.localdomain sshd[62918]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:12:58 np0005541914.localdomain sudo[62933]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fuprnxxzqsuvnilljtnxxpoevyddfhtw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:58 np0005541914.localdomain sudo[62933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:58 np0005541914.localdomain python3[62935]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:12:58 np0005541914.localdomain sshd[62918]: Invalid user sol from 45.148.10.240 port 35604
Dec 02 08:12:58 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:12:58 np0005541914.localdomain sshd[62918]: Connection closed by invalid user sol 45.148.10.240 port 35604 [preauth]
Dec 02 08:12:58 np0005541914.localdomain systemd-sysv-generator[62966]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:12:58 np0005541914.localdomain systemd-rc-local-generator[62960]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:12:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:12:58 np0005541914.localdomain systemd[1]: Starting nova_virtnodedevd container...
Dec 02 08:12:58 np0005541914.localdomain tripleo-start-podman-container[62975]: Creating additional drop-in dependency for "nova_virtnodedevd" (380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3)
Dec 02 08:12:58 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:12:58 np0005541914.localdomain systemd-rc-local-generator[63031]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:12:58 np0005541914.localdomain systemd-sysv-generator[63036]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:12:59 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:12:59 np0005541914.localdomain systemd[1]: Started nova_virtnodedevd container.
Dec 02 08:12:59 np0005541914.localdomain sudo[62933]: pam_unix(sudo:session): session closed for user root
Dec 02 08:12:59 np0005541914.localdomain sudo[63056]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvwqjzfvctwcvyowlfyijipgtszroyvw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:12:59 np0005541914.localdomain sudo[63056]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:12:59 np0005541914.localdomain python3[63058]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:12:59 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:13:00 np0005541914.localdomain systemd-rc-local-generator[63085]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:13:00 np0005541914.localdomain systemd-sysv-generator[63088]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:13:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:13:00 np0005541914.localdomain systemd[1]: Starting nova_virtproxyd container...
Dec 02 08:13:00 np0005541914.localdomain tripleo-start-podman-container[63098]: Creating additional drop-in dependency for "nova_virtproxyd" (f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358)
Dec 02 08:13:00 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:13:00 np0005541914.localdomain systemd-rc-local-generator[63155]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:13:00 np0005541914.localdomain systemd-sysv-generator[63158]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:13:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:13:00 np0005541914.localdomain systemd[1]: Started nova_virtproxyd container.
Dec 02 08:13:00 np0005541914.localdomain sudo[63056]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:02 np0005541914.localdomain sudo[63180]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hyhlgqkjxamujfkbdggjbhholyugsdqw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:13:02 np0005541914.localdomain sudo[63180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:13:02 np0005541914.localdomain python3[63182]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:13:02 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:13:02 np0005541914.localdomain systemd-rc-local-generator[63209]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:13:02 np0005541914.localdomain systemd-sysv-generator[63213]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:13:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:13:03 np0005541914.localdomain systemd[1]: Starting nova_virtqemud container...
Dec 02 08:13:03 np0005541914.localdomain tripleo-start-podman-container[63221]: Creating additional drop-in dependency for "nova_virtqemud" (cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7)
Dec 02 08:13:03 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:13:03 np0005541914.localdomain systemd-rc-local-generator[63277]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:13:03 np0005541914.localdomain systemd-sysv-generator[63282]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:13:03 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:13:03 np0005541914.localdomain systemd[1]: Started nova_virtqemud container.
Dec 02 08:13:03 np0005541914.localdomain sudo[63180]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:03 np0005541914.localdomain sudo[63304]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hsdrpxgttohfhidryicyasbupkitbzic ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:13:03 np0005541914.localdomain sudo[63304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:13:04 np0005541914.localdomain python3[63306]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:13:04 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:13:04 np0005541914.localdomain systemd-sysv-generator[63335]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:13:04 np0005541914.localdomain systemd-rc-local-generator[63331]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:13:04 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:13:04 np0005541914.localdomain systemd[1]: Starting nova_virtsecretd container...
Dec 02 08:13:04 np0005541914.localdomain tripleo-start-podman-container[63345]: Creating additional drop-in dependency for "nova_virtsecretd" (c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10)
Dec 02 08:13:04 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:13:04 np0005541914.localdomain systemd-rc-local-generator[63405]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:13:04 np0005541914.localdomain systemd-sysv-generator[63408]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:13:04 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:13:05 np0005541914.localdomain systemd[1]: Started nova_virtsecretd container.
Dec 02 08:13:05 np0005541914.localdomain sudo[63304]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:05 np0005541914.localdomain sudo[63427]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vagnofrefbaomjxgmybhxxaxhottghyo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:13:05 np0005541914.localdomain sudo[63427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:13:05 np0005541914.localdomain python3[63429]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:13:05 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:13:05 np0005541914.localdomain systemd-rc-local-generator[63457]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:13:05 np0005541914.localdomain systemd-sysv-generator[63461]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:13:05 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:13:06 np0005541914.localdomain systemd[1]: Starting nova_virtstoraged container...
Dec 02 08:13:06 np0005541914.localdomain tripleo-start-podman-container[63469]: Creating additional drop-in dependency for "nova_virtstoraged" (f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379)
Dec 02 08:13:06 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:13:06 np0005541914.localdomain systemd-sysv-generator[63531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:13:06 np0005541914.localdomain systemd-rc-local-generator[63526]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:13:06 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:13:06 np0005541914.localdomain sshd[63537]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:13:06 np0005541914.localdomain systemd[1]: Started nova_virtstoraged container.
Dec 02 08:13:06 np0005541914.localdomain sudo[63427]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:06 np0005541914.localdomain sudo[63552]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gqbcrkvddcubxtdjpppafsohsgmjfbcy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:13:06 np0005541914.localdomain sudo[63552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:13:07 np0005541914.localdomain python3[63554]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:13:07 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:13:07 np0005541914.localdomain systemd-rc-local-generator[63576]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:13:07 np0005541914.localdomain systemd-sysv-generator[63582]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:13:07 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:13:07 np0005541914.localdomain systemd[1]: Starting rsyslog container...
Dec 02 08:13:07 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:13:07 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 02 08:13:07 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 02 08:13:07 np0005541914.localdomain podman[63594]: 2025-12-02 08:13:07.529067254 +0000 UTC m=+0.134559072 container init 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.buildah.version=1.41.4, name=rhosp17/openstack-rsyslog, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container)
Dec 02 08:13:07 np0005541914.localdomain podman[63594]: 2025-12-02 08:13:07.539834285 +0000 UTC m=+0.145326083 container start 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=rsyslog, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, config_id=tripleo_step3, build-date=2025-11-18T22:49:49Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, url=https://www.redhat.com, release=1761123044)
Dec 02 08:13:07 np0005541914.localdomain podman[63594]: rsyslog
Dec 02 08:13:07 np0005541914.localdomain systemd[1]: Started rsyslog container.
Dec 02 08:13:07 np0005541914.localdomain sudo[63612]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:13:07 np0005541914.localdomain sudo[63612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:13:07 np0005541914.localdomain sudo[63552]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:07 np0005541914.localdomain sudo[63612]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:07 np0005541914.localdomain systemd[1]: libpod-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully.
Dec 02 08:13:07 np0005541914.localdomain podman[63629]: 2025-12-02 08:13:07.72781721 +0000 UTC m=+0.055675071 container died 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, release=1761123044, container_name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4)
Dec 02 08:13:07 np0005541914.localdomain podman[63629]: 2025-12-02 08:13:07.756763882 +0000 UTC m=+0.084621713 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=rsyslog, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, build-date=2025-11-18T22:49:49Z)
Dec 02 08:13:07 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:13:07 np0005541914.localdomain podman[63641]: 2025-12-02 08:13:07.843691934 +0000 UTC m=+0.055600779 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 02 08:13:07 np0005541914.localdomain podman[63641]: rsyslog
Dec 02 08:13:07 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 02 08:13:07 np0005541914.localdomain sudo[63664]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-itpshjjtyctcfaiicrnqyyyurcyonrhc ; /usr/bin/python3
Dec 02 08:13:07 np0005541914.localdomain sudo[63664]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:13:07 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1.
Dec 02 08:13:07 np0005541914.localdomain systemd[1]: Stopped rsyslog container.
Dec 02 08:13:07 np0005541914.localdomain systemd[1]: Starting rsyslog container...
Dec 02 08:13:08 np0005541914.localdomain python3[63669]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:13:08 np0005541914.localdomain sudo[63664]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:13:08 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 02 08:13:08 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 02 08:13:08 np0005541914.localdomain podman[63670]: 2025-12-02 08:13:08.115651111 +0000 UTC m=+0.113799744 container init 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=rsyslog, name=rhosp17/openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., url=https://www.redhat.com)
Dec 02 08:13:08 np0005541914.localdomain podman[63670]: 2025-12-02 08:13:08.125403931 +0000 UTC m=+0.123552564 container start 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, vcs-type=git, name=rhosp17/openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, container_name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:49Z, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true)
Dec 02 08:13:08 np0005541914.localdomain podman[63670]: rsyslog
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: Started rsyslog container.
Dec 02 08:13:08 np0005541914.localdomain sudo[63689]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:13:08 np0005541914.localdomain sudo[63689]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:13:08 np0005541914.localdomain sshd[63537]: Received disconnect from 103.52.115.25 port 50326:11: Bye Bye [preauth]
Dec 02 08:13:08 np0005541914.localdomain sshd[63537]: Disconnected from authenticating user ftp 103.52.115.25 port 50326 [preauth]
Dec 02 08:13:08 np0005541914.localdomain sudo[63689]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: libpod-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully.
Dec 02 08:13:08 np0005541914.localdomain podman[63692]: 2025-12-02 08:13:08.278586988 +0000 UTC m=+0.055026571 container died 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, name=rhosp17/openstack-rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step3, build-date=2025-11-18T22:49:49Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=rsyslog, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog)
Dec 02 08:13:08 np0005541914.localdomain podman[63692]: 2025-12-02 08:13:08.298362378 +0000 UTC m=+0.074801941 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, name=rhosp17/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, release=1761123044, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']})
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:13:08 np0005541914.localdomain podman[63706]: 2025-12-02 08:13:08.379743154 +0000 UTC m=+0.048402834 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, build-date=2025-11-18T22:49:49Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, release=1761123044, container_name=rsyslog, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 02 08:13:08 np0005541914.localdomain podman[63706]: rsyslog
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf-merged.mount: Deactivated successfully.
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60-userdata-shm.mount: Deactivated successfully.
Dec 02 08:13:08 np0005541914.localdomain sudo[63765]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fxldfjwxlltophrxdtfzfcokkvvpqstr ; /usr/bin/python3
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2.
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: Stopped rsyslog container.
Dec 02 08:13:08 np0005541914.localdomain sudo[63765]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: Starting rsyslog container...
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:13:08 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 02 08:13:08 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 02 08:13:08 np0005541914.localdomain podman[63767]: 2025-12-02 08:13:08.636310283 +0000 UTC m=+0.109842266 container init 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, name=rhosp17/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, release=1761123044, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T22:49:49Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, batch=17.1_20251118.1, container_name=rsyslog, config_id=tripleo_step3, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:13:08 np0005541914.localdomain podman[63767]: 2025-12-02 08:13:08.642563449 +0000 UTC m=+0.116095452 container start 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, tcib_managed=true, build-date=2025-11-18T22:49:49Z)
Dec 02 08:13:08 np0005541914.localdomain podman[63767]: rsyslog
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: Started rsyslog container.
Dec 02 08:13:08 np0005541914.localdomain sudo[63787]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:13:08 np0005541914.localdomain sudo[63787]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:13:08 np0005541914.localdomain sudo[63765]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:08 np0005541914.localdomain sudo[63787]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: libpod-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully.
Dec 02 08:13:08 np0005541914.localdomain podman[63792]: 2025-12-02 08:13:08.767872565 +0000 UTC m=+0.032759328 container died 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=rsyslog, name=rhosp17/openstack-rsyslog, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20251118.1)
Dec 02 08:13:08 np0005541914.localdomain podman[63792]: 2025-12-02 08:13:08.790329364 +0000 UTC m=+0.055216117 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:49Z, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, container_name=rsyslog, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']})
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:13:08 np0005541914.localdomain podman[63821]: 2025-12-02 08:13:08.858877097 +0000 UTC m=+0.037675133 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step3, vcs-type=git, name=rhosp17/openstack-rsyslog, container_name=rsyslog, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 rsyslog)
Dec 02 08:13:08 np0005541914.localdomain podman[63821]: rsyslog
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 02 08:13:08 np0005541914.localdomain sudo[63855]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mqupdciynrvnvzecxiszrwlkdvhrzwfm ; /usr/bin/python3
Dec 02 08:13:08 np0005541914.localdomain sudo[63855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3.
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: Stopped rsyslog container.
Dec 02 08:13:08 np0005541914.localdomain systemd[1]: Starting rsyslog container...
Dec 02 08:13:09 np0005541914.localdomain sudo[63855]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:13:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 02 08:13:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 02 08:13:09 np0005541914.localdomain podman[63858]: 2025-12-02 08:13:09.098135691 +0000 UTC m=+0.105618410 container init 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1761123044, name=rhosp17/openstack-rsyslog, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, com.redhat.component=openstack-rsyslog-container, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Dec 02 08:13:09 np0005541914.localdomain podman[63858]: 2025-12-02 08:13:09.103365087 +0000 UTC m=+0.110847806 container start 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, vcs-type=git, version=17.1.12, name=rhosp17/openstack-rsyslog, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=rsyslog, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, build-date=2025-11-18T22:49:49Z)
Dec 02 08:13:09 np0005541914.localdomain podman[63858]: rsyslog
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: Started rsyslog container.
Dec 02 08:13:09 np0005541914.localdomain sudo[63892]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:13:09 np0005541914.localdomain sudo[63892]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:13:09 np0005541914.localdomain sudo[63892]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: libpod-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully.
Dec 02 08:13:09 np0005541914.localdomain podman[63895]: 2025-12-02 08:13:09.192700539 +0000 UTC m=+0.031112528 container died 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-rsyslog, architecture=x86_64, container_name=rsyslog, tcib_managed=true, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4)
Dec 02 08:13:09 np0005541914.localdomain podman[63895]: 2025-12-02 08:13:09.212606713 +0000 UTC m=+0.051018642 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:49Z, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-rsyslog-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com)
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:13:09 np0005541914.localdomain sudo[63922]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lcjudwriyijxjshhgzkguytewlvnlbin ; /usr/bin/python3
Dec 02 08:13:09 np0005541914.localdomain sudo[63922]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:13:09 np0005541914.localdomain podman[63923]: 2025-12-02 08:13:09.270162919 +0000 UTC m=+0.030215092 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, build-date=2025-11-18T22:49:49Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044)
Dec 02 08:13:09 np0005541914.localdomain podman[63923]: rsyslog
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 02 08:13:09 np0005541914.localdomain python3[63931]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005541914 step=3 update_config_hash_only=False
Dec 02 08:13:09 np0005541914.localdomain sudo[63922]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf-merged.mount: Deactivated successfully.
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60-userdata-shm.mount: Deactivated successfully.
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4.
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: Stopped rsyslog container.
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: Starting rsyslog container...
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:13:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 02 08:13:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Dec 02 08:13:09 np0005541914.localdomain podman[63937]: 2025-12-02 08:13:09.583585452 +0000 UTC m=+0.095922310 container init 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:13:09 np0005541914.localdomain podman[63937]: 2025-12-02 08:13:09.591969122 +0000 UTC m=+0.104306010 container start 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, architecture=x86_64, build-date=2025-11-18T22:49:49Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible)
Dec 02 08:13:09 np0005541914.localdomain podman[63937]: rsyslog
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: Started rsyslog container.
Dec 02 08:13:09 np0005541914.localdomain sudo[63954]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:13:09 np0005541914.localdomain sudo[63954]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:13:09 np0005541914.localdomain sudo[63954]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: libpod-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60.scope: Deactivated successfully.
Dec 02 08:13:09 np0005541914.localdomain podman[63957]: 2025-12-02 08:13:09.723856424 +0000 UTC m=+0.037450207 container died 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:49Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-rsyslog-container, version=17.1.12, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1761123044)
Dec 02 08:13:09 np0005541914.localdomain podman[63957]: 2025-12-02 08:13:09.745698225 +0000 UTC m=+0.059291968 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, batch=17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T22:49:49Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-rsyslog, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=)
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:13:09 np0005541914.localdomain podman[63971]: 2025-12-02 08:13:09.80861594 +0000 UTC m=+0.039063025 container cleanup 64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step3, build-date=2025-11-18T22:49:49Z, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '96606bb2d91ec59ed336cbd6010f1851'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-rsyslog-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team)
Dec 02 08:13:09 np0005541914.localdomain podman[63971]: rsyslog
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5.
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: Stopped rsyslog container.
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly.
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Dec 02 08:13:09 np0005541914.localdomain systemd[1]: Failed to start rsyslog container.
Dec 02 08:13:09 np0005541914.localdomain sudo[63996]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pcrmzersnfywumfyvzzqmbubdwpctocp ; /usr/bin/python3
Dec 02 08:13:09 np0005541914.localdomain sudo[63996]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:13:10 np0005541914.localdomain python3[63998]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:13:10 np0005541914.localdomain sudo[63996]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:10 np0005541914.localdomain sudo[63999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:13:10 np0005541914.localdomain sudo[63999]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:13:10 np0005541914.localdomain sudo[63999]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:10 np0005541914.localdomain sudo[64014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:13:10 np0005541914.localdomain sudo[64014]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:13:10 np0005541914.localdomain sudo[64042]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zvcrdzwpjlfzugjsvkgdqfdxfiaptzvq ; /usr/bin/python3
Dec 02 08:13:10 np0005541914.localdomain sudo[64042]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:13:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf-merged.mount: Deactivated successfully.
Dec 02 08:13:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60-userdata-shm.mount: Deactivated successfully.
Dec 02 08:13:10 np0005541914.localdomain python3[64044]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 02 08:13:10 np0005541914.localdomain sudo[64042]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:10 np0005541914.localdomain sudo[64014]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:11 np0005541914.localdomain sudo[64076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:13:11 np0005541914.localdomain sudo[64076]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:13:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:13:11 np0005541914.localdomain sudo[64076]: pam_unix(sudo:session): session closed for user root
Dec 02 08:13:11 np0005541914.localdomain podman[64091]: 2025-12-02 08:13:11.886766523 +0000 UTC m=+0.084547171 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, config_id=tripleo_step3, name=rhosp17/openstack-collectd, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, url=https://www.redhat.com)
Dec 02 08:13:11 np0005541914.localdomain podman[64091]: 2025-12-02 08:13:11.901744191 +0000 UTC m=+0.099524769 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 02 08:13:11 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:13:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:13:13 np0005541914.localdomain podman[64111]: 2025-12-02 08:13:13.221966468 +0000 UTC m=+0.229457051 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 02 08:13:13 np0005541914.localdomain podman[64111]: 2025-12-02 08:13:13.273028067 +0000 UTC m=+0.280518610 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, config_id=tripleo_step3, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, tcib_managed=true, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid)
Dec 02 08:13:13 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:13:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:13:25 np0005541914.localdomain systemd[1]: tmp-crun.dSxOgp.mount: Deactivated successfully.
Dec 02 08:13:25 np0005541914.localdomain podman[64130]: 2025-12-02 08:13:25.086940122 +0000 UTC m=+0.094560201 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64)
Dec 02 08:13:25 np0005541914.localdomain podman[64130]: 2025-12-02 08:13:25.29921931 +0000 UTC m=+0.306839299 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, vcs-type=git, config_id=tripleo_step1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible)
Dec 02 08:13:25 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:13:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:13:42 np0005541914.localdomain podman[64159]: 2025-12-02 08:13:42.081240401 +0000 UTC m=+0.086691603 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, release=1761123044, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:13:42 np0005541914.localdomain podman[64159]: 2025-12-02 08:13:42.090683698 +0000 UTC m=+0.096134920 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vendor=Red Hat, Inc.)
Dec 02 08:13:42 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:13:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:13:44 np0005541914.localdomain podman[64179]: 2025-12-02 08:13:44.05438291 +0000 UTC m=+0.062384826 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, tcib_managed=true)
Dec 02 08:13:44 np0005541914.localdomain podman[64179]: 2025-12-02 08:13:44.068183065 +0000 UTC m=+0.076185011 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 02 08:13:44 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:13:50 np0005541914.localdomain sshd[64198]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:13:51 np0005541914.localdomain sshd[64198]: Invalid user admin1 from 182.253.156.173 port 56518
Dec 02 08:13:51 np0005541914.localdomain sshd[64198]: Received disconnect from 182.253.156.173 port 56518:11: Bye Bye [preauth]
Dec 02 08:13:51 np0005541914.localdomain sshd[64198]: Disconnected from invalid user admin1 182.253.156.173 port 56518 [preauth]
Dec 02 08:13:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:13:56 np0005541914.localdomain podman[64200]: 2025-12-02 08:13:56.083634049 +0000 UTC m=+0.085091012 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:13:56 np0005541914.localdomain podman[64200]: 2025-12-02 08:13:56.27983367 +0000 UTC m=+0.281290613 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 02 08:13:56 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:14:11 np0005541914.localdomain sudo[64228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:14:11 np0005541914.localdomain sudo[64228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:14:11 np0005541914.localdomain sudo[64228]: pam_unix(sudo:session): session closed for user root
Dec 02 08:14:11 np0005541914.localdomain sudo[64243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:14:11 np0005541914.localdomain sudo[64243]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:14:12 np0005541914.localdomain sudo[64243]: pam_unix(sudo:session): session closed for user root
Dec 02 08:14:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:14:13 np0005541914.localdomain systemd[1]: tmp-crun.oMwu9f.mount: Deactivated successfully.
Dec 02 08:14:13 np0005541914.localdomain podman[64290]: 2025-12-02 08:14:13.08342028 +0000 UTC m=+0.086752335 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:14:13 np0005541914.localdomain podman[64290]: 2025-12-02 08:14:13.095855532 +0000 UTC m=+0.099187567 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:14:13 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:14:13 np0005541914.localdomain sudo[64308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:14:13 np0005541914.localdomain sudo[64308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:14:13 np0005541914.localdomain sudo[64308]: pam_unix(sudo:session): session closed for user root
Dec 02 08:14:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:14:15 np0005541914.localdomain podman[64323]: 2025-12-02 08:14:15.066898075 +0000 UTC m=+0.075850691 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Dec 02 08:14:15 np0005541914.localdomain podman[64323]: 2025-12-02 08:14:15.10388285 +0000 UTC m=+0.112835476 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, config_id=tripleo_step3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, name=rhosp17/openstack-iscsid)
Dec 02 08:14:15 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:14:25 np0005541914.localdomain sshd[64342]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:14:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:14:27 np0005541914.localdomain podman[64344]: 2025-12-02 08:14:27.127911117 +0000 UTC m=+0.086017092 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container)
Dec 02 08:14:27 np0005541914.localdomain sshd[64342]: Received disconnect from 103.52.115.25 port 53942:11: Bye Bye [preauth]
Dec 02 08:14:27 np0005541914.localdomain sshd[64342]: Disconnected from authenticating user root 103.52.115.25 port 53942 [preauth]
Dec 02 08:14:27 np0005541914.localdomain podman[64344]: 2025-12-02 08:14:27.324416388 +0000 UTC m=+0.282522353 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=metrics_qdr, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public)
Dec 02 08:14:27 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:14:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:14:44 np0005541914.localdomain podman[64374]: 2025-12-02 08:14:44.219691005 +0000 UTC m=+0.224393771 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, distribution-scope=public, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:14:44 np0005541914.localdomain podman[64374]: 2025-12-02 08:14:44.230205207 +0000 UTC m=+0.234907983 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, distribution-scope=public, managed_by=tripleo_ansible)
Dec 02 08:14:44 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:14:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:14:46 np0005541914.localdomain systemd[1]: tmp-crun.ZjCkrA.mount: Deactivated successfully.
Dec 02 08:14:46 np0005541914.localdomain podman[64394]: 2025-12-02 08:14:46.086993994 +0000 UTC m=+0.089321102 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Dec 02 08:14:46 np0005541914.localdomain podman[64394]: 2025-12-02 08:14:46.125920646 +0000 UTC m=+0.128247734 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, container_name=iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:14:46 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:14:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:14:58 np0005541914.localdomain podman[64413]: 2025-12-02 08:14:58.094495357 +0000 UTC m=+0.092357037 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:14:58 np0005541914.localdomain podman[64413]: 2025-12-02 08:14:58.321842855 +0000 UTC m=+0.319704535 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 02 08:14:58 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:14:59 np0005541914.localdomain sshd[64443]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:15:00 np0005541914.localdomain sshd[64443]: Invalid user sol from 45.148.10.240 port 52286
Dec 02 08:15:00 np0005541914.localdomain sshd[64443]: Connection closed by invalid user sol 45.148.10.240 port 52286 [preauth]
Dec 02 08:15:01 np0005541914.localdomain anacron[6721]: Job `cron.monthly' started
Dec 02 08:15:01 np0005541914.localdomain anacron[6721]: Job `cron.monthly' terminated
Dec 02 08:15:01 np0005541914.localdomain anacron[6721]: Normal exit (3 jobs run)
Dec 02 08:15:02 np0005541914.localdomain sshd[64447]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:15:04 np0005541914.localdomain sshd[64447]: Received disconnect from 182.253.156.173 port 45660:11: Bye Bye [preauth]
Dec 02 08:15:04 np0005541914.localdomain sshd[64447]: Disconnected from authenticating user root 182.253.156.173 port 45660 [preauth]
Dec 02 08:15:13 np0005541914.localdomain sudo[64449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:15:13 np0005541914.localdomain sudo[64449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:15:13 np0005541914.localdomain sudo[64449]: pam_unix(sudo:session): session closed for user root
Dec 02 08:15:13 np0005541914.localdomain sudo[64464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:15:13 np0005541914.localdomain sudo[64464]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:15:14 np0005541914.localdomain sudo[64464]: pam_unix(sudo:session): session closed for user root
Dec 02 08:15:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:15:15 np0005541914.localdomain podman[64511]: 2025-12-02 08:15:15.089385126 +0000 UTC m=+0.092331916 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public, version=17.1.12, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=)
Dec 02 08:15:15 np0005541914.localdomain podman[64511]: 2025-12-02 08:15:15.126908694 +0000 UTC m=+0.129855454 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64)
Dec 02 08:15:15 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:15:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:15:17 np0005541914.localdomain podman[64531]: 2025-12-02 08:15:17.071088594 +0000 UTC m=+0.079365411 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 02 08:15:17 np0005541914.localdomain podman[64531]: 2025-12-02 08:15:17.084922824 +0000 UTC m=+0.093199621 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, version=17.1.12, tcib_managed=true, vcs-type=git, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team)
Dec 02 08:15:17 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:15:18 np0005541914.localdomain sudo[64551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:15:18 np0005541914.localdomain sudo[64551]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:15:18 np0005541914.localdomain sudo[64551]: pam_unix(sudo:session): session closed for user root
Dec 02 08:15:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:15:29 np0005541914.localdomain podman[64566]: 2025-12-02 08:15:29.079040389 +0000 UTC m=+0.076637406 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, distribution-scope=public)
Dec 02 08:15:29 np0005541914.localdomain podman[64566]: 2025-12-02 08:15:29.26085281 +0000 UTC m=+0.258449827 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_id=tripleo_step1, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, distribution-scope=public)
Dec 02 08:15:29 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:15:44 np0005541914.localdomain sshd[64595]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:15:45 np0005541914.localdomain sshd[64595]: Invalid user ubuntu from 103.52.115.25 port 52458
Dec 02 08:15:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:15:45 np0005541914.localdomain podman[64597]: 2025-12-02 08:15:45.75986054 +0000 UTC m=+0.081222319 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:15:45 np0005541914.localdomain podman[64597]: 2025-12-02 08:15:45.77299269 +0000 UTC m=+0.094354499 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step3, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:15:45 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:15:45 np0005541914.localdomain sshd[64595]: Received disconnect from 103.52.115.25 port 52458:11: Bye Bye [preauth]
Dec 02 08:15:45 np0005541914.localdomain sshd[64595]: Disconnected from invalid user ubuntu 103.52.115.25 port 52458 [preauth]
Dec 02 08:15:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:15:48 np0005541914.localdomain podman[64618]: 2025-12-02 08:15:48.048110483 +0000 UTC m=+0.060627528 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, container_name=iscsid, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, release=1761123044)
Dec 02 08:15:48 np0005541914.localdomain podman[64618]: 2025-12-02 08:15:48.060713295 +0000 UTC m=+0.073230270 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec 02 08:15:48 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:15:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:16:00 np0005541914.localdomain podman[64636]: 2025-12-02 08:16:00.079355545 +0000 UTC m=+0.084969236 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, managed_by=tripleo_ansible)
Dec 02 08:16:00 np0005541914.localdomain podman[64636]: 2025-12-02 08:16:00.244022782 +0000 UTC m=+0.249636473 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, tcib_managed=true)
Dec 02 08:16:00 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:16:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:16:16 np0005541914.localdomain podman[64665]: 2025-12-02 08:16:16.068445671 +0000 UTC m=+0.076764151 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12)
Dec 02 08:16:16 np0005541914.localdomain podman[64665]: 2025-12-02 08:16:16.101776798 +0000 UTC m=+0.110095258 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step3, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:16:16 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:16:16 np0005541914.localdomain sshd[64686]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:16:18 np0005541914.localdomain sshd[64686]: Invalid user admin from 182.253.156.173 port 55140
Dec 02 08:16:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:16:18 np0005541914.localdomain podman[64688]: 2025-12-02 08:16:18.18637626 +0000 UTC m=+0.088887458 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, url=https://www.redhat.com, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible)
Dec 02 08:16:18 np0005541914.localdomain podman[64688]: 2025-12-02 08:16:18.194149912 +0000 UTC m=+0.096661110 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, release=1761123044, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 02 08:16:18 np0005541914.localdomain sudo[64700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:16:18 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:16:18 np0005541914.localdomain sudo[64700]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:16:18 np0005541914.localdomain sudo[64700]: pam_unix(sudo:session): session closed for user root
Dec 02 08:16:18 np0005541914.localdomain sudo[64721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:16:18 np0005541914.localdomain sudo[64721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:16:18 np0005541914.localdomain sshd[64686]: Received disconnect from 182.253.156.173 port 55140:11: Bye Bye [preauth]
Dec 02 08:16:18 np0005541914.localdomain sshd[64686]: Disconnected from invalid user admin 182.253.156.173 port 55140 [preauth]
Dec 02 08:16:18 np0005541914.localdomain sudo[64721]: pam_unix(sudo:session): session closed for user root
Dec 02 08:16:19 np0005541914.localdomain sudo[64767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:16:19 np0005541914.localdomain sudo[64767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:16:19 np0005541914.localdomain sudo[64767]: pam_unix(sudo:session): session closed for user root
Dec 02 08:16:19 np0005541914.localdomain sudo[64782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 02 08:16:19 np0005541914.localdomain sudo[64782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:16:19 np0005541914.localdomain sudo[64782]: pam_unix(sudo:session): session closed for user root
Dec 02 08:16:25 np0005541914.localdomain sudo[64816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:16:25 np0005541914.localdomain sudo[64816]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:16:25 np0005541914.localdomain sudo[64816]: pam_unix(sudo:session): session closed for user root
Dec 02 08:16:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:16:31 np0005541914.localdomain systemd[1]: tmp-crun.l3biKo.mount: Deactivated successfully.
Dec 02 08:16:31 np0005541914.localdomain podman[64831]: 2025-12-02 08:16:31.127442778 +0000 UTC m=+0.080949752 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:16:31 np0005541914.localdomain podman[64831]: 2025-12-02 08:16:31.344436924 +0000 UTC m=+0.297943938 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, container_name=metrics_qdr, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:16:31 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:16:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:16:47 np0005541914.localdomain systemd[1]: tmp-crun.VIGrIx.mount: Deactivated successfully.
Dec 02 08:16:47 np0005541914.localdomain podman[64862]: 2025-12-02 08:16:47.083110963 +0000 UTC m=+0.089500278 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:16:47 np0005541914.localdomain podman[64862]: 2025-12-02 08:16:47.12092192 +0000 UTC m=+0.127311245 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 02 08:16:47 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:16:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:16:49 np0005541914.localdomain podman[64883]: 2025-12-02 08:16:49.072927893 +0000 UTC m=+0.077134703 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:16:49 np0005541914.localdomain podman[64883]: 2025-12-02 08:16:49.111875816 +0000 UTC m=+0.116082656 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc.)
Dec 02 08:16:49 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:17:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:17:02 np0005541914.localdomain podman[64903]: 2025-12-02 08:17:02.050429274 +0000 UTC m=+0.059322748 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com)
Dec 02 08:17:02 np0005541914.localdomain podman[64903]: 2025-12-02 08:17:02.249358277 +0000 UTC m=+0.258251791 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public)
Dec 02 08:17:02 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:17:04 np0005541914.localdomain sshd[64934]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:17:06 np0005541914.localdomain sshd[64936]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:17:06 np0005541914.localdomain sshd[64934]: Received disconnect from 103.52.115.25 port 59960:11: Bye Bye [preauth]
Dec 02 08:17:06 np0005541914.localdomain sshd[64934]: Disconnected from authenticating user root 103.52.115.25 port 59960 [preauth]
Dec 02 08:17:06 np0005541914.localdomain sshd[64936]: Invalid user sol from 45.148.10.240 port 49650
Dec 02 08:17:06 np0005541914.localdomain sshd[64936]: Connection closed by invalid user sol 45.148.10.240 port 49650 [preauth]
Dec 02 08:17:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 08:17:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1800.1 total, 600.0 interval
                                                          Cumulative writes: 4399 writes, 20K keys, 4399 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4399 writes, 504 syncs, 8.73 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 190 writes, 549 keys, 190 commit groups, 1.0 writes per commit group, ingest: 0.50 MB, 0.00 MB/s
                                                          Interval WAL: 190 writes, 93 syncs, 2.04 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 08:17:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 08:17:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1800.2 total, 600.0 interval
                                                          Cumulative writes: 5262 writes, 23K keys, 5262 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5262 writes, 560 syncs, 9.40 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 169 writes, 444 keys, 169 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                                          Interval WAL: 169 writes, 83 syncs, 2.04 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 08:17:17 np0005541914.localdomain sudo[64983]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jchdndcoxjcwsambgqcpdepebnhimfyc ; /usr/bin/python3
Dec 02 08:17:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:17:17 np0005541914.localdomain sudo[64983]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:18 np0005541914.localdomain systemd[1]: tmp-crun.CYdHEr.mount: Deactivated successfully.
Dec 02 08:17:18 np0005541914.localdomain podman[64985]: 2025-12-02 08:17:18.117837506 +0000 UTC m=+0.114422723 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:17:18 np0005541914.localdomain podman[64985]: 2025-12-02 08:17:18.156867051 +0000 UTC m=+0.153452238 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, version=17.1.12, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true)
Dec 02 08:17:18 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:17:18 np0005541914.localdomain python3[64986]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:17:18 np0005541914.localdomain sudo[64983]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:18 np0005541914.localdomain sudo[65051]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zfgemgbdmmzjkevsshsguuflrduavnuz ; /usr/bin/python3
Dec 02 08:17:18 np0005541914.localdomain sudo[65051]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:18 np0005541914.localdomain python3[65053]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663437.8122132-107022-101980870140753/source _original_basename=tmpw_u8sle8 follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:17:18 np0005541914.localdomain sudo[65051]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:19 np0005541914.localdomain sudo[65113]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blroooujytevnziblxjugguhmnimnroj ; /usr/bin/python3
Dec 02 08:17:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:17:19 np0005541914.localdomain sudo[65113]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:19 np0005541914.localdomain systemd[1]: tmp-crun.wxevjU.mount: Deactivated successfully.
Dec 02 08:17:19 np0005541914.localdomain podman[65115]: 2025-12-02 08:17:19.929910903 +0000 UTC m=+0.089773886 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, container_name=iscsid)
Dec 02 08:17:19 np0005541914.localdomain podman[65115]: 2025-12-02 08:17:19.963172979 +0000 UTC m=+0.123035932 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vcs-type=git, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible)
Dec 02 08:17:19 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:17:19 np0005541914.localdomain python3[65116]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:17:20 np0005541914.localdomain sudo[65113]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:20 np0005541914.localdomain sudo[65175]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ptoucocyvuxpsgwybwldlktvqznakdqj ; /usr/bin/python3
Dec 02 08:17:20 np0005541914.localdomain sudo[65175]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:20 np0005541914.localdomain python3[65177]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663439.6693919-107120-208301867448792/source _original_basename=tmpsmgcjw7y follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:17:20 np0005541914.localdomain sudo[65175]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:20 np0005541914.localdomain sudo[65237]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvtwnxznahleiqxnrnhjyshvhkwvlzfh ; /usr/bin/python3
Dec 02 08:17:20 np0005541914.localdomain sudo[65237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:20 np0005541914.localdomain python3[65239]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:17:20 np0005541914.localdomain sudo[65237]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:21 np0005541914.localdomain sudo[65280]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-abcizuymullnagftvcoobzdrwtqavxji ; /usr/bin/python3
Dec 02 08:17:21 np0005541914.localdomain sudo[65280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:21 np0005541914.localdomain python3[65282]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663440.642849-107177-143401311914366/source _original_basename=tmpp5pyajf0 follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:17:21 np0005541914.localdomain sudo[65280]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:21 np0005541914.localdomain sudo[65342]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aicmqubusfkabgmqzgexioyxurptfgml ; /usr/bin/python3
Dec 02 08:17:21 np0005541914.localdomain sudo[65342]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:22 np0005541914.localdomain python3[65344]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:17:22 np0005541914.localdomain sudo[65342]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:22 np0005541914.localdomain sudo[65385]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-radabbtyuartuipdnnjcunetnkosxequ ; /usr/bin/python3
Dec 02 08:17:22 np0005541914.localdomain sudo[65385]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:22 np0005541914.localdomain python3[65387]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663441.7007818-107231-108339490604747/source _original_basename=tmptk2a23br follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:17:22 np0005541914.localdomain sudo[65385]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:22 np0005541914.localdomain sudo[65415]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hkparxjyaujenkhekxrdsdansotkfahp ; /usr/bin/python3
Dec 02 08:17:22 np0005541914.localdomain sudo[65415]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:23 np0005541914.localdomain python3[65417]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 02 08:17:23 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:17:23 np0005541914.localdomain systemd-sysv-generator[65444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:17:23 np0005541914.localdomain systemd-rc-local-generator[65439]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:17:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:17:23 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:17:23 np0005541914.localdomain systemd-rc-local-generator[65479]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:17:23 np0005541914.localdomain systemd-sysv-generator[65485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:17:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:17:23 np0005541914.localdomain sudo[65415]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:24 np0005541914.localdomain sudo[65505]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmttrjzeufkmfcaiqdphldlehqzoammg ; /usr/bin/python3
Dec 02 08:17:24 np0005541914.localdomain sudo[65505]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:24 np0005541914.localdomain python3[65507]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:17:24 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:17:24 np0005541914.localdomain systemd-rc-local-generator[65530]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:17:24 np0005541914.localdomain systemd-sysv-generator[65534]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:17:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:17:24 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:17:24 np0005541914.localdomain systemd-rc-local-generator[65572]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:17:24 np0005541914.localdomain systemd-sysv-generator[65575]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:17:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:17:24 np0005541914.localdomain systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m.
Dec 02 08:17:24 np0005541914.localdomain sudo[65505]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:25 np0005541914.localdomain sudo[65596]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fjtsmppavtvihnlsdiherrtalljdgblf ; /usr/bin/python3
Dec 02 08:17:25 np0005541914.localdomain sudo[65596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:25 np0005541914.localdomain sudo[65599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:17:25 np0005541914.localdomain sudo[65599]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:17:25 np0005541914.localdomain sudo[65599]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:25 np0005541914.localdomain python3[65598]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 08:17:25 np0005541914.localdomain sudo[65614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 08:17:25 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:17:25 np0005541914.localdomain sudo[65614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:17:25 np0005541914.localdomain systemd-rc-local-generator[65653]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:17:25 np0005541914.localdomain systemd-sysv-generator[65657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:17:25 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:17:25 np0005541914.localdomain sudo[65596]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:25 np0005541914.localdomain sudo[65614]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:25 np0005541914.localdomain sudo[65699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:17:25 np0005541914.localdomain sudo[65699]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:17:25 np0005541914.localdomain sudo[65699]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:25 np0005541914.localdomain sudo[65750]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rfojswbcpooqfirjzhxazkckolrdsgjk ; /usr/bin/python3
Dec 02 08:17:25 np0005541914.localdomain sudo[65750]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:25 np0005541914.localdomain sudo[65746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:17:25 np0005541914.localdomain sudo[65746]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:17:26 np0005541914.localdomain python3[65761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:17:26 np0005541914.localdomain sudo[65750]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:26 np0005541914.localdomain sudo[65820]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kdrnqndngxxmvuqpbqulkigsosajwphz ; /usr/bin/python3
Dec 02 08:17:26 np0005541914.localdomain sudo[65820]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:26 np0005541914.localdomain python3[65822]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663445.7601256-107414-24241160205576/source _original_basename=tmp9yd68cck follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:17:26 np0005541914.localdomain sudo[65820]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:26 np0005541914.localdomain sudo[65746]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:26 np0005541914.localdomain sudo[65880]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hizlthejoapjtzqvxhcfzxvovgngdubt ; /usr/bin/python3
Dec 02 08:17:26 np0005541914.localdomain sudo[65857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:17:26 np0005541914.localdomain sudo[65880]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:26 np0005541914.localdomain sudo[65857]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:17:26 np0005541914.localdomain sudo[65857]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:26 np0005541914.localdomain sudo[65885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 -- inventory --format=json-pretty --filter-for-batch
Dec 02 08:17:26 np0005541914.localdomain sudo[65885]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:17:26 np0005541914.localdomain python3[65883]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:17:26 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:17:27 np0005541914.localdomain systemd-sysv-generator[65924]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:17:27 np0005541914.localdomain systemd-rc-local-generator[65920]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:17:27 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:17:27 np0005541914.localdomain systemd[1]: Reached target tripleo_nova_libvirt.target.
Dec 02 08:17:27 np0005541914.localdomain sudo[65880]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:27 np0005541914.localdomain podman[65979]: 
Dec 02 08:17:27 np0005541914.localdomain podman[65979]: 2025-12-02 08:17:27.490835337 +0000 UTC m=+0.076829794 container create 3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_moser, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vendor=Red Hat, Inc., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_CLEAN=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph)
Dec 02 08:17:27 np0005541914.localdomain systemd[1]: Started libpod-conmon-3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd.scope.
Dec 02 08:17:27 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:17:27 np0005541914.localdomain podman[65979]: 2025-12-02 08:17:27.459500501 +0000 UTC m=+0.045494978 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 08:17:27 np0005541914.localdomain sudo[66007]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-woghsgiwtwmiqofeqvskcuyilgknbnwi ; /usr/bin/python3
Dec 02 08:17:27 np0005541914.localdomain podman[65979]: 2025-12-02 08:17:27.572727426 +0000 UTC m=+0.158721883 container init 3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_moser, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vcs-type=git, distribution-scope=public, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 02 08:17:27 np0005541914.localdomain sudo[66007]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:27 np0005541914.localdomain podman[65979]: 2025-12-02 08:17:27.585048559 +0000 UTC m=+0.171043026 container start 3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_moser, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_BRANCH=main, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main)
Dec 02 08:17:27 np0005541914.localdomain podman[65979]: 2025-12-02 08:17:27.585339189 +0000 UTC m=+0.171333636 container attach 3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_moser, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1763362218, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 02 08:17:27 np0005541914.localdomain systemd[1]: libpod-3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd.scope: Deactivated successfully.
Dec 02 08:17:27 np0005541914.localdomain crazy_moser[66008]: 167 167
Dec 02 08:17:27 np0005541914.localdomain podman[65979]: 2025-12-02 08:17:27.589712635 +0000 UTC m=+0.175707072 container died 3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_moser, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public)
Dec 02 08:17:27 np0005541914.localdomain podman[66015]: 2025-12-02 08:17:27.678920462 +0000 UTC m=+0.072958302 container remove 3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_moser, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, version=7, vendor=Red Hat, Inc., name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 02 08:17:27 np0005541914.localdomain systemd[1]: libpod-conmon-3121d5ddcd0e4918541e6b11a24449a33c87b376ef82edab97eab2b88f05c8bd.scope: Deactivated successfully.
Dec 02 08:17:27 np0005541914.localdomain python3[66013]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:17:27 np0005541914.localdomain sudo[66007]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:27 np0005541914.localdomain podman[66037]: 
Dec 02 08:17:27 np0005541914.localdomain podman[66037]: 2025-12-02 08:17:27.870530337 +0000 UTC m=+0.076567954 container create 2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_margulis, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, GIT_CLEAN=True, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, distribution-scope=public, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 08:17:27 np0005541914.localdomain systemd[1]: Started libpod-conmon-2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671.scope.
Dec 02 08:17:27 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:17:27 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3dc1cf386ade8161ec38d7c976fbdb41dcb4e9eb86c886693b13588cd208a6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 08:17:27 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3dc1cf386ade8161ec38d7c976fbdb41dcb4e9eb86c886693b13588cd208a6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 08:17:27 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3dc1cf386ade8161ec38d7c976fbdb41dcb4e9eb86c886693b13588cd208a6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 08:17:27 np0005541914.localdomain podman[66037]: 2025-12-02 08:17:27.848058068 +0000 UTC m=+0.054095655 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 08:17:27 np0005541914.localdomain podman[66037]: 2025-12-02 08:17:27.948387022 +0000 UTC m=+0.154424639 container init 2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_margulis, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, RELEASE=main, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.buildah.version=1.41.4, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 08:17:27 np0005541914.localdomain podman[66037]: 2025-12-02 08:17:27.957924879 +0000 UTC m=+0.163962496 container start 2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_margulis, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, version=7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True)
Dec 02 08:17:27 np0005541914.localdomain podman[66037]: 2025-12-02 08:17:27.958138536 +0000 UTC m=+0.164176153 container attach 2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_margulis, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, name=rhceph, distribution-scope=public, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, build-date=2025-11-26T19:44:28Z, ceph=True, io.buildah.version=1.41.4)
Dec 02 08:17:28 np0005541914.localdomain sudo[66104]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tqjnmfetjmpjshhkswcnyxfmeiiksueh ; /usr/bin/python3
Dec 02 08:17:28 np0005541914.localdomain sudo[66104]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:28 np0005541914.localdomain sudo[66104]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-0e0a6f01ad14f243016ec121b93ed9c9fa375a910e90982e158037cdfdbcf2a3-merged.mount: Deactivated successfully.
Dec 02 08:17:28 np0005541914.localdomain sudo[66136]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bsiwvlgratcfuwzuyygdckzehjsykkzr ; /usr/bin/python3
Dec 02 08:17:28 np0005541914.localdomain sudo[66136]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:28 np0005541914.localdomain sudo[66136]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]: [
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:     {
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:         "available": false,
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:         "ceph_device": false,
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:         "lsm_data": {},
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:         "lvs": [],
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:         "path": "/dev/sr0",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:         "rejected_reasons": [
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "Insufficient space (<5GB)",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "Has a FileSystem"
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:         ],
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:         "sys_api": {
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "actuators": null,
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "device_nodes": "sr0",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "human_readable_size": "482.00 KB",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "id_bus": "ata",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "model": "QEMU DVD-ROM",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "nr_requests": "2",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "partitions": {},
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "path": "/dev/sr0",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "removable": "1",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "rev": "2.5+",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "ro": "0",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "rotational": "1",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "sas_address": "",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "sas_device_handle": "",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "scheduler_mode": "mq-deadline",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "sectors": 0,
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "sectorsize": "2048",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "size": 493568.0,
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "support_discard": "0",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "type": "disk",
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:             "vendor": "QEMU"
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:         }
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]:     }
Dec 02 08:17:29 np0005541914.localdomain focused_margulis[66054]: ]
Dec 02 08:17:29 np0005541914.localdomain systemd[1]: libpod-2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671.scope: Deactivated successfully.
Dec 02 08:17:29 np0005541914.localdomain systemd[1]: libpod-2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671.scope: Consumed 1.058s CPU time.
Dec 02 08:17:29 np0005541914.localdomain podman[66037]: 2025-12-02 08:17:29.034112895 +0000 UTC m=+1.240150572 container died 2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_margulis, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 02 08:17:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a3dc1cf386ade8161ec38d7c976fbdb41dcb4e9eb86c886693b13588cd208a6b-merged.mount: Deactivated successfully.
Dec 02 08:17:29 np0005541914.localdomain podman[67735]: 2025-12-02 08:17:29.131441295 +0000 UTC m=+0.086240816 container remove 2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_margulis, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhceph, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, version=7, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, release=1763362218)
Dec 02 08:17:29 np0005541914.localdomain systemd[1]: libpod-conmon-2daa3c65843ef695215e0163f5ba9bac259785bbb7eea718859b73130acbe671.scope: Deactivated successfully.
Dec 02 08:17:29 np0005541914.localdomain sudo[65885]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:29 np0005541914.localdomain sudo[67804]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jcnrwnxxnmjlwdwxhlukbtgwttetgzit ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663448.9636645-107510-208696551997559/async_wrapper.py 736775549377 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663448.9636645-107510-208696551997559/AnsiballZ_command.py _
Dec 02 08:17:29 np0005541914.localdomain sudo[67804]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 02 08:17:29 np0005541914.localdomain ansible-async_wrapper.py[67806]: Invoked with 736775549377 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663448.9636645-107510-208696551997559/AnsiballZ_command.py _
Dec 02 08:17:29 np0005541914.localdomain ansible-async_wrapper.py[67809]: Starting module and watcher
Dec 02 08:17:29 np0005541914.localdomain ansible-async_wrapper.py[67809]: Start watching 67810 (3600)
Dec 02 08:17:29 np0005541914.localdomain ansible-async_wrapper.py[67810]: Start module (67810)
Dec 02 08:17:29 np0005541914.localdomain ansible-async_wrapper.py[67806]: Return async_wrapper task started.
Dec 02 08:17:29 np0005541914.localdomain sudo[67804]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:29 np0005541914.localdomain sudo[67828]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-abikmjobtjsrdiazfbngwwpaeblwyxgi ; /usr/bin/python3
Dec 02 08:17:29 np0005541914.localdomain sudo[67828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:29 np0005541914.localdomain python3[67830]: ansible-ansible.legacy.async_status Invoked with jid=736775549377.67806 mode=status _async_dir=/tmp/.ansible_async
Dec 02 08:17:29 np0005541914.localdomain sudo[67828]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:30 np0005541914.localdomain sudo[67831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:17:30 np0005541914.localdomain sudo[67831]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:17:30 np0005541914.localdomain sudo[67831]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:32 np0005541914.localdomain sshd[67955]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:17:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:17:33 np0005541914.localdomain podman[67957]: 2025-12-02 08:17:33.06749258 +0000 UTC m=+0.069550246 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, config_id=tripleo_step1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:    (file: /etc/puppet/hiera.yaml)
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]: Warning: Undefined variable '::deploy_config_name';
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:    (file & line not available)
Dec 02 08:17:33 np0005541914.localdomain podman[67957]: 2025-12-02 08:17:33.230905658 +0000 UTC m=+0.232963354 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1)
Dec 02 08:17:33 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:    (file & line not available)
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:                     with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:                     with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:                     with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:                     with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:                     with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Dec 02 08:17:33 np0005541914.localdomain puppet-user[67824]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.26 seconds
Dec 02 08:17:34 np0005541914.localdomain sshd[67955]: Received disconnect from 182.253.156.173 port 34746:11: Bye Bye [preauth]
Dec 02 08:17:34 np0005541914.localdomain sshd[67955]: Disconnected from authenticating user root 182.253.156.173 port 34746 [preauth]
Dec 02 08:17:34 np0005541914.localdomain ansible-async_wrapper.py[67809]: 67810 still running (3600)
Dec 02 08:17:39 np0005541914.localdomain ansible-async_wrapper.py[67809]: 67810 still running (3595)
Dec 02 08:17:40 np0005541914.localdomain sudo[68074]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phzhingjlmazzeewydmusagktlhbhpqr ; /usr/bin/python3
Dec 02 08:17:40 np0005541914.localdomain sudo[68074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:40 np0005541914.localdomain python3[68076]: ansible-ansible.legacy.async_status Invoked with jid=736775549377.67806 mode=status _async_dir=/tmp/.ansible_async
Dec 02 08:17:40 np0005541914.localdomain sudo[68074]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:41 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 08:17:41 np0005541914.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 02 08:17:41 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:17:41 np0005541914.localdomain systemd-rc-local-generator[68157]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:17:41 np0005541914.localdomain systemd-sysv-generator[68163]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:17:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:17:41 np0005541914.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 02 08:17:42 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 02 08:17:42 np0005541914.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 02 08:17:42 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Consumed 1.180s CPU time.
Dec 02 08:17:42 np0005541914.localdomain systemd[1]: run-r3f68b58bb0b54722b6ce99d9b8400895.service: Deactivated successfully.
Dec 02 08:17:42 np0005541914.localdomain puppet-user[67824]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created
Dec 02 08:17:42 np0005541914.localdomain puppet-user[67824]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}79bc8750f7571a711937563794c62afe468b6afdec06ce72ff411863e05e4aa2'
Dec 02 08:17:42 np0005541914.localdomain puppet-user[67824]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd'
Dec 02 08:17:42 np0005541914.localdomain puppet-user[67824]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea'
Dec 02 08:17:42 np0005541914.localdomain puppet-user[67824]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97'
Dec 02 08:17:42 np0005541914.localdomain puppet-user[67824]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events
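[Editor's note] Puppet reports the snmpd/snmptrapd file changes above as content digests of the form '{sha256}old' to '{sha256}new'. A small, hypothetical sketch of reproducing that digest for a managed file (the path passed on the command line, e.g. /etc/snmp/snmpd.conf, is an assumption about where the snmp module writes its config on this host):

    #!/usr/bin/env python3
    # Hypothetical check: print a file digest in the same "{sha256}<hex>" form
    # Puppet uses in its content-change notices, so it can be compared by eye.
    import hashlib
    import sys

    def puppet_style_digest(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return "{sha256}" + h.hexdigest()

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(path, puppet_style_digest(path))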
Dec 02 08:17:44 np0005541914.localdomain ansible-async_wrapper.py[67809]: 67810 still running (3590)
Dec 02 08:17:47 np0005541914.localdomain puppet-user[67824]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully
Dec 02 08:17:47 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:17:48 np0005541914.localdomain systemd-rc-local-generator[69203]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:17:48 np0005541914.localdomain systemd-sysv-generator[69208]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:17:48 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:17:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:17:48 np0005541914.localdomain systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon....
Dec 02 08:17:48 np0005541914.localdomain snmpd[69217]: Can't find directory of RPM packages
Dec 02 08:17:48 np0005541914.localdomain systemd[1]: tmp-crun.pCSP8q.mount: Deactivated successfully.
Dec 02 08:17:48 np0005541914.localdomain snmpd[69217]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB
Dec 02 08:17:48 np0005541914.localdomain podman[69216]: 2025-12-02 08:17:48.386083962 +0000 UTC m=+0.080423505 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, vendor=Red Hat, Inc., container_name=collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git)
Dec 02 08:17:48 np0005541914.localdomain systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..
Dec 02 08:17:48 np0005541914.localdomain podman[69216]: 2025-12-02 08:17:48.421312838 +0000 UTC m=+0.115652401 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public)
Dec 02 08:17:48 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
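[Editor's note] The transient '/usr/bin/podman healthcheck run <container id>' units above are how the collectd (and later iscsid and metrics_qdr) healthchecks are driven: the container's configured test is '/openstack/healthcheck', and podman records health_status=healthy when that command exits 0. A rough sketch of invoking the same check from Python (a hypothetical wrapper, not the TripleO implementation):

    #!/usr/bin/env python3
    # Hypothetical wrapper around "podman healthcheck run": exit code 0 means
    # the healthcheck command configured inside the container succeeded.
    import subprocess
    import sys

    def run_healthcheck(container: str) -> bool:
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True,
            text=True,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        name = sys.argv[1] if len(sys.argv) > 1 else "collectd"
        print("healthy" if run_healthcheck(name) else "unhealthy")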
Dec 02 08:17:48 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:17:48 np0005541914.localdomain systemd-rc-local-generator[69261]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:17:48 np0005541914.localdomain systemd-sysv-generator[69264]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:17:48 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:17:48 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:17:48 np0005541914.localdomain systemd-rc-local-generator[69298]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:17:48 np0005541914.localdomain systemd-sysv-generator[69304]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:17:48 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]: Notice: Applied catalog in 15.51 seconds
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]: Application:
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:    Initial environment: production
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:    Converged environment: production
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:          Run mode: user
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]: Changes:
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:             Total: 8
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]: Events:
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:           Success: 8
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:             Total: 8
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]: Resources:
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:         Restarted: 1
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:           Changed: 8
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:       Out of sync: 8
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:             Total: 19
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]: Time:
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:        Filebucket: 0.00
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:          Schedule: 0.00
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:            Augeas: 0.01
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:              File: 0.07
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:    Config retrieval: 0.32
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:           Service: 1.16
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:    Transaction evaluation: 15.50
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:    Catalog application: 15.51
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:          Last run: 1764663469
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:              Exec: 5.06
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:           Package: 9.01
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:             Total: 15.52
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]: Version:
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:            Config: 1764663453
Dec 02 08:17:49 np0005541914.localdomain puppet-user[67824]:            Puppet: 7.10.0
Dec 02 08:17:49 np0005541914.localdomain ansible-async_wrapper.py[67810]: Module complete (67810)
Dec 02 08:17:49 np0005541914.localdomain ansible-async_wrapper.py[67809]: Done in kid B.
Dec 02 08:17:50 np0005541914.localdomain sudo[69323]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iezgnntwqzpihbpxolyumspayjjosmni ; /usr/bin/python3
Dec 02 08:17:50 np0005541914.localdomain sudo[69323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:17:50 np0005541914.localdomain systemd[1]: tmp-crun.gOlKtO.mount: Deactivated successfully.
Dec 02 08:17:50 np0005541914.localdomain podman[69325]: 2025-12-02 08:17:50.409054275 +0000 UTC m=+0.091959045 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4)
Dec 02 08:17:50 np0005541914.localdomain podman[69325]: 2025-12-02 08:17:50.447864863 +0000 UTC m=+0.130769633 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid)
Dec 02 08:17:50 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:17:50 np0005541914.localdomain python3[69326]: ansible-ansible.legacy.async_status Invoked with jid=736775549377.67806 mode=status _async_dir=/tmp/.ansible_async
Dec 02 08:17:50 np0005541914.localdomain sudo[69323]: pam_unix(sudo:session): session closed for user root
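[Editor's note] While the long puppet step runs, ansible-async_wrapper.py keeps logging '67810 still running (N)' and the controller polls it with async_status against jid=736775549377.67806 under /tmp/.ansible_async. A minimal sketch of that polling idea, assuming the usual convention that the wrapper leaves a JSON result file named after the jid in the async directory (the helper below is hypothetical):

    #!/usr/bin/env python3
    # Hypothetical poller: read the JSON result file the async wrapper keeps
    # under the async dir and report whether the background job has finished.
    import json
    import os

    ASYNC_DIR = "/tmp/.ansible_async"   # matches _async_dir in the log above
    JID = "736775549377.67806"          # matches jid in the log above

    def job_finished(jid: str, async_dir: str = ASYNC_DIR) -> bool:
        path = os.path.join(async_dir, jid)
        with open(path, encoding="utf-8") as f:
            status = json.load(f)
        return bool(status.get("finished"))

    if __name__ == "__main__":
        print("finished" if job_finished(JID) else "still running")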
Dec 02 08:17:51 np0005541914.localdomain sudo[69356]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xvsvhmocgjajkhegocfworqwvqymetlw ; /usr/bin/python3
Dec 02 08:17:51 np0005541914.localdomain sudo[69356]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:51 np0005541914.localdomain python3[69358]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 08:17:51 np0005541914.localdomain sudo[69356]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:51 np0005541914.localdomain sudo[69372]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhfrxvecgmgvknawodcqorwcgfvtoruy ; /usr/bin/python3
Dec 02 08:17:51 np0005541914.localdomain sudo[69372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:51 np0005541914.localdomain python3[69374]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:17:51 np0005541914.localdomain sudo[69372]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:51 np0005541914.localdomain sudo[69422]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sjshiykuebdvabweypainvyycpotkvzy ; /usr/bin/python3
Dec 02 08:17:51 np0005541914.localdomain sudo[69422]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:52 np0005541914.localdomain python3[69424]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:17:52 np0005541914.localdomain sudo[69422]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:52 np0005541914.localdomain sudo[69440]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aszkzrwlvivdasygfdkmrsjmvxakhtuc ; /usr/bin/python3
Dec 02 08:17:52 np0005541914.localdomain sudo[69440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:52 np0005541914.localdomain python3[69442]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpm2w18f56 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 08:17:52 np0005541914.localdomain sudo[69440]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:52 np0005541914.localdomain sudo[69470]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-koqjwmkyzxvdroouwfvthtmybxfmhztx ; /usr/bin/python3
Dec 02 08:17:52 np0005541914.localdomain sudo[69470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:52 np0005541914.localdomain python3[69472]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:17:52 np0005541914.localdomain sudo[69470]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:52 np0005541914.localdomain sudo[69486]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mqnfhnqiuggsholbqscamcnfoesbahpu ; /usr/bin/python3
Dec 02 08:17:52 np0005541914.localdomain sudo[69486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:53 np0005541914.localdomain sudo[69486]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:53 np0005541914.localdomain sudo[69573]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmtyjofqdmnqsycqrnxobptglyyjacnw ; /usr/bin/python3
Dec 02 08:17:53 np0005541914.localdomain sudo[69573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:54 np0005541914.localdomain python3[69575]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 02 08:17:54 np0005541914.localdomain sudo[69573]: pam_unix(sudo:session): session closed for user root
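[Editor's note] The ansible.posix.synchronize task above pushes /opt/puppetlabs/ into /var/lib/container-puppet/puppetlabs/ with archive=True and compress=True, i.e. roughly an archive-mode, compressed rsync push. A hedged, roughly equivalent sketch (hypothetical; the real module builds its own rsync command line and handles remote transport):

    #!/usr/bin/env python3
    # Hypothetical stand-in for the synchronize task above: archive-mode,
    # compressed rsync push of the facter/puppetlabs tree into the
    # container-puppet staging directory.
    import subprocess

    SRC = "/opt/puppetlabs/"
    DEST = "/var/lib/container-puppet/puppetlabs/"

    def sync_tree(src: str = SRC, dest: str = DEST) -> None:
        subprocess.run(["rsync", "--archive", "--compress", src, dest], check=True)

    if __name__ == "__main__":
        sync_tree()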
Dec 02 08:17:54 np0005541914.localdomain sudo[69592]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tqfzgrcljabcixplewzyafrkfnrjaskw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:17:54 np0005541914.localdomain sudo[69592]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:54 np0005541914.localdomain python3[69594]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:17:54 np0005541914.localdomain sudo[69592]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:54 np0005541914.localdomain sudo[69608]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-axcpxzkibordsiqicjqmnngtqrptxqrr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:17:54 np0005541914.localdomain sudo[69608]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:55 np0005541914.localdomain sudo[69608]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:55 np0005541914.localdomain sudo[69624]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-okiovnyzqqmearuddpiziuzwpesilspg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:17:55 np0005541914.localdomain sudo[69624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:55 np0005541914.localdomain python3[69626]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:17:55 np0005541914.localdomain sudo[69624]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:56 np0005541914.localdomain sudo[69674]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qarmjssntwofjjefahlyugyeurworlxk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:17:56 np0005541914.localdomain sudo[69674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:56 np0005541914.localdomain python3[69676]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:17:56 np0005541914.localdomain sudo[69674]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:56 np0005541914.localdomain sudo[69692]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kxcwmtiarjfyoducqhpsfdxeujurbarz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:17:56 np0005541914.localdomain sudo[69692]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:56 np0005541914.localdomain python3[69694]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:17:56 np0005541914.localdomain sudo[69692]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:56 np0005541914.localdomain sudo[69754]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eorwbafkbkzaxkzweajbtkytebazffme ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:17:56 np0005541914.localdomain sudo[69754]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:57 np0005541914.localdomain python3[69756]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:17:57 np0005541914.localdomain sudo[69754]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:57 np0005541914.localdomain sudo[69772]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mzkmnityexjrzznuknhvnutnekjztmap ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:17:57 np0005541914.localdomain sudo[69772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:57 np0005541914.localdomain python3[69774]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:17:57 np0005541914.localdomain sudo[69772]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:57 np0005541914.localdomain sudo[69834]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwlwyzftfvzeiindpqsjfrhnrfgfbvtx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:17:57 np0005541914.localdomain sudo[69834]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:58 np0005541914.localdomain python3[69836]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:17:58 np0005541914.localdomain sudo[69834]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:58 np0005541914.localdomain sudo[69852]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ikayezavgdxcqtpqbxrndwgumtcynwmb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:17:58 np0005541914.localdomain sudo[69852]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:58 np0005541914.localdomain python3[69854]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:17:58 np0005541914.localdomain sudo[69852]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:58 np0005541914.localdomain sudo[69914]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-itshxdvumiptjfwmahriqlrljmcakhtg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:17:58 np0005541914.localdomain sudo[69914]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:58 np0005541914.localdomain python3[69916]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:17:58 np0005541914.localdomain sudo[69914]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:59 np0005541914.localdomain sudo[69932]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rkxymqfpqxgpifkxkmmxzawxxjnlkkry ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:17:59 np0005541914.localdomain sudo[69932]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:59 np0005541914.localdomain python3[69934]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:17:59 np0005541914.localdomain sudo[69932]: pam_unix(sudo:session): session closed for user root
Dec 02 08:17:59 np0005541914.localdomain sudo[69962]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gxzwottlcbysngbnlnawcawmevwrlmpe ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:17:59 np0005541914.localdomain sudo[69962]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:17:59 np0005541914.localdomain python3[69964]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:17:59 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:17:59 np0005541914.localdomain systemd-rc-local-generator[69988]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:17:59 np0005541914.localdomain systemd-sysv-generator[69991]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:17:59 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:18:00 np0005541914.localdomain sudo[69962]: pam_unix(sudo:session): session closed for user root
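[Editor's note] The ansible-systemd task at 08:17:59 (name=tripleo-container-shutdown, state=started, enabled=True, daemon_reload=True) is what triggers the systemd 'Reloading.' pass above. A simplified sketch of the same three operations with plain systemctl calls (hypothetical helper, not the module itself):

    #!/usr/bin/env python3
    # Hypothetical equivalent of the ansible-systemd invocation above:
    # daemon-reload, then enable and start the unit.
    import subprocess

    def enable_and_start(unit: str) -> None:
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "start", unit], check=True)

    if __name__ == "__main__":
        enable_and_start("tripleo-container-shutdown")

The netns-placeholder task that follows at 08:18:02 goes through the same daemon-reload/enable/start sequence for its own unit.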
Dec 02 08:18:00 np0005541914.localdomain sudo[70048]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fimywrkftgczuysmvizafequqazkwtgr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:00 np0005541914.localdomain sudo[70048]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:00 np0005541914.localdomain python3[70050]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:18:00 np0005541914.localdomain sudo[70048]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:00 np0005541914.localdomain sudo[70066]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dydhitxochhfkczbkkenjyqskxzryhaj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:00 np0005541914.localdomain sudo[70066]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:00 np0005541914.localdomain python3[70068]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:00 np0005541914.localdomain sudo[70066]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:01 np0005541914.localdomain sudo[70128]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jklvmqwndrmjgodecukwzrqzwicnjbgp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:01 np0005541914.localdomain sudo[70128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:01 np0005541914.localdomain python3[70130]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:18:01 np0005541914.localdomain sudo[70128]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:01 np0005541914.localdomain sudo[70146]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utlnnnuliprjukijkyxcmclpmankdgel ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:01 np0005541914.localdomain sudo[70146]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:01 np0005541914.localdomain python3[70148]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:01 np0005541914.localdomain sudo[70146]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:01 np0005541914.localdomain sudo[70176]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lljvhajwyfmtztafezknrotbcpzaqopo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:01 np0005541914.localdomain sudo[70176]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:02 np0005541914.localdomain python3[70178]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:18:02 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:18:02 np0005541914.localdomain systemd-rc-local-generator[70204]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:18:02 np0005541914.localdomain systemd-sysv-generator[70209]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:18:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:18:02 np0005541914.localdomain systemd[1]: Starting Create netns directory...
Dec 02 08:18:02 np0005541914.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 02 08:18:02 np0005541914.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 02 08:18:02 np0005541914.localdomain systemd[1]: Finished Create netns directory.
Dec 02 08:18:02 np0005541914.localdomain sudo[70176]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:03 np0005541914.localdomain sudo[70233]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjsmswfhyjitxtvamkghrnsobcisfxef ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:03 np0005541914.localdomain sudo[70233]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:03 np0005541914.localdomain python3[70235]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 02 08:18:03 np0005541914.localdomain sudo[70233]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:03 np0005541914.localdomain sudo[70249]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zqixrhygoxswkgxodzlcogdrdqmtxrql ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:03 np0005541914.localdomain sudo[70249]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:18:03 np0005541914.localdomain podman[70252]: 2025-12-02 08:18:03.661469086 +0000 UTC m=+0.099340764 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-qdrouterd, tcib_managed=true, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, version=17.1.12)
Dec 02 08:18:03 np0005541914.localdomain podman[70252]: 2025-12-02 08:18:03.881539748 +0000 UTC m=+0.319411496 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 02 08:18:03 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:18:04 np0005541914.localdomain sudo[70249]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:05 np0005541914.localdomain sudo[70319]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zdpanudcmqecwomdfsbhjonagzqfoyhs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:05 np0005541914.localdomain sudo[70319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:05 np0005541914.localdomain python3[70321]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
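[Editor's note] tripleo_container_manage is invoked with config_id=tripleo_step4, reading every *.json startup config under /var/lib/tripleo-config/container-startup-config/step_4 and creating the containers that follow (ceilometer_agent_ipmi, nova_libvirt_init_secret, configure_cms_options, ...). A minimal sketch of just the "discover this step's container definitions" part, assuming the common layout of one JSON file per container named after it (the helper is hypothetical; the real module also handles idempotency, start ordering and stdout logging):

    #!/usr/bin/env python3
    # Hypothetical reader for the per-step container startup configs that
    # tripleo_container_manage consumes: one JSON config file per container,
    # with the filename stem taken as the container name.
    import glob
    import json
    import os

    CONFIG_DIR = "/var/lib/tripleo-config/container-startup-config/step_4"

    def load_step_configs(config_dir: str = CONFIG_DIR) -> dict:
        configs = {}
        for path in sorted(glob.glob(os.path.join(config_dir, "*.json"))):
            name = os.path.splitext(os.path.basename(path))[0]
            with open(path, encoding="utf-8") as f:
                configs[name] = json.load(f)
        return configs

    if __name__ == "__main__":
        for name in load_step_configs():
            print(name)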
Dec 02 08:18:05 np0005541914.localdomain podman[70466]: 2025-12-02 08:18:05.988067692 +0000 UTC m=+0.080236319 container create a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libpod-conmon-a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.scope.
Dec 02 08:18:06 np0005541914.localdomain podman[70492]: 2025-12-02 08:18:06.029599656 +0000 UTC m=+0.085064790 container create 01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, version=17.1.12, build-date=2025-11-19T00:35:22Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, tcib_managed=true, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 02 08:18:06 np0005541914.localdomain podman[70466]: 2025-12-02 08:18:05.943970679 +0000 UTC m=+0.036139336 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 02 08:18:06 np0005541914.localdomain podman[70504]: 2025-12-02 08:18:06.057206225 +0000 UTC m=+0.092306875 container create e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.component=openstack-ovn-controller-container, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step4, container_name=configure_cms_options, version=17.1.12)
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:18:06 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06b9618ea7afeaba672d022a7f469c1b4fb954818b2395f63391bb50912ecbb/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libpod-conmon-01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69.scope.
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libpod-conmon-e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3.scope.
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:18:06 np0005541914.localdomain podman[70492]: 2025-12-02 08:18:05.982025144 +0000 UTC m=+0.037490268 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Dec 02 08:18:06 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78/merged/etc/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:18:06 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:18:06 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:18:06 np0005541914.localdomain podman[70466]: 2025-12-02 08:18:06.096716385 +0000 UTC m=+0.188885002 container init a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc.)
Dec 02 08:18:06 np0005541914.localdomain podman[70493]: 2025-12-02 08:18:06.097216721 +0000 UTC m=+0.147690980 container create 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, release=1761123044, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:18:06 np0005541914.localdomain podman[70504]: 2025-12-02 08:18:06.020061378 +0000 UTC m=+0.055162078 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:18:06 np0005541914.localdomain sudo[70571]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:18:06 np0005541914.localdomain sudo[70571]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libpod-conmon-814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.scope.
Dec 02 08:18:06 np0005541914.localdomain podman[70466]: 2025-12-02 08:18:06.130217958 +0000 UTC m=+0.222386595 container start a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:18:06 np0005541914.localdomain python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=885e9e62222ac12bce952717b40ccfc4 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:18:06 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0089ea487a0d5fd991d7e6cecf5db6fae8c1b61a42816d2acbe202fbd50d575/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Dec 02 08:18:06 np0005541914.localdomain podman[70493]: 2025-12-02 08:18:06.04677323 +0000 UTC m=+0.097247499 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 02 08:18:06 np0005541914.localdomain podman[70492]: 2025-12-02 08:18:06.151070467 +0000 UTC m=+0.206535571 container init 01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, container_name=nova_libvirt_init_secret, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, config_id=tripleo_step4, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.)
Dec 02 08:18:06 np0005541914.localdomain podman[70492]: 2025-12-02 08:18:06.158435537 +0000 UTC m=+0.213900641 container start 01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, build-date=2025-11-19T00:35:22Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, name=rhosp17/openstack-nova-libvirt, container_name=nova_libvirt_init_secret, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1)
Dec 02 08:18:06 np0005541914.localdomain podman[70492]: 2025-12-02 08:18:06.158635503 +0000 UTC m=+0.214100607 container attach 01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, build-date=2025-11-19T00:35:22Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step4, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, container_name=nova_libvirt_init_secret, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-libvirt, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team)
Dec 02 08:18:06 np0005541914.localdomain podman[70539]: 2025-12-02 08:18:06.060415795 +0000 UTC m=+0.045197818 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:18:06 np0005541914.localdomain podman[70493]: 2025-12-02 08:18:06.171508123 +0000 UTC m=+0.221982412 container init 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container)
Dec 02 08:18:06 np0005541914.localdomain sudo[70571]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:06 np0005541914.localdomain sudo[70606]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:18:06 np0005541914.localdomain sudo[70606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:18:06 np0005541914.localdomain podman[70573]: 2025-12-02 08:18:06.195661786 +0000 UTC m=+0.062764495 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12)
Dec 02 08:18:06 np0005541914.localdomain podman[70539]: 2025-12-02 08:18:06.210571599 +0000 UTC m=+0.195353592 container create 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, container_name=logrotate_crond, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libpod-conmon-7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.scope.
Dec 02 08:18:06 np0005541914.localdomain podman[70493]: 2025-12-02 08:18:06.253695743 +0000 UTC m=+0.304169992 container start 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com)
Dec 02 08:18:06 np0005541914.localdomain python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=885e9e62222ac12bce952717b40ccfc4 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Dec 02 08:18:06 np0005541914.localdomain sudo[70606]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:18:06 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4bf0a50fd432b1e17b5b60f382aa20fe21251bda35e0089667eec28efb9c70f/merged/var/log/containers supports timestamps until 2038 (0x7fffffff)
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:18:06 np0005541914.localdomain podman[70613]: 2025-12-02 08:18:06.29823758 +0000 UTC m=+0.096049332 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:11:48Z, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute)
Dec 02 08:18:06 np0005541914.localdomain podman[70573]: 2025-12-02 08:18:06.298969272 +0000 UTC m=+0.166071991 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:18:06 np0005541914.localdomain podman[70539]: 2025-12-02 08:18:06.299041444 +0000 UTC m=+0.283823427 container init 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-cron, release=1761123044, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 02 08:18:06 np0005541914.localdomain podman[70573]: unhealthy
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'.
Dec 02 08:18:06 np0005541914.localdomain podman[70504]: 2025-12-02 08:18:06.309566762 +0000 UTC m=+0.344667402 container init e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, container_name=configure_cms_options, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, build-date=2025-11-18T23:34:05Z, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, name=rhosp17/openstack-ovn-controller, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, architecture=x86_64)
Dec 02 08:18:06 np0005541914.localdomain podman[70504]: 2025-12-02 08:18:06.321432011 +0000 UTC m=+0.356532651 container start e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, container_name=configure_cms_options, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:18:06 np0005541914.localdomain podman[70504]: 2025-12-02 08:18:06.321729451 +0000 UTC m=+0.356830111 container attach e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=configure_cms_options, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 02 08:18:06 np0005541914.localdomain sudo[70672]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: libpod-01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69.scope: Deactivated successfully.
Dec 02 08:18:06 np0005541914.localdomain sudo[70672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:18:06 np0005541914.localdomain podman[70539]: 2025-12-02 08:18:06.328847143 +0000 UTC m=+0.313629136 container start 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4)
Dec 02 08:18:06 np0005541914.localdomain podman[70492]: 2025-12-02 08:18:06.333302471 +0000 UTC m=+0.388767565 container died 01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_libvirt_init_secret, tcib_managed=true, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']})
Dec 02 08:18:06 np0005541914.localdomain python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Dec 02 08:18:06 np0005541914.localdomain sudo[70672]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:06 np0005541914.localdomain crond[70671]: (CRON) STARTUP (1.5.7)
Dec 02 08:18:06 np0005541914.localdomain crond[70671]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 48% if used.)
Dec 02 08:18:06 np0005541914.localdomain crond[70671]: (CRON) INFO (running with inotify support)
Dec 02 08:18:06 np0005541914.localdomain podman[70678]: 2025-12-02 08:18:06.411466174 +0000 UTC m=+0.073452427 container cleanup 01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_libvirt_init_secret, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.4)
Dec 02 08:18:06 np0005541914.localdomain ovs-vsctl[70710]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: libpod-conmon-01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69.scope: Deactivated successfully.
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: libpod-e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3.scope: Deactivated successfully.
Dec 02 08:18:06 np0005541914.localdomain python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack
Dec 02 08:18:06 np0005541914.localdomain podman[70504]: 2025-12-02 08:18:06.421359153 +0000 UTC m=+0.456459793 container died e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T23:34:05Z, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=configure_cms_options, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:18:06 np0005541914.localdomain podman[70613]: 2025-12-02 08:18:06.437101762 +0000 UTC m=+0.234913494 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 02 08:18:06 np0005541914.localdomain podman[70613]: unhealthy
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Failed with result 'exit-code'.
Dec 02 08:18:06 np0005541914.localdomain podman[70676]: 2025-12-02 08:18:06.502609103 +0000 UTC m=+0.168185488 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, release=1761123044, architecture=x86_64, container_name=logrotate_crond, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:18:06 np0005541914.localdomain podman[70676]: 2025-12-02 08:18:06.511040005 +0000 UTC m=+0.176616420 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64)
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:18:06 np0005541914.localdomain podman[70714]: 2025-12-02 08:18:06.55457504 +0000 UTC m=+0.115851888 container cleanup e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, managed_by=tripleo_ansible, container_name=configure_cms_options, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: libpod-conmon-e116e95591203fdc7f3a4b3a13962cfe84ce654738a9eb088956becbc4c1e1c3.scope: Deactivated successfully.
Dec 02 08:18:06 np0005541914.localdomain python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi
Dec 02 08:18:06 np0005541914.localdomain podman[70825]: 2025-12-02 08:18:06.715556823 +0000 UTC m=+0.078833916 container create f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, tcib_managed=true, release=1761123044)
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libpod-conmon-f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.scope.
Dec 02 08:18:06 np0005541914.localdomain podman[70825]: 2025-12-02 08:18:06.678152507 +0000 UTC m=+0.041429600 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:18:06 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/becbc927e1a2defd8b98f9313e9ae54e436a645a48c9af865764923e7f3644aa/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:18:06 np0005541914.localdomain podman[70825]: 2025-12-02 08:18:06.83498407 +0000 UTC m=+0.198261163 container init f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=nova_migration_target, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=)
Dec 02 08:18:06 np0005541914.localdomain sudo[70899]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:18:06 np0005541914.localdomain sudo[70899]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:18:06 np0005541914.localdomain podman[70825]: 2025-12-02 08:18:06.892326176 +0000 UTC m=+0.255603299 container start f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, release=1761123044, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Dec 02 08:18:06 np0005541914.localdomain python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 02 08:18:06 np0005541914.localdomain podman[70871]: 2025-12-02 08:18:06.905898329 +0000 UTC m=+0.146226325 container create 9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=setup_ovs_manager, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, version=17.1.12)
Dec 02 08:18:06 np0005541914.localdomain sudo[70899]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:06 np0005541914.localdomain podman[70871]: 2025-12-02 08:18:06.846322673 +0000 UTC m=+0.086650729 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libpod-conmon-9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b.scope.
Dec 02 08:18:06 np0005541914.localdomain sshd[70922]: Server listening on 0.0.0.0 port 2022.
Dec 02 08:18:06 np0005541914.localdomain sshd[70922]: Server listening on :: port 2022.
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:18:06 np0005541914.localdomain podman[70871]: 2025-12-02 08:18:06.983660039 +0000 UTC m=+0.223987995 container init 9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=setup_ovs_manager, version=17.1.12, managed_by=tripleo_ansible, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-type=git)
Dec 02 08:18:06 np0005541914.localdomain podman[70871]: 2025-12-02 08:18:06.99651592 +0000 UTC m=+0.236843896 container start 9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, build-date=2025-11-19T00:14:25Z, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=setup_ovs_manager, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78-merged.mount: Deactivated successfully.
Dec 02 08:18:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01fa9ed7c38a9533f171c79267fdee2e0f06716a6e7cb04d371acb30af6b0e69-userdata-shm.mount: Deactivated successfully.
Dec 02 08:18:07 np0005541914.localdomain podman[70871]: 2025-12-02 08:18:07.004495558 +0000 UTC m=+0.244823614 container attach 9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, url=https://www.redhat.com, container_name=setup_ovs_manager, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1)
Dec 02 08:18:07 np0005541914.localdomain systemd[1]: tmp-crun.QX3SdV.mount: Deactivated successfully.
Dec 02 08:18:07 np0005541914.localdomain podman[70901]: 2025-12-02 08:18:07.012445996 +0000 UTC m=+0.111656248 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Dec 02 08:18:07 np0005541914.localdomain sudo[70957]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpp53g8jwe/privsep.sock
Dec 02 08:18:07 np0005541914.localdomain sudo[70957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 02 08:18:07 np0005541914.localdomain podman[70901]: 2025-12-02 08:18:07.362441613 +0000 UTC m=+0.461651895 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, container_name=nova_migration_target, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, tcib_managed=true, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible)
Dec 02 08:18:07 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:18:07 np0005541914.localdomain kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 02 08:18:07 np0005541914.localdomain sudo[70957]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:09 np0005541914.localdomain ovs-vsctl[71082]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: libpod-9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b.scope: Deactivated successfully.
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: libpod-9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b.scope: Consumed 2.996s CPU time.
Dec 02 08:18:10 np0005541914.localdomain podman[70871]: 2025-12-02 08:18:10.106668251 +0000 UTC m=+3.346996257 container died 9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=setup_ovs_manager, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b-userdata-shm.mount: Deactivated successfully.
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a962ed19f38fa02a2bde769e5b1e4ad9f81e2456610cd4047cfb92b422afb6bb-merged.mount: Deactivated successfully.
Dec 02 08:18:10 np0005541914.localdomain podman[71083]: 2025-12-02 08:18:10.219923748 +0000 UTC m=+0.101022196 container cleanup 9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=setup_ovs_manager, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: libpod-conmon-9e2f64a754345c5abed7c4f14afaed370ed35e857158b80af95000e9458ab27b.scope: Deactivated successfully.
Dec 02 08:18:10 np0005541914.localdomain python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1764661676 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1764661676'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata
Dec 02 08:18:10 np0005541914.localdomain podman[71194]: 2025-12-02 08:18:10.712691369 +0000 UTC m=+0.100644444 container create 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64)
Dec 02 08:18:10 np0005541914.localdomain podman[71200]: 2025-12-02 08:18:10.72814618 +0000 UTC m=+0.095576376 container create b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 02 08:18:10 np0005541914.localdomain podman[71194]: 2025-12-02 08:18:10.659316367 +0000 UTC m=+0.047269502 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 02 08:18:10 np0005541914.localdomain podman[71200]: 2025-12-02 08:18:10.673588702 +0000 UTC m=+0.041018918 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: Started libpod-conmon-b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.scope.
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:18:10 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d25cd45e405537f342915e53026fb2ea6ae337ec52f5b72439f9a37d98e6337/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 02 08:18:10 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d25cd45e405537f342915e53026fb2ea6ae337ec52f5b72439f9a37d98e6337/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff)
Dec 02 08:18:10 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d25cd45e405537f342915e53026fb2ea6ae337ec52f5b72439f9a37d98e6337/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff)
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: Started libpod-conmon-6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.scope.
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:18:10 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a895fb8ef70030e2b27c789af81d44f745a1833cc8dfd0936f4f5302c8f5799a/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 08:18:10 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a895fb8ef70030e2b27c789af81d44f745a1833cc8dfd0936f4f5302c8f5799a/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff)
Dec 02 08:18:10 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a895fb8ef70030e2b27c789af81d44f745a1833cc8dfd0936f4f5302c8f5799a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:18:10 np0005541914.localdomain podman[71200]: 2025-12-02 08:18:10.849009193 +0000 UTC m=+0.216439369 container init b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:18:10 np0005541914.localdomain podman[71194]: 2025-12-02 08:18:10.873997861 +0000 UTC m=+0.261950956 container init 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:18:10 np0005541914.localdomain podman[71200]: 2025-12-02 08:18:10.895423919 +0000 UTC m=+0.262854125 container start b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:18:10 np0005541914.localdomain sudo[71234]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:18:10 np0005541914.localdomain sudo[71234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Dec 02 08:18:10 np0005541914.localdomain python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Dec 02 08:18:10 np0005541914.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: Created slice User Slice of UID 0.
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:18:10 np0005541914.localdomain podman[71194]: 2025-12-02 08:18:10.950883166 +0000 UTC m=+0.338836201 container start 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Dec 02 08:18:10 np0005541914.localdomain python3[70321]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=6b6de39672ef4d892f2e8f81f38c430b --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 02 08:18:10 np0005541914.localdomain systemd[1]: Starting User Manager for UID 0...
Dec 02 08:18:10 np0005541914.localdomain sudo[71234]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:18:11 np0005541914.localdomain podman[71235]: 2025-12-02 08:18:11.015209117 +0000 UTC m=+0.105806924 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044)
Dec 02 08:18:11 np0005541914.localdomain podman[71235]: 2025-12-02 08:18:11.05318302 +0000 UTC m=+0.143780837 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, tcib_managed=true, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container)
Dec 02 08:18:11 np0005541914.localdomain podman[71235]: unhealthy
Dec 02 08:18:11 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:18:11 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:18:11 np0005541914.localdomain podman[71253]: 2025-12-02 08:18:11.094952161 +0000 UTC m=+0.135944014 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible)
Dec 02 08:18:11 np0005541914.localdomain podman[71253]: 2025-12-02 08:18:11.106708846 +0000 UTC m=+0.147700699 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 02 08:18:11 np0005541914.localdomain podman[71253]: unhealthy
Dec 02 08:18:11 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:18:11 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Queued start job for default target Main User Target.
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Created slice User Application Slice.
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Started Daily Cleanup of User's Temporary Directories.
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Reached target Paths.
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Reached target Timers.
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Starting D-Bus User Message Bus Socket...
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Starting Create User's Volatile Files and Directories...
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Finished Create User's Volatile Files and Directories.
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Listening on D-Bus User Message Bus Socket.
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Reached target Sockets.
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Reached target Basic System.
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Reached target Main User Target.
Dec 02 08:18:11 np0005541914.localdomain systemd[71261]: Startup finished in 146ms.
Dec 02 08:18:11 np0005541914.localdomain systemd[1]: Started User Manager for UID 0.
Dec 02 08:18:11 np0005541914.localdomain systemd[1]: Started Session c9 of User root.
Dec 02 08:18:11 np0005541914.localdomain systemd[1]: session-c9.scope: Deactivated successfully.
Dec 02 08:18:11 np0005541914.localdomain kernel: device br-int entered promiscuous mode
Dec 02 08:18:11 np0005541914.localdomain NetworkManager[5967]: <info>  [1764663491.2866] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11)
Dec 02 08:18:11 np0005541914.localdomain sudo[70319]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:11 np0005541914.localdomain systemd-udevd[71346]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 08:18:11 np0005541914.localdomain systemd-udevd[71349]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 08:18:11 np0005541914.localdomain NetworkManager[5967]: <info>  [1764663491.3276] device (genev_sys_6081): carrier: link connected
Dec 02 08:18:11 np0005541914.localdomain NetworkManager[5967]: <info>  [1764663491.3280] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12)
Dec 02 08:18:11 np0005541914.localdomain kernel: device genev_sys_6081 entered promiscuous mode
Dec 02 08:18:11 np0005541914.localdomain sudo[71366]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbepxmzhpxdcuhwfqsycxxcibcllvowh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:11 np0005541914.localdomain sudo[71366]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:11 np0005541914.localdomain python3[71368]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:11 np0005541914.localdomain sudo[71366]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:11 np0005541914.localdomain sudo[71382]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-txjajfsaxwnhuzoruzcobjbpmjtgrwmm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:11 np0005541914.localdomain sudo[71382]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:11 np0005541914.localdomain python3[71384]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:12 np0005541914.localdomain sudo[71382]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:12 np0005541914.localdomain sudo[71398]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-appusfooenbomayigulqwszytzqngalk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:12 np0005541914.localdomain sudo[71398]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:12 np0005541914.localdomain python3[71400]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:12 np0005541914.localdomain sudo[71398]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:12 np0005541914.localdomain sudo[71414]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ezixzuujltagkhmvzvqpggtxbijvnuzr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:12 np0005541914.localdomain sudo[71414]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:12 np0005541914.localdomain python3[71416]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:12 np0005541914.localdomain sudo[71414]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:12 np0005541914.localdomain sudo[71418]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --privsep_context neutron.privileged.default --privsep_sock_path /tmp/tmpjro3yy3b/privsep.sock
Dec 02 08:18:12 np0005541914.localdomain sudo[71418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Dec 02 08:18:12 np0005541914.localdomain sudo[71433]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dzbknuigzqyevzovkrpxrrrofkvfytyv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:12 np0005541914.localdomain sudo[71433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:12 np0005541914.localdomain python3[71435]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:12 np0005541914.localdomain sudo[71433]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:12 np0005541914.localdomain sudo[71450]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hcgvgmpuvxiavkhmvrpgjyabejsquwsl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:12 np0005541914.localdomain sudo[71450]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:13 np0005541914.localdomain python3[71452]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:13 np0005541914.localdomain sudo[71450]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:13 np0005541914.localdomain sudo[71466]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ugpoaomtvexzgrankpyjqluncwzkduuo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:13 np0005541914.localdomain sudo[71466]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:13 np0005541914.localdomain sudo[71418]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:13 np0005541914.localdomain python3[71468]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:18:13 np0005541914.localdomain sudo[71466]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:13 np0005541914.localdomain sudo[71484]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sziefzkbmesvllacaomvfakrtdzvizfz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:13 np0005541914.localdomain sudo[71484]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:13 np0005541914.localdomain python3[71486]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:18:13 np0005541914.localdomain sudo[71484]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:13 np0005541914.localdomain sudo[71502]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tkhsjqobexczqqkwgocfbbpqknsxmhxa ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:13 np0005541914.localdomain sudo[71502]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:13 np0005541914.localdomain python3[71504]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:18:13 np0005541914.localdomain sudo[71502]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:14 np0005541914.localdomain sudo[71518]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-irbtrpsodrnbarqzcgpceetuaureqmmr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:14 np0005541914.localdomain sudo[71518]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:14 np0005541914.localdomain python3[71520]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:18:14 np0005541914.localdomain sudo[71518]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:14 np0005541914.localdomain sudo[71534]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bfzwiowpqxiddrnqjyrqrlszgvclmsnc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:14 np0005541914.localdomain sudo[71534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:14 np0005541914.localdomain python3[71536]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:18:14 np0005541914.localdomain sudo[71534]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:14 np0005541914.localdomain sudo[71550]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ajhejqtwhpfrxuiwcircgjaylrzqaeul ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:14 np0005541914.localdomain sudo[71550]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:14 np0005541914.localdomain python3[71552]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:18:14 np0005541914.localdomain sudo[71550]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:15 np0005541914.localdomain sudo[71611]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cdzxgkmycgscxhyshdafmottraobllpt ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:15 np0005541914.localdomain sudo[71611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:15 np0005541914.localdomain python3[71613]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.7649007-108792-245862602601178/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:15 np0005541914.localdomain sudo[71611]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:15 np0005541914.localdomain sudo[71640]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-adlwkkihpisygksbenflacqftvwjztng ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:15 np0005541914.localdomain sudo[71640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:15 np0005541914.localdomain python3[71642]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.7649007-108792-245862602601178/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:15 np0005541914.localdomain sudo[71640]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:16 np0005541914.localdomain sudo[71669]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xkdcpmzxjlpwhswfpcoueiuqudwonyma ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:16 np0005541914.localdomain sudo[71669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:16 np0005541914.localdomain python3[71671]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.7649007-108792-245862602601178/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:16 np0005541914.localdomain sudo[71669]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:16 np0005541914.localdomain sudo[71698]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gujpgahduzyztylpdnxsddxehysvkkxo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:16 np0005541914.localdomain sudo[71698]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:16 np0005541914.localdomain python3[71700]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.7649007-108792-245862602601178/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:16 np0005541914.localdomain sudo[71698]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:17 np0005541914.localdomain sudo[71727]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mddznukvibuvwytpfbglncknhlpxipjg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:17 np0005541914.localdomain sudo[71727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:17 np0005541914.localdomain python3[71729]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.7649007-108792-245862602601178/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:17 np0005541914.localdomain sudo[71727]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:17 np0005541914.localdomain sudo[71756]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjtzqyknyckekzpyjpwienlduickygff ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:17 np0005541914.localdomain sudo[71756]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:17 np0005541914.localdomain python3[71758]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663494.7649007-108792-245862602601178/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:17 np0005541914.localdomain sudo[71756]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:17 np0005541914.localdomain sudo[71772]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjscdpxjkwczulgsiaduuprpmnyqtjpi ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:17 np0005541914.localdomain sudo[71772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:18 np0005541914.localdomain python3[71774]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 08:18:18 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:18:18 np0005541914.localdomain systemd-sysv-generator[71800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:18:18 np0005541914.localdomain systemd-rc-local-generator[71795]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:18:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:18:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:18:18 np0005541914.localdomain sudo[71772]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:18 np0005541914.localdomain systemd[1]: tmp-crun.6tA5F9.mount: Deactivated successfully.
Dec 02 08:18:18 np0005541914.localdomain podman[71811]: 2025-12-02 08:18:18.615541837 +0000 UTC m=+0.071314392 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:18:18 np0005541914.localdomain podman[71811]: 2025-12-02 08:18:18.624847067 +0000 UTC m=+0.080619612 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-collectd, tcib_managed=true, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:18:18 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:18:19 np0005541914.localdomain sudo[71844]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfuoytcbcwiroxjnshtdelgjsydakwms ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:19 np0005541914.localdomain sudo[71844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:19 np0005541914.localdomain python3[71846]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:18:19 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:18:19 np0005541914.localdomain systemd-rc-local-generator[71874]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:18:19 np0005541914.localdomain systemd-sysv-generator[71880]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:18:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:18:19 np0005541914.localdomain systemd[1]: Starting ceilometer_agent_compute container...
Dec 02 08:18:20 np0005541914.localdomain tripleo-start-podman-container[71887]: Creating additional drop-in dependency for "ceilometer_agent_compute" (814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae)
Dec 02 08:18:20 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:18:20 np0005541914.localdomain systemd-rc-local-generator[71942]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:18:20 np0005541914.localdomain systemd-sysv-generator[71945]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:18:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:18:20 np0005541914.localdomain systemd[1]: Started ceilometer_agent_compute container.
Dec 02 08:18:20 np0005541914.localdomain sudo[71844]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:20 np0005541914.localdomain sudo[71969]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xefnoaacszlasdefrvuxncyaxhtktfao ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:20 np0005541914.localdomain sudo[71969]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:18:20 np0005541914.localdomain systemd[1]: tmp-crun.pcLGRx.mount: Deactivated successfully.
Dec 02 08:18:20 np0005541914.localdomain podman[71971]: 2025-12-02 08:18:20.865527378 +0000 UTC m=+0.091488629 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:18:20 np0005541914.localdomain podman[71971]: 2025-12-02 08:18:20.878909004 +0000 UTC m=+0.104870275 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=)
Dec 02 08:18:20 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:18:21 np0005541914.localdomain python3[71972]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:18:21 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:18:21 np0005541914.localdomain systemd-rc-local-generator[72019]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:18:21 np0005541914.localdomain systemd-sysv-generator[72023]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:18:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:18:21 np0005541914.localdomain systemd[1]: Stopping User Manager for UID 0...
Dec 02 08:18:21 np0005541914.localdomain systemd[71261]: Activating special unit Exit the Session...
Dec 02 08:18:21 np0005541914.localdomain systemd[71261]: Stopped target Main User Target.
Dec 02 08:18:21 np0005541914.localdomain systemd[71261]: Stopped target Basic System.
Dec 02 08:18:21 np0005541914.localdomain systemd[71261]: Stopped target Paths.
Dec 02 08:18:21 np0005541914.localdomain systemd[71261]: Stopped target Sockets.
Dec 02 08:18:21 np0005541914.localdomain systemd[71261]: Stopped target Timers.
Dec 02 08:18:21 np0005541914.localdomain systemd[71261]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 02 08:18:21 np0005541914.localdomain systemd[71261]: Closed D-Bus User Message Bus Socket.
Dec 02 08:18:21 np0005541914.localdomain systemd[71261]: Stopped Create User's Volatile Files and Directories.
Dec 02 08:18:21 np0005541914.localdomain systemd[71261]: Removed slice User Application Slice.
Dec 02 08:18:21 np0005541914.localdomain systemd[71261]: Reached target Shutdown.
Dec 02 08:18:21 np0005541914.localdomain systemd[71261]: Finished Exit the Session.
Dec 02 08:18:21 np0005541914.localdomain systemd[71261]: Reached target Exit the Session.
Dec 02 08:18:21 np0005541914.localdomain systemd[1]: Starting ceilometer_agent_ipmi container...
Dec 02 08:18:21 np0005541914.localdomain systemd[1]: user@0.service: Deactivated successfully.
Dec 02 08:18:21 np0005541914.localdomain systemd[1]: Stopped User Manager for UID 0.
Dec 02 08:18:21 np0005541914.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 02 08:18:21 np0005541914.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 02 08:18:21 np0005541914.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 02 08:18:21 np0005541914.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 02 08:18:21 np0005541914.localdomain systemd[1]: Removed slice User Slice of UID 0.
Dec 02 08:18:21 np0005541914.localdomain systemd[1]: Started ceilometer_agent_ipmi container.
Dec 02 08:18:21 np0005541914.localdomain sudo[71969]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:21 np0005541914.localdomain sudo[72055]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfcrxvfkuwhvxwkmytpncubptwfuhroy ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:21 np0005541914.localdomain sudo[72055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:22 np0005541914.localdomain python3[72057]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:18:22 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:18:22 np0005541914.localdomain systemd-sysv-generator[72083]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:18:22 np0005541914.localdomain systemd-rc-local-generator[72079]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:18:22 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:18:22 np0005541914.localdomain systemd[1]: Starting logrotate_crond container...
Dec 02 08:18:22 np0005541914.localdomain systemd[1]: Started logrotate_crond container.
Dec 02 08:18:22 np0005541914.localdomain sudo[72055]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:22 np0005541914.localdomain sudo[72121]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wwedbyspbcjpbokxvqgklxaskgbfhusx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:22 np0005541914.localdomain sudo[72121]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:23 np0005541914.localdomain python3[72123]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:18:24 np0005541914.localdomain sshd[72125]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:18:24 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:18:24 np0005541914.localdomain systemd-sysv-generator[72155]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:18:24 np0005541914.localdomain systemd-rc-local-generator[72149]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:18:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:18:24 np0005541914.localdomain systemd[1]: Starting nova_migration_target container...
Dec 02 08:18:24 np0005541914.localdomain systemd[1]: Started nova_migration_target container.
Dec 02 08:18:24 np0005541914.localdomain sudo[72121]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:25 np0005541914.localdomain sudo[72191]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tfxudytqnkdcogygrzosladrrscjadlj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:25 np0005541914.localdomain sudo[72191]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:25 np0005541914.localdomain python3[72193]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:18:25 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:18:25 np0005541914.localdomain systemd-rc-local-generator[72219]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:18:25 np0005541914.localdomain systemd-sysv-generator[72224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:18:25 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:18:25 np0005541914.localdomain systemd[1]: Starting ovn_controller container...
Dec 02 08:18:26 np0005541914.localdomain tripleo-start-podman-container[72233]: Creating additional drop-in dependency for "ovn_controller" (b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d)
Dec 02 08:18:26 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:18:26 np0005541914.localdomain systemd-sysv-generator[72293]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:18:26 np0005541914.localdomain systemd-rc-local-generator[72288]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:18:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:18:26 np0005541914.localdomain systemd[1]: Started ovn_controller container.
Dec 02 08:18:26 np0005541914.localdomain sudo[72191]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:26 np0005541914.localdomain sshd[72125]: Received disconnect from 103.52.115.25 port 37356:11: Bye Bye [preauth]
Dec 02 08:18:26 np0005541914.localdomain sshd[72125]: Disconnected from authenticating user root 103.52.115.25 port 37356 [preauth]
Dec 02 08:18:26 np0005541914.localdomain sudo[72315]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ayrygjoywxfofjmmajecmqbrtjjzjgra ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:18:26 np0005541914.localdomain sudo[72315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:26 np0005541914.localdomain python3[72317]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:18:28 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:18:28 np0005541914.localdomain systemd-rc-local-generator[72342]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:18:28 np0005541914.localdomain systemd-sysv-generator[72347]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:18:28 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:18:28 np0005541914.localdomain systemd[1]: Starting ovn_metadata_agent container...
Dec 02 08:18:28 np0005541914.localdomain systemd[1]: Started ovn_metadata_agent container.
Dec 02 08:18:28 np0005541914.localdomain sudo[72315]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:28 np0005541914.localdomain sudo[72397]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gduhvtszyhjwygzfkocgnkqgwvdycwfx ; /usr/bin/python3
Dec 02 08:18:28 np0005541914.localdomain sudo[72397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:29 np0005541914.localdomain python3[72399]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:29 np0005541914.localdomain sudo[72397]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:29 np0005541914.localdomain sudo[72446]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uonojkepjbremticautwivguqebgspsr ; /usr/bin/python3
Dec 02 08:18:29 np0005541914.localdomain sudo[72446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:29 np0005541914.localdomain sudo[72446]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:30 np0005541914.localdomain sudo[72489]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ybxdpzyokxbgzldnknwsvpzbtbxjdldn ; /usr/bin/python3
Dec 02 08:18:30 np0005541914.localdomain sudo[72489]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:30 np0005541914.localdomain sudo[72489]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:30 np0005541914.localdomain sudo[72494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:18:30 np0005541914.localdomain sudo[72494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:18:30 np0005541914.localdomain sudo[72494]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:30 np0005541914.localdomain sudo[72521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:18:30 np0005541914.localdomain sudo[72521]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:18:30 np0005541914.localdomain sudo[72549]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-terojyhxokkkgyllbqaszoxlbmoofnxk ; /usr/bin/python3
Dec 02 08:18:30 np0005541914.localdomain sudo[72549]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:30 np0005541914.localdomain python3[72551]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005541914 step=4 update_config_hash_only=False
Dec 02 08:18:30 np0005541914.localdomain sudo[72549]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:31 np0005541914.localdomain sudo[72521]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:31 np0005541914.localdomain sudo[72597]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqkxzsonkyokenzadqgglwodtkjzggkh ; /usr/bin/python3
Dec 02 08:18:31 np0005541914.localdomain sudo[72597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:31 np0005541914.localdomain python3[72599]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:18:31 np0005541914.localdomain sudo[72597]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:31 np0005541914.localdomain sudo[72613]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ctavhxgzjeztxrkdcmrtbzsmvlfqjosz ; /usr/bin/python3
Dec 02 08:18:31 np0005541914.localdomain sudo[72613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:18:31 np0005541914.localdomain python3[72615]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 02 08:18:31 np0005541914.localdomain sudo[72613]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:32 np0005541914.localdomain sudo[72616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:18:32 np0005541914.localdomain sudo[72616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:18:32 np0005541914.localdomain sudo[72616]: pam_unix(sudo:session): session closed for user root
Dec 02 08:18:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:18:34 np0005541914.localdomain podman[72631]: 2025-12-02 08:18:34.055606379 +0000 UTC m=+0.064325884 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, release=1761123044, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4)
Dec 02 08:18:34 np0005541914.localdomain podman[72631]: 2025-12-02 08:18:34.267899519 +0000 UTC m=+0.276619014 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, container_name=metrics_qdr, version=17.1.12, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:18:34 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:18:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:18:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:18:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:18:37 np0005541914.localdomain podman[72662]: 2025-12-02 08:18:37.088271118 +0000 UTC m=+0.096150935 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, name=rhosp17/openstack-cron, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, build-date=2025-11-18T22:49:32Z, release=1761123044, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64)
Dec 02 08:18:37 np0005541914.localdomain podman[72663]: 2025-12-02 08:18:37.140318338 +0000 UTC m=+0.145126140 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:18:37 np0005541914.localdomain podman[72662]: 2025-12-02 08:18:37.153311513 +0000 UTC m=+0.161191340 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:18:37 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:18:37 np0005541914.localdomain podman[72663]: 2025-12-02 08:18:37.200015777 +0000 UTC m=+0.204823609 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 02 08:18:37 np0005541914.localdomain systemd[1]: tmp-crun.rcGtXz.mount: Deactivated successfully.
Dec 02 08:18:37 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:18:37 np0005541914.localdomain podman[72664]: 2025-12-02 08:18:37.221441864 +0000 UTC m=+0.226366349 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true)
Dec 02 08:18:37 np0005541914.localdomain podman[72664]: 2025-12-02 08:18:37.278899633 +0000 UTC m=+0.283824068 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:12:45Z, release=1761123044, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:18:37 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:18:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:18:38 np0005541914.localdomain podman[72733]: 2025-12-02 08:18:38.073490141 +0000 UTC m=+0.079614249 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Dec 02 08:18:38 np0005541914.localdomain podman[72733]: 2025-12-02 08:18:38.472966579 +0000 UTC m=+0.479090687 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1)
Dec 02 08:18:38 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:18:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:18:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:18:42 np0005541914.localdomain podman[72757]: 2025-12-02 08:18:42.132292879 +0000 UTC m=+0.134688974 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:18:42 np0005541914.localdomain podman[72756]: 2025-12-02 08:18:42.097580168 +0000 UTC m=+0.103693819 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Dec 02 08:18:42 np0005541914.localdomain podman[72756]: 2025-12-02 08:18:42.178047374 +0000 UTC m=+0.184161035 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:18:42 np0005541914.localdomain podman[72757]: 2025-12-02 08:18:42.182948396 +0000 UTC m=+0.185344521 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, version=17.1.12, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, container_name=ovn_controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, architecture=x86_64, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 02 08:18:42 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:18:42 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:18:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 08:18:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 08:18:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:18:49 np0005541914.localdomain podman[72806]: 2025-12-02 08:18:49.067936694 +0000 UTC m=+0.075432290 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, container_name=collectd, distribution-scope=public, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=)
Dec 02 08:18:49 np0005541914.localdomain podman[72806]: 2025-12-02 08:18:49.083792438 +0000 UTC m=+0.091288004 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, container_name=collectd, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64)
Dec 02 08:18:49 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:18:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:18:51 np0005541914.localdomain podman[72825]: 2025-12-02 08:18:51.079212793 +0000 UTC m=+0.082121258 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044)
Dec 02 08:18:51 np0005541914.localdomain podman[72825]: 2025-12-02 08:18:51.088370648 +0000 UTC m=+0.091279163 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:18:51 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:18:52 np0005541914.localdomain sshd[72844]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:18:53 np0005541914.localdomain sshd[72844]: Received disconnect from 182.253.156.173 port 45452:11: Bye Bye [preauth]
Dec 02 08:18:53 np0005541914.localdomain sshd[72844]: Disconnected from authenticating user root 182.253.156.173 port 45452 [preauth]
Dec 02 08:19:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:19:05 np0005541914.localdomain systemd[1]: tmp-crun.h34PZI.mount: Deactivated successfully.
Dec 02 08:19:05 np0005541914.localdomain podman[72846]: 2025-12-02 08:19:05.091379991 +0000 UTC m=+0.091040932 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, container_name=metrics_qdr, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.buildah.version=1.41.4)
Dec 02 08:19:05 np0005541914.localdomain podman[72846]: 2025-12-02 08:19:05.299955201 +0000 UTC m=+0.299616102 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container)
Dec 02 08:19:05 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:19:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:19:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:19:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:19:08 np0005541914.localdomain podman[72876]: 2025-12-02 08:19:08.077784837 +0000 UTC m=+0.081851144 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, version=17.1.12, build-date=2025-11-18T22:49:32Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:19:08 np0005541914.localdomain podman[72876]: 2025-12-02 08:19:08.092015542 +0000 UTC m=+0.096081809 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Dec 02 08:19:08 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:19:08 np0005541914.localdomain podman[72878]: 2025-12-02 08:19:08.142924056 +0000 UTC m=+0.141806041 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, version=17.1.12, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi)
Dec 02 08:19:08 np0005541914.localdomain systemd[1]: tmp-crun.YAKjVw.mount: Deactivated successfully.
Dec 02 08:19:08 np0005541914.localdomain podman[72877]: 2025-12-02 08:19:08.190792795 +0000 UTC m=+0.192582111 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=)
Dec 02 08:19:08 np0005541914.localdomain podman[72878]: 2025-12-02 08:19:08.203906376 +0000 UTC m=+0.202788321 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, build-date=2025-11-19T00:12:45Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:19:08 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:19:08 np0005541914.localdomain podman[72877]: 2025-12-02 08:19:08.226932066 +0000 UTC m=+0.228721362 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:19:08 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:19:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:19:09 np0005541914.localdomain podman[72947]: 2025-12-02 08:19:09.077979463 +0000 UTC m=+0.081689619 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc.)
Dec 02 08:19:09 np0005541914.localdomain podman[72947]: 2025-12-02 08:19:09.453746968 +0000 UTC m=+0.457457074 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:19:09 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:19:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:19:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:19:13 np0005541914.localdomain podman[72970]: 2025-12-02 08:19:13.079812703 +0000 UTC m=+0.084123045 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1)
Dec 02 08:19:13 np0005541914.localdomain systemd[1]: tmp-crun.hitcFA.mount: Deactivated successfully.
Dec 02 08:19:13 np0005541914.localdomain podman[72971]: 2025-12-02 08:19:13.150423894 +0000 UTC m=+0.150425311 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 02 08:19:13 np0005541914.localdomain podman[72970]: 2025-12-02 08:19:13.153892523 +0000 UTC m=+0.158202895 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:19:13 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:19:13 np0005541914.localdomain podman[72971]: 2025-12-02 08:19:13.174845848 +0000 UTC m=+0.174847275 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true)
Dec 02 08:19:13 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:19:18 np0005541914.localdomain sshd[73018]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:19:19 np0005541914.localdomain sshd[73018]: Invalid user sol from 45.148.10.240 port 33252
Dec 02 08:19:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:19:19 np0005541914.localdomain sshd[73018]: Connection closed by invalid user sol 45.148.10.240 port 33252 [preauth]
Dec 02 08:19:19 np0005541914.localdomain podman[73020]: 2025-12-02 08:19:19.322484245 +0000 UTC m=+0.079719108 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Dec 02 08:19:19 np0005541914.localdomain podman[73020]: 2025-12-02 08:19:19.335932016 +0000 UTC m=+0.093166879 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, name=rhosp17/openstack-collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3)
Dec 02 08:19:19 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:19:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:19:22 np0005541914.localdomain podman[73041]: 2025-12-02 08:19:22.084400232 +0000 UTC m=+0.077373404 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container)
Dec 02 08:19:22 np0005541914.localdomain podman[73041]: 2025-12-02 08:19:22.094706604 +0000 UTC m=+0.087679796 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, architecture=x86_64, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com)
Dec 02 08:19:22 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:19:32 np0005541914.localdomain sudo[73060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:19:32 np0005541914.localdomain sudo[73060]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:19:32 np0005541914.localdomain sudo[73060]: pam_unix(sudo:session): session closed for user root
Dec 02 08:19:32 np0005541914.localdomain sudo[73075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 08:19:32 np0005541914.localdomain sudo[73075]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:19:33 np0005541914.localdomain systemd[1]: tmp-crun.dwqlo0.mount: Deactivated successfully.
Dec 02 08:19:33 np0005541914.localdomain podman[73163]: 2025-12-02 08:19:33.413701087 +0000 UTC m=+0.093743796 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_BRANCH=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 08:19:33 np0005541914.localdomain podman[73163]: 2025-12-02 08:19:33.550980916 +0000 UTC m=+0.231023575 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, RELEASE=main, ceph=True, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 02 08:19:33 np0005541914.localdomain sudo[73075]: pam_unix(sudo:session): session closed for user root
Dec 02 08:19:33 np0005541914.localdomain sudo[73232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:19:33 np0005541914.localdomain sudo[73232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:19:33 np0005541914.localdomain sudo[73232]: pam_unix(sudo:session): session closed for user root
Dec 02 08:19:33 np0005541914.localdomain sudo[73247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:19:33 np0005541914.localdomain sudo[73247]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:19:34 np0005541914.localdomain sudo[73247]: pam_unix(sudo:session): session closed for user root
Dec 02 08:19:35 np0005541914.localdomain sudo[73294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:19:35 np0005541914.localdomain sudo[73294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:19:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:19:35 np0005541914.localdomain sudo[73294]: pam_unix(sudo:session): session closed for user root
Dec 02 08:19:35 np0005541914.localdomain systemd[1]: tmp-crun.NL9gPW.mount: Deactivated successfully.
Dec 02 08:19:35 np0005541914.localdomain podman[73309]: 2025-12-02 08:19:35.684529667 +0000 UTC m=+0.106931009 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64)
Dec 02 08:19:35 np0005541914.localdomain podman[73309]: 2025-12-02 08:19:35.857745521 +0000 UTC m=+0.280146813 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, container_name=metrics_qdr, batch=17.1_20251118.1, io.openshift.expose-services=)
Dec 02 08:19:35 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:19:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:19:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:19:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:19:39 np0005541914.localdomain systemd[1]: tmp-crun.QvvgA2.mount: Deactivated successfully.
Dec 02 08:19:39 np0005541914.localdomain podman[73338]: 2025-12-02 08:19:39.09176757 +0000 UTC m=+0.092382763 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., container_name=logrotate_crond, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:19:39 np0005541914.localdomain podman[73338]: 2025-12-02 08:19:39.100673469 +0000 UTC m=+0.101288622 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:19:39 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:19:39 np0005541914.localdomain podman[73340]: 2025-12-02 08:19:39.145835683 +0000 UTC m=+0.140535131 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, tcib_managed=true, vcs-type=git)
Dec 02 08:19:39 np0005541914.localdomain podman[73339]: 2025-12-02 08:19:39.193405892 +0000 UTC m=+0.190517826 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git)
Dec 02 08:19:39 np0005541914.localdomain podman[73340]: 2025-12-02 08:19:39.202770565 +0000 UTC m=+0.197470013 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=)
Dec 02 08:19:39 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:19:39 np0005541914.localdomain podman[73339]: 2025-12-02 08:19:39.21889011 +0000 UTC m=+0.216002004 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:11:48Z, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=)
Dec 02 08:19:39 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:19:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:19:40 np0005541914.localdomain systemd[1]: tmp-crun.cv4XJR.mount: Deactivated successfully.
Dec 02 08:19:40 np0005541914.localdomain podman[73412]: 2025-12-02 08:19:40.084930146 +0000 UTC m=+0.092762775 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:19:40 np0005541914.localdomain podman[73412]: 2025-12-02 08:19:40.454917281 +0000 UTC m=+0.462749920 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:19:40 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:19:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:19:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:19:44 np0005541914.localdomain podman[73437]: 2025-12-02 08:19:44.087050294 +0000 UTC m=+0.092375244 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible)
Dec 02 08:19:44 np0005541914.localdomain systemd[1]: tmp-crun.admLKV.mount: Deactivated successfully.
Dec 02 08:19:44 np0005541914.localdomain podman[73437]: 2025-12-02 08:19:44.139876699 +0000 UTC m=+0.145201608 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, container_name=ovn_metadata_agent)
Dec 02 08:19:44 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:19:44 np0005541914.localdomain podman[73438]: 2025-12-02 08:19:44.14056638 +0000 UTC m=+0.142876685 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, tcib_managed=true, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ovn_controller)
Dec 02 08:19:44 np0005541914.localdomain podman[73438]: 2025-12-02 08:19:44.220564355 +0000 UTC m=+0.222874690 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, release=1761123044, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:19:44 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:19:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:19:50 np0005541914.localdomain podman[73484]: 2025-12-02 08:19:50.067581628 +0000 UTC m=+0.072638735 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, config_id=tripleo_step3, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team)
Dec 02 08:19:50 np0005541914.localdomain podman[73484]: 2025-12-02 08:19:50.078974065 +0000 UTC m=+0.084031182 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:19:50 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:19:51 np0005541914.localdomain sshd[73503]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:19:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:19:53 np0005541914.localdomain podman[73505]: 2025-12-02 08:19:53.074761224 +0000 UTC m=+0.080500201 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 02 08:19:53 np0005541914.localdomain podman[73505]: 2025-12-02 08:19:53.082980342 +0000 UTC m=+0.088719339 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_id=tripleo_step3, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z)
Dec 02 08:19:53 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:19:55 np0005541914.localdomain sshd[73503]: Connection closed by 103.52.115.25 port 52296 [preauth]
Dec 02 08:19:57 np0005541914.localdomain sshd[73525]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:19:58 np0005541914.localdomain sshd[73525]: Invalid user ubuntu from 43.225.159.111 port 33780
Dec 02 08:19:58 np0005541914.localdomain sshd[73525]: Received disconnect from 43.225.159.111 port 33780:11:  [preauth]
Dec 02 08:19:58 np0005541914.localdomain sshd[73525]: Disconnected from invalid user ubuntu 43.225.159.111 port 33780 [preauth]
Dec 02 08:20:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:20:06 np0005541914.localdomain podman[73527]: 2025-12-02 08:20:06.077776609 +0000 UTC m=+0.084709243 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, container_name=metrics_qdr, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12)
Dec 02 08:20:06 np0005541914.localdomain podman[73527]: 2025-12-02 08:20:06.292977447 +0000 UTC m=+0.299910131 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:20:06 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:20:07 np0005541914.localdomain sshd[73556]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:20:08 np0005541914.localdomain sshd[73556]: Invalid user controlm from 182.253.156.173 port 54080
Dec 02 08:20:08 np0005541914.localdomain sshd[73556]: Received disconnect from 182.253.156.173 port 54080:11: Bye Bye [preauth]
Dec 02 08:20:08 np0005541914.localdomain sshd[73556]: Disconnected from invalid user controlm 182.253.156.173 port 54080 [preauth]
Dec 02 08:20:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:20:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:20:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:20:10 np0005541914.localdomain podman[73560]: 2025-12-02 08:20:10.090020334 +0000 UTC m=+0.083310129 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec 02 08:20:10 np0005541914.localdomain systemd[1]: tmp-crun.LGlGvB.mount: Deactivated successfully.
Dec 02 08:20:10 np0005541914.localdomain podman[73559]: 2025-12-02 08:20:10.141038672 +0000 UTC m=+0.138396714 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible)
Dec 02 08:20:10 np0005541914.localdomain podman[73560]: 2025-12-02 08:20:10.14478874 +0000 UTC m=+0.138078545 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:20:10 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:20:10 np0005541914.localdomain podman[73559]: 2025-12-02 08:20:10.163524076 +0000 UTC m=+0.160882138 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, distribution-scope=public, release=1761123044)
Dec 02 08:20:10 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:20:10 np0005541914.localdomain podman[73558]: 2025-12-02 08:20:10.181351234 +0000 UTC m=+0.183977481 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:20:10 np0005541914.localdomain podman[73558]: 2025-12-02 08:20:10.19177039 +0000 UTC m=+0.194396647 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:20:10 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:20:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:20:11 np0005541914.localdomain podman[73628]: 2025-12-02 08:20:11.071290578 +0000 UTC m=+0.075579147 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute)
Dec 02 08:20:11 np0005541914.localdomain podman[73628]: 2025-12-02 08:20:11.402799979 +0000 UTC m=+0.407088558 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, release=1761123044, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, url=https://www.redhat.com)
Dec 02 08:20:11 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:20:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:20:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:20:15 np0005541914.localdomain systemd[1]: tmp-crun.4LTOtR.mount: Deactivated successfully.
Dec 02 08:20:15 np0005541914.localdomain podman[73652]: 2025-12-02 08:20:15.09075552 +0000 UTC m=+0.094765508 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:20:15 np0005541914.localdomain podman[73653]: 2025-12-02 08:20:15.14246765 +0000 UTC m=+0.144908599 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4)
Dec 02 08:20:15 np0005541914.localdomain podman[73653]: 2025-12-02 08:20:15.166019527 +0000 UTC m=+0.168460516 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:20:15 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:20:15 np0005541914.localdomain podman[73652]: 2025-12-02 08:20:15.218135119 +0000 UTC m=+0.222145117 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:20:15 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:20:16 np0005541914.localdomain systemd[1]: tmp-crun.KWfnNJ.mount: Deactivated successfully.
Dec 02 08:20:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:20:21 np0005541914.localdomain podman[73701]: 2025-12-02 08:20:21.070510979 +0000 UTC m=+0.074334679 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-18T22:51:28Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git)
Dec 02 08:20:21 np0005541914.localdomain podman[73701]: 2025-12-02 08:20:21.106881118 +0000 UTC m=+0.110704798 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:20:21 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:20:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:20:24 np0005541914.localdomain podman[73721]: 2025-12-02 08:20:24.07370397 +0000 UTC m=+0.078502049 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git)
Dec 02 08:20:24 np0005541914.localdomain podman[73721]: 2025-12-02 08:20:24.083013881 +0000 UTC m=+0.087811980 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, release=1761123044, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12)
Dec 02 08:20:24 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:20:35 np0005541914.localdomain sudo[73741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:20:35 np0005541914.localdomain sudo[73741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:20:35 np0005541914.localdomain sudo[73741]: pam_unix(sudo:session): session closed for user root
Dec 02 08:20:35 np0005541914.localdomain sudo[73756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:20:35 np0005541914.localdomain sudo[73756]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:20:36 np0005541914.localdomain sudo[73756]: pam_unix(sudo:session): session closed for user root
Dec 02 08:20:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:20:37 np0005541914.localdomain podman[73805]: 2025-12-02 08:20:37.060370872 +0000 UTC m=+0.068649440 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container)
Dec 02 08:20:37 np0005541914.localdomain podman[73805]: 2025-12-02 08:20:37.236919119 +0000 UTC m=+0.245197717 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Dec 02 08:20:37 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:20:37 np0005541914.localdomain sudo[73832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:20:37 np0005541914.localdomain sudo[73832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:20:37 np0005541914.localdomain sudo[73832]: pam_unix(sudo:session): session closed for user root
Dec 02 08:20:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:20:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:20:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:20:41 np0005541914.localdomain systemd[1]: tmp-crun.O5nO6l.mount: Deactivated successfully.
Dec 02 08:20:41 np0005541914.localdomain podman[73848]: 2025-12-02 08:20:41.083995215 +0000 UTC m=+0.082912667 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044)
Dec 02 08:20:41 np0005541914.localdomain podman[73847]: 2025-12-02 08:20:41.093068729 +0000 UTC m=+0.096339537 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=logrotate_crond, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:32Z)
Dec 02 08:20:41 np0005541914.localdomain podman[73847]: 2025-12-02 08:20:41.131919295 +0000 UTC m=+0.135190073 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:20:41 np0005541914.localdomain podman[73848]: 2025-12-02 08:20:41.140783323 +0000 UTC m=+0.139700765 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, container_name=ceilometer_agent_compute, version=17.1.12, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc.)
Dec 02 08:20:41 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:20:41 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:20:41 np0005541914.localdomain podman[73849]: 2025-12-02 08:20:41.224872966 +0000 UTC m=+0.225514232 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1)
Dec 02 08:20:41 np0005541914.localdomain podman[73849]: 2025-12-02 08:20:41.259099577 +0000 UTC m=+0.259740813 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:20:41 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:20:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:20:42 np0005541914.localdomain podman[73918]: 2025-12-02 08:20:42.088429453 +0000 UTC m=+0.084744754 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=nova_migration_target, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 02 08:20:42 np0005541914.localdomain podman[73918]: 2025-12-02 08:20:42.49488135 +0000 UTC m=+0.491196691 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, container_name=nova_migration_target, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64)
Dec 02 08:20:42 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:20:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:20:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:20:46 np0005541914.localdomain systemd[1]: tmp-crun.wUpiQt.mount: Deactivated successfully.
Dec 02 08:20:46 np0005541914.localdomain podman[73941]: 2025-12-02 08:20:46.072952971 +0000 UTC m=+0.079137990 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, distribution-scope=public, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4)
Dec 02 08:20:46 np0005541914.localdomain podman[73941]: 2025-12-02 08:20:46.115351968 +0000 UTC m=+0.121536917 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:20:46 np0005541914.localdomain podman[73942]: 2025-12-02 08:20:46.123247276 +0000 UTC m=+0.122705614 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 02 08:20:46 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:20:46 np0005541914.localdomain podman[73942]: 2025-12-02 08:20:46.144760669 +0000 UTC m=+0.144219007 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:20:46 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:20:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:20:51 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:20:52 np0005541914.localdomain recover_tripleo_nova_virtqemud[73991]: 61907
Dec 02 08:20:52 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:20:52 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:20:52 np0005541914.localdomain podman[73989]: 2025-12-02 08:20:52.075303907 +0000 UTC m=+0.076960020 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, container_name=collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 02 08:20:52 np0005541914.localdomain podman[73989]: 2025-12-02 08:20:52.08434955 +0000 UTC m=+0.086005633 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., release=1761123044, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12)
Dec 02 08:20:52 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:20:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:20:55 np0005541914.localdomain podman[74011]: 2025-12-02 08:20:55.081898536 +0000 UTC m=+0.085827019 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, managed_by=tripleo_ansible, distribution-scope=public, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., config_id=tripleo_step3, architecture=x86_64, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, url=https://www.redhat.com)
Dec 02 08:20:55 np0005541914.localdomain podman[74011]: 2025-12-02 08:20:55.118228962 +0000 UTC m=+0.122157435 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible)
Dec 02 08:20:55 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:21:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:21:08 np0005541914.localdomain podman[74028]: 2025-12-02 08:21:08.089674038 +0000 UTC m=+0.086845220 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:21:08 np0005541914.localdomain podman[74028]: 2025-12-02 08:21:08.299686044 +0000 UTC m=+0.296857166 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=metrics_qdr, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:21:08 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:21:08 np0005541914.localdomain sshd[74057]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:21:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:21:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:21:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:21:12 np0005541914.localdomain podman[74061]: 2025-12-02 08:21:12.100971164 +0000 UTC m=+0.095116889 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1)
Dec 02 08:21:12 np0005541914.localdomain podman[74059]: 2025-12-02 08:21:12.069511708 +0000 UTC m=+0.068418032 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=logrotate_crond, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12)
Dec 02 08:21:12 np0005541914.localdomain podman[74060]: 2025-12-02 08:21:12.120193026 +0000 UTC m=+0.116006834 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute)
Dec 02 08:21:12 np0005541914.localdomain podman[74059]: 2025-12-02 08:21:12.152146486 +0000 UTC m=+0.151052860 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, version=17.1.12, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 02 08:21:12 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:21:12 np0005541914.localdomain podman[74060]: 2025-12-02 08:21:12.204243048 +0000 UTC m=+0.200056836 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, architecture=x86_64)
Dec 02 08:21:12 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:21:12 np0005541914.localdomain podman[74061]: 2025-12-02 08:21:12.258749944 +0000 UTC m=+0.252895669 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:21:12 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:21:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:21:13 np0005541914.localdomain podman[74133]: 2025-12-02 08:21:13.094927206 +0000 UTC m=+0.095892785 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:21:13 np0005541914.localdomain podman[74133]: 2025-12-02 08:21:13.471875507 +0000 UTC m=+0.472841076 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Dec 02 08:21:13 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:21:13 np0005541914.localdomain sshd[74057]: Received disconnect from 103.52.115.25 port 52866:11: Bye Bye [preauth]
Dec 02 08:21:13 np0005541914.localdomain sshd[74057]: Disconnected from authenticating user root 103.52.115.25 port 52866 [preauth]
Dec 02 08:21:16 np0005541914.localdomain sudo[74201]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-crrxikbwfjcelzareubwhsouqcuxybmy ; /usr/bin/python3
Dec 02 08:21:16 np0005541914.localdomain sudo[74201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:16 np0005541914.localdomain python3[74203]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:21:16 np0005541914.localdomain sudo[74201]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:16 np0005541914.localdomain sudo[74246]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdkasqhetrzpyszerrshqmmxnhyrglik ; /usr/bin/python3
Dec 02 08:21:16 np0005541914.localdomain sudo[74246]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:21:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:21:16 np0005541914.localdomain systemd[1]: tmp-crun.KKXNrS.mount: Deactivated successfully.
Dec 02 08:21:16 np0005541914.localdomain podman[74249]: 2025-12-02 08:21:16.60703118 +0000 UTC m=+0.096119231 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=)
Dec 02 08:21:16 np0005541914.localdomain systemd[1]: tmp-crun.bx80YS.mount: Deactivated successfully.
Dec 02 08:21:16 np0005541914.localdomain podman[74250]: 2025-12-02 08:21:16.6536517 +0000 UTC m=+0.140051806 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, container_name=ovn_controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:21:16 np0005541914.localdomain python3[74248]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663675.8580468-113049-90246068038572/source _original_basename=tmpzfs4mg0j follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:21:16 np0005541914.localdomain podman[74250]: 2025-12-02 08:21:16.675391471 +0000 UTC m=+0.161791567 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64)
Dec 02 08:21:16 np0005541914.localdomain sudo[74246]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:16 np0005541914.localdomain podman[74249]: 2025-12-02 08:21:16.68718142 +0000 UTC m=+0.176269411 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team)
Dec 02 08:21:16 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:21:16 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:21:17 np0005541914.localdomain sudo[74323]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mnruxjgueylmkwibimkmfdykboxsrddf ; /usr/bin/python3
Dec 02 08:21:17 np0005541914.localdomain sudo[74323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:17 np0005541914.localdomain python3[74325]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:21:17 np0005541914.localdomain sudo[74323]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:18 np0005541914.localdomain sudo[74373]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sqofbgywwszehabkewizfigenynbbylm ; /usr/bin/python3
Dec 02 08:21:18 np0005541914.localdomain sudo[74373]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:18 np0005541914.localdomain sudo[74373]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:18 np0005541914.localdomain sudo[74391]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwbpwogdizittwalvijwsqgmjixkmkoj ; /usr/bin/python3
Dec 02 08:21:18 np0005541914.localdomain sudo[74391]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:18 np0005541914.localdomain sudo[74391]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:19 np0005541914.localdomain sudo[74495]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-opfmwefuzmpamzhirevfspkwzhidnbqy ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663678.8946633-113244-167853700649424/async_wrapper.py 980276114034 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663678.8946633-113244-167853700649424/AnsiballZ_command.py _
Dec 02 08:21:19 np0005541914.localdomain sudo[74495]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 02 08:21:19 np0005541914.localdomain ansible-async_wrapper.py[74497]: Invoked with 980276114034 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663678.8946633-113244-167853700649424/AnsiballZ_command.py _
Dec 02 08:21:19 np0005541914.localdomain ansible-async_wrapper.py[74500]: Starting module and watcher
Dec 02 08:21:19 np0005541914.localdomain ansible-async_wrapper.py[74500]: Start watching 74501 (3600)
Dec 02 08:21:19 np0005541914.localdomain ansible-async_wrapper.py[74501]: Start module (74501)
Dec 02 08:21:19 np0005541914.localdomain ansible-async_wrapper.py[74497]: Return async_wrapper task started.
Dec 02 08:21:19 np0005541914.localdomain sudo[74495]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:19 np0005541914.localdomain sudo[74516]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yxkesfnuoyrvlaycwkmmrytzidlnjynu ; /usr/bin/python3
Dec 02 08:21:19 np0005541914.localdomain sudo[74516]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:19 np0005541914.localdomain python3[74521]: ansible-ansible.legacy.async_status Invoked with jid=980276114034.74497 mode=status _async_dir=/tmp/.ansible_async
Dec 02 08:21:19 np0005541914.localdomain sudo[74516]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:21:22 np0005541914.localdomain podman[74576]: 2025-12-02 08:21:22.421526745 +0000 UTC m=+0.103241854 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:21:22 np0005541914.localdomain podman[74576]: 2025-12-02 08:21:22.432117457 +0000 UTC m=+0.113832526 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, tcib_managed=true, config_id=tripleo_step3, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Dec 02 08:21:22 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:    (file: /etc/puppet/hiera.yaml)
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]: Warning: Undefined variable '::deploy_config_name';
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:    (file & line not available)
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:    (file & line not available)
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:                     with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:                     with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:                     with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:                     with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:                     with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
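[editor's note] The warnings above all come from stdlib's deprecation() helper and point at the same call sites: hiera() instead of lookup(), and the old validate_* functions in the snmp and tripleo modules. A quick way to inventory every such call before migrating to typed parameters is to scan the manifest tree for the deprecated names; a minimal sketch in Python, where the scanned path and the function list are taken from the warnings above and everything else (script name, regex) is illustrative:

    #!/usr/bin/env python3
    """Inventory deprecated Puppet function calls flagged in the warnings above."""
    import re
    from pathlib import Path

    # Function names reported as deprecated by the puppet-user warnings.
    DEPRECATED = re.compile(
        r"\b(hiera|validate_string|validate_bool|validate_array|validate_numeric|validate_re)\s*\(")

    def scan(module_root: str = "/etc/puppet/modules"):
        hits = []
        for manifest in Path(module_root).rglob("*.pp"):
            for lineno, line in enumerate(manifest.read_text(errors="replace").splitlines(), 1):
                if DEPRECATED.search(line):
                    hits.append((str(manifest), lineno, line.strip()))
        return hits

    if __name__ == "__main__":
        for path, lineno, line in scan():
            print(f"{path}:{lineno}: {line}")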
Dec 02 08:21:23 np0005541914.localdomain puppet-user[74520]: Notice: Compiled catalog for np0005541914.localdomain in environment production in 0.29 seconds
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]: Notice: Applied catalog in 0.32 seconds
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]: Application:
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:    Initial environment: production
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:    Converged environment: production
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:          Run mode: user
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]: Changes:
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]: Events:
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]: Resources:
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:             Total: 19
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]: Time:
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:        Filebucket: 0.00
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:           Package: 0.00
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:          Schedule: 0.00
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:              Exec: 0.01
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:            Augeas: 0.01
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:              File: 0.02
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:           Service: 0.08
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:    Transaction evaluation: 0.31
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:    Catalog application: 0.32
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:    Config retrieval: 0.36
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:          Last run: 1764663684
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:             Total: 0.33
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]: Version:
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:            Config: 1764663683
Dec 02 08:21:24 np0005541914.localdomain puppet-user[74520]:            Puppet: 7.10.0
Dec 02 08:21:24 np0005541914.localdomain ansible-async_wrapper.py[74501]: Module complete (74501)
Dec 02 08:21:24 np0005541914.localdomain ansible-async_wrapper.py[74500]: Done in kid B.
Dec 02 08:21:25 np0005541914.localdomain sshd[74664]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:21:25 np0005541914.localdomain sshd[74664]: Invalid user sol from 45.148.10.240 port 37152
Dec 02 08:21:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:21:25 np0005541914.localdomain sshd[74664]: Connection closed by invalid user sol 45.148.10.240 port 37152 [preauth]
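[editor's note] The two sshd lines above (an "Invalid user" probe from 45.148.10.240 that closes before authentication) are routine scanner noise, and they can be aggregated straight from the journal. A minimal sketch, assuming journalctl is available and sshd keeps logging in the "Invalid user NAME from ADDR port PORT" form shown above; the helper name is illustrative:

    #!/usr/bin/env python3
    """Count invalid-user SSH probes per source address from the systemd journal."""
    import re
    import subprocess
    from collections import Counter

    PATTERN = re.compile(r"Invalid user (\S+) from (\S+) port (\d+)")

    def invalid_user_counts() -> Counter:
        # -t sshd limits output to the sshd syslog identifier; -o cat drops the prefix.
        out = subprocess.run(["journalctl", "-t", "sshd", "-o", "cat", "--no-pager"],
                             capture_output=True, text=True, check=True).stdout
        counts = Counter()
        for line in out.splitlines():
            m = PATTERN.search(line)
            if m:
                counts[m.group(2)] += 1   # key by source address
        return counts

    if __name__ == "__main__":
        for addr, n in invalid_user_counts().most_common(10):
            print(f"{n:6d}  {addr}")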
Dec 02 08:21:25 np0005541914.localdomain podman[74666]: 2025-12-02 08:21:25.671202614 +0000 UTC m=+0.069215049 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, vcs-type=git)
Dec 02 08:21:25 np0005541914.localdomain podman[74666]: 2025-12-02 08:21:25.682436236 +0000 UTC m=+0.080448621 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, release=1761123044, distribution-scope=public, tcib_managed=true, container_name=iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1)
Dec 02 08:21:25 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
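[editor's note] Each "Started /usr/bin/podman healthcheck run <id>" / health_status / exec_died / "Deactivated successfully" group above is one pass of the systemd-driven healthcheck timer: a transient unit execs `podman healthcheck run` against the container, podman runs the configured test ('/openstack/healthcheck' per the config_data shown), and the exit status determines the reported health. A minimal sketch of invoking the same check by hand; the container name is taken from the log, and treating a zero exit code as "healthy" is the assumption:

    #!/usr/bin/env python3
    """Run a container's configured healthcheck the same way the transient unit does."""
    import subprocess

    def healthcheck(container: str = "iscsid") -> bool:
        # `podman healthcheck run` executes the container's healthcheck command
        # and exits 0 when the container reports healthy.
        result = subprocess.run(["podman", "healthcheck", "run", container])
        return result.returncode == 0

    if __name__ == "__main__":
        print("healthy" if healthcheck() else "unhealthy")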
Dec 02 08:21:30 np0005541914.localdomain sudo[74700]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jgmncnkfhpuotzjikyforjdtcjpaovju ; /usr/bin/python3
Dec 02 08:21:30 np0005541914.localdomain sudo[74700]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:30 np0005541914.localdomain python3[74702]: ansible-ansible.legacy.async_status Invoked with jid=980276114034.74497 mode=status _async_dir=/tmp/.ansible_async
Dec 02 08:21:30 np0005541914.localdomain sudo[74700]: pam_unix(sudo:session): session closed for user root
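[editor's note] "Module complete" / "Done in kid B." from ansible-async_wrapper.py followed by the async_status call with jid=980276114034.74497 and _async_dir=/tmp/.ansible_async is Ansible's standard async pattern: the wrapper writes a JSON status file named after the job id, and async_status polls it until it carries "finished". A minimal sketch of reading such a status file directly, assuming the file-per-jid layout described; the helper itself is illustrative:

    #!/usr/bin/env python3
    """Peek at an Ansible async job's status file, as async_status does."""
    import json
    from pathlib import Path

    def async_result(jid: str, async_dir: str = "/tmp/.ansible_async") -> dict:
        # The wrapper keeps one JSON file per job id under the async directory.
        status_file = Path(async_dir) / jid
        return json.loads(status_file.read_text())

    if __name__ == "__main__":
        result = async_result("980276114034.74497")
        print("finished" if result.get("finished") else "still running", result.get("rc"))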
Dec 02 08:21:30 np0005541914.localdomain sudo[74716]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-csyuzosqsppwjtapmsylaxtkteowxttd ; /usr/bin/python3
Dec 02 08:21:30 np0005541914.localdomain sudo[74716]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:30 np0005541914.localdomain python3[74718]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 08:21:30 np0005541914.localdomain sudo[74716]: pam_unix(sudo:session): session closed for user root
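[editor's note] The ansible-file task above recursively applies setype=svirt_sandbox_file_t, selevel=s0 to /var/lib/container-puppet/puppetlabs so the directory can later be bind-mounted into containers. Outside of Ansible the same relabel can be expressed with chcon; a minimal sketch where the path and SELinux context come from the task above and the wrapper function is illustrative:

    #!/usr/bin/env python3
    """Recursively relabel a directory for container bind-mounting, like the ansible-file task above."""
    import subprocess

    def relabel(path: str = "/var/lib/container-puppet/puppetlabs",
                setype: str = "svirt_sandbox_file_t", selevel: str = "s0") -> None:
        # chcon changes the SELinux type (-t) and level (-l) in place; -R recurses like recurse=True.
        subprocess.run(["chcon", "-R", "-t", setype, "-l", selevel, path], check=True)

    if __name__ == "__main__":
        relabel()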
Dec 02 08:21:31 np0005541914.localdomain sudo[74732]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oxynehpmjpteijrmkpbdbjjzbhvsvzgb ; /usr/bin/python3
Dec 02 08:21:31 np0005541914.localdomain sudo[74732]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:31 np0005541914.localdomain python3[74734]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:21:31 np0005541914.localdomain sudo[74732]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:31 np0005541914.localdomain sudo[74782]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tqmdbxrlhhjllxrlzutnptuyuldhblqm ; /usr/bin/python3
Dec 02 08:21:31 np0005541914.localdomain sudo[74782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:31 np0005541914.localdomain python3[74784]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:21:31 np0005541914.localdomain sudo[74782]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:31 np0005541914.localdomain sudo[74800]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uodrhyezvsfepyymwuolvrwwvzxnncgt ; /usr/bin/python3
Dec 02 08:21:31 np0005541914.localdomain sudo[74800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:32 np0005541914.localdomain python3[74802]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpncl_mgl3 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 08:21:32 np0005541914.localdomain sudo[74800]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:32 np0005541914.localdomain sudo[74830]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nsopudnyvwjpszusnhcbdwnxtfnyqvcy ; /usr/bin/python3
Dec 02 08:21:32 np0005541914.localdomain sudo[74830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:32 np0005541914.localdomain python3[74832]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:21:32 np0005541914.localdomain sudo[74830]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:32 np0005541914.localdomain sudo[74846]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ebdlmgjkwngusecwtxrzucovprvsgetl ; /usr/bin/python3
Dec 02 08:21:32 np0005541914.localdomain sudo[74846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:33 np0005541914.localdomain sudo[74846]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:33 np0005541914.localdomain sudo[74935]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhgsahnptsafsdufxrgppvaoygnpkqoi ; /usr/bin/python3
Dec 02 08:21:33 np0005541914.localdomain sudo[74935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:33 np0005541914.localdomain python3[74937]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Dec 02 08:21:33 np0005541914.localdomain sudo[74935]: pam_unix(sudo:session): session closed for user root
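[editor's note] The ansible.posix.synchronize task above copies /opt/puppetlabs/ into /var/lib/container-puppet/puppetlabs/ with archive=True and compress=True in push mode on the same host; under the hood the module assembles an rsync command line. A minimal local equivalent; the flag selection mirrors those two options and is illustrative, not the module's exact command line:

    #!/usr/bin/env python3
    """Local equivalent of the synchronize task above: rsync -a -z src/ dest/."""
    import subprocess

    def sync(src: str = "/opt/puppetlabs/",
             dest: str = "/var/lib/container-puppet/puppetlabs/") -> None:
        # archive=True -> -a (recurse, preserve perms/times/links); compress=True -> -z.
        subprocess.run(["rsync", "-a", "-z", src, dest], check=True)

    if __name__ == "__main__":
        sync()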
Dec 02 08:21:34 np0005541914.localdomain sudo[74954]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-guzvjydqnkllpmczpxdsdyolcdzrgdwx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:34 np0005541914.localdomain sudo[74954]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:34 np0005541914.localdomain python3[74956]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:21:34 np0005541914.localdomain sudo[74954]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:34 np0005541914.localdomain sudo[74970]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-abldldgksrxhlixwmkdhsbykbaoviepb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:34 np0005541914.localdomain sudo[74970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:34 np0005541914.localdomain sudo[74970]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:35 np0005541914.localdomain sudo[74986]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zzehqhunwqnlixrsqwnthadrzofyyqln ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:35 np0005541914.localdomain sudo[74986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:35 np0005541914.localdomain python3[74988]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:21:35 np0005541914.localdomain sudo[74986]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:35 np0005541914.localdomain sudo[75036]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ypxqvedzoastqakoviduhrsibomzrqgv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:35 np0005541914.localdomain sudo[75036]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:36 np0005541914.localdomain python3[75038]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:21:36 np0005541914.localdomain sudo[75036]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:36 np0005541914.localdomain sudo[75054]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vevbqjalbzpfouuocmrdsjibqbjhvlsa ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:36 np0005541914.localdomain sudo[75054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:36 np0005541914.localdomain python3[75056]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:21:36 np0005541914.localdomain sudo[75054]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:36 np0005541914.localdomain sudo[75116]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-autbmtwgtdgtnftoaynljylhihtdznic ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:36 np0005541914.localdomain sudo[75116]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:36 np0005541914.localdomain python3[75118]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:21:36 np0005541914.localdomain sudo[75116]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:36 np0005541914.localdomain sudo[75134]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mchrxktjujiupefxfwehfpanqvgsibzh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:36 np0005541914.localdomain sudo[75134]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:37 np0005541914.localdomain python3[75136]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:21:37 np0005541914.localdomain sudo[75134]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:37 np0005541914.localdomain sudo[75177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:21:37 np0005541914.localdomain sudo[75177]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:21:37 np0005541914.localdomain sudo[75177]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:37 np0005541914.localdomain sudo[75214]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dezlazgwdjvvzwqsvrybunsgkyslyjam ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:37 np0005541914.localdomain sudo[75214]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:37 np0005541914.localdomain sudo[75211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:21:37 np0005541914.localdomain sudo[75211]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:21:37 np0005541914.localdomain python3[75227]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:21:37 np0005541914.localdomain sudo[75214]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:37 np0005541914.localdomain sudo[75245]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-faklwbccvieewydutgwtylabvqwpgjwh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:37 np0005541914.localdomain sudo[75245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:37 np0005541914.localdomain python3[75257]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:21:37 np0005541914.localdomain sudo[75245]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:37 np0005541914.localdomain sudo[75211]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:38 np0005541914.localdomain sudo[75337]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sluhaktalgexftuvxkxrhnnkrsejskec ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:38 np0005541914.localdomain sudo[75337]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:38 np0005541914.localdomain python3[75339]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:21:38 np0005541914.localdomain sudo[75337]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:38 np0005541914.localdomain sudo[75355]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xyspxesyhihaxcgqqjehsgwasqsknnpi ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:38 np0005541914.localdomain sudo[75355]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:21:38 np0005541914.localdomain systemd[1]: tmp-crun.32OI8V.mount: Deactivated successfully.
Dec 02 08:21:38 np0005541914.localdomain podman[75358]: 2025-12-02 08:21:38.619964038 +0000 UTC m=+0.103948076 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.)
Dec 02 08:21:38 np0005541914.localdomain python3[75357]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:21:38 np0005541914.localdomain sudo[75355]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:38 np0005541914.localdomain sudo[75385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:21:38 np0005541914.localdomain sudo[75385]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:21:38 np0005541914.localdomain sudo[75385]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:38 np0005541914.localdomain podman[75358]: 2025-12-02 08:21:38.819299219 +0000 UTC m=+0.303283227 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4)
Dec 02 08:21:38 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:21:38 np0005541914.localdomain sudo[75429]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ofvvaudgqzkndkaqjtzyxpfzgwkhdozu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:38 np0005541914.localdomain sudo[75429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:39 np0005541914.localdomain python3[75431]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:21:39 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:21:39 np0005541914.localdomain systemd-rc-local-generator[75452]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:21:39 np0005541914.localdomain systemd-sysv-generator[75456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:21:39 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:21:39 np0005541914.localdomain sudo[75429]: pam_unix(sudo:session): session closed for user root
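[editor's note] The "Unit uses MemoryLimit=; please use MemoryMax= instead" message during the reload refers to insights-client-boot.service shipping the deprecated cgroup-v1 directive. The usual fix, without editing the packaged unit file, is a drop-in override that clears MemoryLimit= and sets MemoryMax=. A minimal sketch that writes such an override; the 512M value is only a placeholder, not taken from the unit:

    #!/usr/bin/env python3
    """Write a systemd drop-in replacing the deprecated MemoryLimit= with MemoryMax=."""
    from pathlib import Path
    import subprocess

    OVERRIDE = """[Service]
    # Clear the deprecated directive, then set its cgroup-v2 replacement.
    MemoryLimit=
    MemoryMax=512M
    """

    def write_dropin(unit: str = "insights-client-boot.service") -> None:
        dropin_dir = Path("/etc/systemd/system") / f"{unit}.d"
        dropin_dir.mkdir(parents=True, exist_ok=True)
        (dropin_dir / "memory-max.conf").write_text(OVERRIDE)
        subprocess.run(["systemctl", "daemon-reload"], check=True)

    if __name__ == "__main__":
        write_dropin()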
Dec 02 08:21:39 np0005541914.localdomain sudo[75515]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pphvgugqolmmwdlndqcpcxhwmgkqwwoz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:39 np0005541914.localdomain sudo[75515]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:40 np0005541914.localdomain python3[75517]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:21:40 np0005541914.localdomain sudo[75515]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:40 np0005541914.localdomain sudo[75533]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nrdrnqmayqxnkcxglxjsqbjmtdwxdkig ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:40 np0005541914.localdomain sudo[75533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:40 np0005541914.localdomain python3[75535]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:21:40 np0005541914.localdomain sudo[75533]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:40 np0005541914.localdomain sudo[75595]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bxqkvqtbwujdqhgmhdvlqucbjoxtifpk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:40 np0005541914.localdomain sudo[75595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:40 np0005541914.localdomain python3[75597]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 02 08:21:40 np0005541914.localdomain sudo[75595]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:41 np0005541914.localdomain sudo[75613]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zmuqzylgxwzrrsoljreajjwormnejcyg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:41 np0005541914.localdomain sudo[75613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:41 np0005541914.localdomain python3[75615]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:21:41 np0005541914.localdomain sudo[75613]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:41 np0005541914.localdomain sudo[75643]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ziruhnxhwgmvrjykwliitetffepqieqf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:41 np0005541914.localdomain sudo[75643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:41 np0005541914.localdomain python3[75645]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:21:41 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:21:41 np0005541914.localdomain systemd-rc-local-generator[75669]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:21:41 np0005541914.localdomain systemd-sysv-generator[75676]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:21:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:21:42 np0005541914.localdomain systemd[1]: Starting Create netns directory...
Dec 02 08:21:42 np0005541914.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 02 08:21:42 np0005541914.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 02 08:21:42 np0005541914.localdomain systemd[1]: Finished Create netns directory.
Dec 02 08:21:42 np0005541914.localdomain sudo[75643]: pam_unix(sudo:session): session closed for user root
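[editor's note] The "Create netns directory" oneshot together with the run-netns-placeholder.mount activity above suggests the unit touches a placeholder network namespace so that /run/netns exists (and, in the usual iproute2 behaviour, is mounted with shared propagation) before any container needs it; that reading of the unit's purpose is an assumption based on the mount unit name. A minimal check of the expected end state:

    #!/usr/bin/env python3
    """Verify /run/netns exists and is mounted with shared propagation."""
    from pathlib import Path

    def netns_dir_shared(path: str = "/run/netns") -> bool:
        if not Path(path).is_dir():
            return False
        # /proc/self/mountinfo lists the mount point in field 5 and optional
        # fields such as "shared:N" after it.
        for line in Path("/proc/self/mountinfo").read_text().splitlines():
            fields = line.split()
            if len(fields) > 6 and fields[4] == path:
                return any(f.startswith("shared:") for f in fields[6:])
        return False

    if __name__ == "__main__":
        print("ok" if netns_dir_shared() else "missing or not shared")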
Dec 02 08:21:42 np0005541914.localdomain sudo[75700]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pudghapefkznbnivypdcyhcllvqhmrio ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:42 np0005541914.localdomain sudo[75700]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:21:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:21:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:21:42 np0005541914.localdomain podman[75704]: 2025-12-02 08:21:42.553706545 +0000 UTC m=+0.086025444 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute)
Dec 02 08:21:42 np0005541914.localdomain podman[75704]: 2025-12-02 08:21:42.60752482 +0000 UTC m=+0.139843719 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true)
Dec 02 08:21:42 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:21:42 np0005541914.localdomain python3[75702]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Dec 02 08:21:42 np0005541914.localdomain sudo[75700]: pam_unix(sudo:session): session closed for user root
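[editor's note] The container_puppet_config run above with update_config_hash_only=True refreshes the TRIPLEO_CONFIG_HASH values that appear inside each container's config_data environment in this log; when the hash of a service's /var/lib/config-data/puppet-generated/<service> tree changes, the container is restarted with the new value. A minimal sketch of deriving a deterministic digest over such a tree; the walk-and-md5 approach is illustrative and not necessarily the exact hashing the TripleO module performs:

    #!/usr/bin/env python3
    """Deterministic digest over a generated-config directory, in the spirit of TRIPLEO_CONFIG_HASH."""
    import hashlib
    from pathlib import Path

    def config_hash(root: str = "/var/lib/config-data/puppet-generated/collectd") -> str:
        digest = hashlib.md5()
        # Walk files in a fixed (sorted) order so the result is stable across runs.
        for path in sorted(p for p in Path(root).rglob("*") if p.is_file()):
            digest.update(str(path.relative_to(root)).encode())
            digest.update(path.read_bytes())
        return digest.hexdigest()

    if __name__ == "__main__":
        print(config_hash())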
Dec 02 08:21:42 np0005541914.localdomain podman[75705]: 2025-12-02 08:21:42.658149526 +0000 UTC m=+0.184845309 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, distribution-scope=public, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Dec 02 08:21:42 np0005541914.localdomain podman[75703]: 2025-12-02 08:21:42.607394566 +0000 UTC m=+0.137223668 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:21:42 np0005541914.localdomain podman[75705]: 2025-12-02 08:21:42.706232441 +0000 UTC m=+0.232928214 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team)
Dec 02 08:21:42 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:21:42 np0005541914.localdomain podman[75703]: 2025-12-02 08:21:42.788692943 +0000 UTC m=+0.318522045 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, container_name=logrotate_crond)
Dec 02 08:21:42 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:21:42 np0005541914.localdomain sudo[75789]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nxssajlzussvnzazygmfmhqspnsqwrad ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:42 np0005541914.localdomain sudo[75789]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:43 np0005541914.localdomain sudo[75789]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:21:44 np0005541914.localdomain podman[75819]: 2025-12-02 08:21:44.069733162 +0000 UTC m=+0.077037313 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Dec 02 08:21:44 np0005541914.localdomain podman[75819]: 2025-12-02 08:21:44.401101188 +0000 UTC m=+0.408405359 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, container_name=nova_migration_target, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, io.openshift.expose-services=)
Dec 02 08:21:44 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:21:44 np0005541914.localdomain sudo[75856]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhlczzxfyalbtgshdnoydlvdhlfxfojx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:21:44 np0005541914.localdomain sudo[75856]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:21:44 np0005541914.localdomain python3[75858]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Dec 02 08:21:44 np0005541914.localdomain podman[75896]: 2025-12-02 08:21:44.997391178 +0000 UTC m=+0.085551590 container create 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_id=tripleo_step5)
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: Started libpod-conmon-6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.scope.
Dec 02 08:21:45 np0005541914.localdomain podman[75896]: 2025-12-02 08:21:44.94604978 +0000 UTC m=+0.034210222 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:21:45 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1d8b5716686b6ea155be98d0f313571788c49d87ac4366e7f84d4f947d1b6e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 08:21:45 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1d8b5716686b6ea155be98d0f313571788c49d87ac4366e7f84d4f947d1b6e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 02 08:21:45 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1d8b5716686b6ea155be98d0f313571788c49d87ac4366e7f84d4f947d1b6e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:21:45 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1d8b5716686b6ea155be98d0f313571788c49d87ac4366e7f84d4f947d1b6e/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:21:45 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e1d8b5716686b6ea155be98d0f313571788c49d87ac4366e7f84d4f947d1b6e/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:21:45 np0005541914.localdomain podman[75896]: 2025-12-02 08:21:45.095359315 +0000 UTC m=+0.183519767 container init 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., container_name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:21:45 np0005541914.localdomain sudo[75916]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:21:45 np0005541914.localdomain podman[75896]: 2025-12-02 08:21:45.134661476 +0000 UTC m=+0.222821878 container start 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, release=1761123044, tcib_managed=true)
Dec 02 08:21:45 np0005541914.localdomain systemd-logind[760]: Existing logind session ID 28 used by new audit session, ignoring.
Dec 02 08:21:45 np0005541914.localdomain python3[75858]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=d89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0 --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: Created slice User Slice of UID 0.
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: Starting User Manager for UID 0...
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Dec 02 08:21:45 np0005541914.localdomain podman[75917]: 2025-12-02 08:21:45.265804701 +0000 UTC m=+0.124581451 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step5, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Dec 02 08:21:45 np0005541914.localdomain podman[75917]: 2025-12-02 08:21:45.330703674 +0000 UTC m=+0.189480444 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, distribution-scope=public, architecture=x86_64, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Dec 02 08:21:45 np0005541914.localdomain podman[75917]: unhealthy
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Queued start job for default target Main User Target.
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Created slice User Application Slice.
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Started Daily Cleanup of User's Temporary Directories.
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Reached target Paths.
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Reached target Timers.
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Starting D-Bus User Message Bus Socket...
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Starting Create User's Volatile Files and Directories...
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Finished Create User's Volatile Files and Directories.
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Listening on D-Bus User Message Bus Socket.
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Reached target Sockets.
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Reached target Basic System.
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Reached target Main User Target.
Dec 02 08:21:45 np0005541914.localdomain systemd[75931]: Startup finished in 147ms.
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: Started User Manager for UID 0.
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: Started Session c10 of User root.
Dec 02 08:21:45 np0005541914.localdomain sudo[75916]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Dec 02 08:21:45 np0005541914.localdomain sudo[75916]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: session-c10.scope: Deactivated successfully.
Dec 02 08:21:45 np0005541914.localdomain podman[76020]: 2025-12-02 08:21:45.624837254 +0000 UTC m=+0.066426301 container create 04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, container_name=nova_wait_for_compute_service, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, release=1761123044)
Dec 02 08:21:45 np0005541914.localdomain podman[76020]: 2025-12-02 08:21:45.58896457 +0000 UTC m=+0.030553627 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: Started libpod-conmon-04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf.scope.
Dec 02 08:21:45 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:21:45 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6b5bb5c3faac5bc7b25f16619728c4e2a2d4a71d222c2e5e52b063609b5512/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Dec 02 08:21:45 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe6b5bb5c3faac5bc7b25f16619728c4e2a2d4a71d222c2e5e52b063609b5512/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 08:21:45 np0005541914.localdomain podman[76020]: 2025-12-02 08:21:45.720722406 +0000 UTC m=+0.162311483 container init 04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, version=17.1.12, architecture=x86_64, container_name=nova_wait_for_compute_service, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:21:45 np0005541914.localdomain podman[76020]: 2025-12-02 08:21:45.728768508 +0000 UTC m=+0.170357585 container start 04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step5, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_wait_for_compute_service, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12)
Dec 02 08:21:45 np0005541914.localdomain podman[76020]: 2025-12-02 08:21:45.729103808 +0000 UTC m=+0.170692845 container attach 04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_wait_for_compute_service, vcs-type=git, release=1761123044, name=rhosp17/openstack-nova-compute)
Dec 02 08:21:45 np0005541914.localdomain sudo[76040]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 08:21:45 np0005541914.localdomain sudo[76040]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Dec 02 08:21:45 np0005541914.localdomain sudo[76040]: pam_unix(sudo:session): session closed for user root
Dec 02 08:21:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:21:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:21:47 np0005541914.localdomain podman[76044]: 2025-12-02 08:21:47.079544501 +0000 UTC m=+0.080811272 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 02 08:21:47 np0005541914.localdomain podman[76044]: 2025-12-02 08:21:47.124391995 +0000 UTC m=+0.125658786 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible)
Dec 02 08:21:47 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:21:47 np0005541914.localdomain podman[76045]: 2025-12-02 08:21:47.130517927 +0000 UTC m=+0.127232985 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-ovn-controller)
Dec 02 08:21:47 np0005541914.localdomain podman[76045]: 2025-12-02 08:21:47.209972214 +0000 UTC m=+0.206687222 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, build-date=2025-11-18T23:34:05Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:21:47 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:21:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:21:53 np0005541914.localdomain systemd[1]: tmp-crun.frq9b8.mount: Deactivated successfully.
Dec 02 08:21:53 np0005541914.localdomain podman[76092]: 2025-12-02 08:21:53.074069762 +0000 UTC m=+0.081263136 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step3, release=1761123044, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12)
Dec 02 08:21:53 np0005541914.localdomain podman[76092]: 2025-12-02 08:21:53.083713233 +0000 UTC m=+0.090906617 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, container_name=collectd, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd)
Dec 02 08:21:53 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:21:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:21:55 np0005541914.localdomain systemd[1]: Stopping User Manager for UID 0...
Dec 02 08:21:55 np0005541914.localdomain systemd[75931]: Activating special unit Exit the Session...
Dec 02 08:21:55 np0005541914.localdomain systemd[75931]: Stopped target Main User Target.
Dec 02 08:21:55 np0005541914.localdomain systemd[75931]: Stopped target Basic System.
Dec 02 08:21:55 np0005541914.localdomain systemd[75931]: Stopped target Paths.
Dec 02 08:21:55 np0005541914.localdomain systemd[75931]: Stopped target Sockets.
Dec 02 08:21:55 np0005541914.localdomain systemd[75931]: Stopped target Timers.
Dec 02 08:21:55 np0005541914.localdomain systemd[75931]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 02 08:21:55 np0005541914.localdomain systemd[75931]: Closed D-Bus User Message Bus Socket.
Dec 02 08:21:55 np0005541914.localdomain systemd[75931]: Stopped Create User's Volatile Files and Directories.
Dec 02 08:21:55 np0005541914.localdomain systemd[75931]: Removed slice User Application Slice.
Dec 02 08:21:55 np0005541914.localdomain systemd[75931]: Reached target Shutdown.
Dec 02 08:21:55 np0005541914.localdomain systemd[75931]: Finished Exit the Session.
Dec 02 08:21:55 np0005541914.localdomain systemd[75931]: Reached target Exit the Session.
Dec 02 08:21:55 np0005541914.localdomain systemd[1]: user@0.service: Deactivated successfully.
Dec 02 08:21:55 np0005541914.localdomain systemd[1]: Stopped User Manager for UID 0.
Dec 02 08:21:55 np0005541914.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 02 08:21:55 np0005541914.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 02 08:21:55 np0005541914.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 02 08:21:55 np0005541914.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 02 08:21:55 np0005541914.localdomain systemd[1]: Removed slice User Slice of UID 0.
Dec 02 08:21:55 np0005541914.localdomain podman[76111]: 2025-12-02 08:21:55.819645597 +0000 UTC m=+0.067432602 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z)
Dec 02 08:21:55 np0005541914.localdomain podman[76111]: 2025-12-02 08:21:55.858923577 +0000 UTC m=+0.106710632 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true)
Dec 02 08:21:55 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:22:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:22:09 np0005541914.localdomain podman[76131]: 2025-12-02 08:22:09.075861776 +0000 UTC m=+0.083875677 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:22:09 np0005541914.localdomain podman[76131]: 2025-12-02 08:22:09.278103058 +0000 UTC m=+0.286116929 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible)
Dec 02 08:22:09 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:22:12 np0005541914.localdomain sshd[35708]: Received disconnect from 192.168.122.100 port 60334:11: disconnected by user
Dec 02 08:22:12 np0005541914.localdomain sshd[35708]: Disconnected from user zuul 192.168.122.100 port 60334
Dec 02 08:22:12 np0005541914.localdomain sshd[35705]: pam_unix(sshd:session): session closed for user zuul
Dec 02 08:22:12 np0005541914.localdomain systemd[1]: session-27.scope: Deactivated successfully.
Dec 02 08:22:12 np0005541914.localdomain systemd[1]: session-27.scope: Consumed 2.951s CPU time.
Dec 02 08:22:12 np0005541914.localdomain systemd-logind[760]: Session 27 logged out. Waiting for processes to exit.
Dec 02 08:22:12 np0005541914.localdomain systemd-logind[760]: Removed session 27.
Dec 02 08:22:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:22:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:22:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:22:13 np0005541914.localdomain podman[76160]: 2025-12-02 08:22:13.097609688 +0000 UTC m=+0.094173529 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:11:48Z, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container)
Dec 02 08:22:13 np0005541914.localdomain podman[76159]: 2025-12-02 08:22:13.145596701 +0000 UTC m=+0.142946167 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:22:13 np0005541914.localdomain podman[76160]: 2025-12-02 08:22:13.154742618 +0000 UTC m=+0.151306409 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:22:13 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:22:13 np0005541914.localdomain podman[76161]: 2025-12-02 08:22:13.07178811 +0000 UTC m=+0.070153277 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, version=17.1.12)
Dec 02 08:22:13 np0005541914.localdomain podman[76159]: 2025-12-02 08:22:13.205728584 +0000 UTC m=+0.203078050 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond)
Dec 02 08:22:13 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:22:13 np0005541914.localdomain podman[76161]: 2025-12-02 08:22:13.255862084 +0000 UTC m=+0.254227221 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 02 08:22:13 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:22:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:22:15 np0005541914.localdomain podman[76229]: 2025-12-02 08:22:15.057130953 +0000 UTC m=+0.067350320 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, container_name=nova_migration_target, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:22:15 np0005541914.localdomain podman[76229]: 2025-12-02 08:22:15.369382559 +0000 UTC m=+0.379601876 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, architecture=x86_64, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git)
Dec 02 08:22:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:22:15 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:22:15 np0005541914.localdomain podman[76253]: 2025-12-02 08:22:15.475609525 +0000 UTC m=+0.070429756 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:22:15 np0005541914.localdomain podman[76253]: 2025-12-02 08:22:15.529553483 +0000 UTC m=+0.124373714 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc.)
Dec 02 08:22:15 np0005541914.localdomain podman[76253]: unhealthy
Dec 02 08:22:15 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:22:15 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 08:22:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:22:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:22:18 np0005541914.localdomain podman[76276]: 2025-12-02 08:22:18.078952247 +0000 UTC m=+0.081937847 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, architecture=x86_64)
Dec 02 08:22:18 np0005541914.localdomain podman[76275]: 2025-12-02 08:22:18.129391447 +0000 UTC m=+0.136213717 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:22:18 np0005541914.localdomain podman[76276]: 2025-12-02 08:22:18.153176421 +0000 UTC m=+0.156161941 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4)
Dec 02 08:22:18 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:22:18 np0005541914.localdomain podman[76275]: 2025-12-02 08:22:18.190823569 +0000 UTC m=+0.197645769 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 02 08:22:18 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:22:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:22:24 np0005541914.localdomain podman[76323]: 2025-12-02 08:22:24.066196781 +0000 UTC m=+0.073563644 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step3, container_name=collectd, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:22:24 np0005541914.localdomain podman[76323]: 2025-12-02 08:22:24.074197911 +0000 UTC m=+0.081564754 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step3, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=)
Dec 02 08:22:24 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:22:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:22:26 np0005541914.localdomain podman[76343]: 2025-12-02 08:22:26.07317927 +0000 UTC m=+0.081274205 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:22:26 np0005541914.localdomain podman[76343]: 2025-12-02 08:22:26.112909084 +0000 UTC m=+0.121004049 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container)
Dec 02 08:22:26 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:22:38 np0005541914.localdomain sudo[76361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:22:38 np0005541914.localdomain sudo[76361]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:22:38 np0005541914.localdomain sudo[76361]: pam_unix(sudo:session): session closed for user root
Dec 02 08:22:38 np0005541914.localdomain sudo[76376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:22:38 np0005541914.localdomain sudo[76376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:22:39 np0005541914.localdomain sudo[76376]: pam_unix(sudo:session): session closed for user root
Dec 02 08:22:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:22:40 np0005541914.localdomain podman[76424]: 2025-12-02 08:22:40.067152361 +0000 UTC m=+0.074849085 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, batch=17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:22:40 np0005541914.localdomain podman[76424]: 2025-12-02 08:22:40.255138436 +0000 UTC m=+0.262835140 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, container_name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:22:40 np0005541914.localdomain sudo[76453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:22:40 np0005541914.localdomain sudo[76453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:22:40 np0005541914.localdomain sudo[76453]: pam_unix(sudo:session): session closed for user root
Dec 02 08:22:40 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:22:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:22:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:22:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:22:44 np0005541914.localdomain podman[76468]: 2025-12-02 08:22:44.085275259 +0000 UTC m=+0.088443231 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, architecture=x86_64, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible)
Dec 02 08:22:44 np0005541914.localdomain podman[76469]: 2025-12-02 08:22:44.143517902 +0000 UTC m=+0.144668461 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=ceilometer_agent_compute, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4)
Dec 02 08:22:44 np0005541914.localdomain podman[76469]: 2025-12-02 08:22:44.193250339 +0000 UTC m=+0.194400918 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1)
Dec 02 08:22:44 np0005541914.localdomain podman[76470]: 2025-12-02 08:22:44.198188764 +0000 UTC m=+0.196942687 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:12:45Z)
Dec 02 08:22:44 np0005541914.localdomain podman[76468]: 2025-12-02 08:22:44.220627886 +0000 UTC m=+0.223795898 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Dec 02 08:22:44 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:22:44 np0005541914.localdomain podman[76470]: 2025-12-02 08:22:44.254155996 +0000 UTC m=+0.252909909 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., release=1761123044, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, version=17.1.12)
Dec 02 08:22:44 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:22:44 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:22:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:22:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:22:46 np0005541914.localdomain podman[76542]: 2025-12-02 08:22:46.073502161 +0000 UTC m=+0.076786995 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, tcib_managed=true, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:22:46 np0005541914.localdomain systemd[1]: tmp-crun.8bJxdH.mount: Deactivated successfully.
Dec 02 08:22:46 np0005541914.localdomain podman[76543]: 2025-12-02 08:22:46.137536515 +0000 UTC m=+0.140166619 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, version=17.1.12, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:22:46 np0005541914.localdomain podman[76542]: 2025-12-02 08:22:46.189864104 +0000 UTC m=+0.193148918 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step5, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:22:46 np0005541914.localdomain podman[76542]: unhealthy
Dec 02 08:22:46 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:22:46 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 08:22:46 np0005541914.localdomain podman[76543]: 2025-12-02 08:22:46.506035303 +0000 UTC m=+0.508665407 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target)
Dec 02 08:22:46 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:22:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:22:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:22:49 np0005541914.localdomain systemd[1]: tmp-crun.6N2uDL.mount: Deactivated successfully.
Dec 02 08:22:49 np0005541914.localdomain podman[76588]: 2025-12-02 08:22:49.088596564 +0000 UTC m=+0.090017130 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044)
Dec 02 08:22:49 np0005541914.localdomain systemd[1]: tmp-crun.q783jo.mount: Deactivated successfully.
Dec 02 08:22:49 np0005541914.localdomain podman[76587]: 2025-12-02 08:22:49.140948133 +0000 UTC m=+0.145069993 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=)
Dec 02 08:22:49 np0005541914.localdomain podman[76588]: 2025-12-02 08:22:49.193203329 +0000 UTC m=+0.194623885 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller)
Dec 02 08:22:49 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:22:49 np0005541914.localdomain podman[76587]: 2025-12-02 08:22:49.249730838 +0000 UTC m=+0.253852688 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true)
Dec 02 08:22:49 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:22:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:22:54 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:22:55 np0005541914.localdomain recover_tripleo_nova_virtqemud[76641]: 61907
Dec 02 08:22:55 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:22:55 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:22:55 np0005541914.localdomain systemd[1]: tmp-crun.EOt9wN.mount: Deactivated successfully.
Dec 02 08:22:55 np0005541914.localdomain podman[76637]: 2025-12-02 08:22:55.085611563 +0000 UTC m=+0.080456281 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, url=https://www.redhat.com, release=1761123044, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc.)
Dec 02 08:22:55 np0005541914.localdomain podman[76637]: 2025-12-02 08:22:55.094329886 +0000 UTC m=+0.089174664 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, release=1761123044, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container)
Dec 02 08:22:55 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:22:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:22:57 np0005541914.localdomain podman[76657]: 2025-12-02 08:22:57.078708307 +0000 UTC m=+0.082097121 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:22:57 np0005541914.localdomain podman[76657]: 2025-12-02 08:22:57.08520827 +0000 UTC m=+0.088597104 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64)
Dec 02 08:22:57 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:23:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:23:11 np0005541914.localdomain podman[76677]: 2025-12-02 08:23:11.083404599 +0000 UTC m=+0.088740399 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.4, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 02 08:23:11 np0005541914.localdomain podman[76677]: 2025-12-02 08:23:11.274834553 +0000 UTC m=+0.280170273 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, vendor=Red Hat, Inc.)
Dec 02 08:23:11 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:23:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:23:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:23:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:23:15 np0005541914.localdomain podman[76707]: 2025-12-02 08:23:15.087864572 +0000 UTC m=+0.085822909 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:23:15 np0005541914.localdomain podman[76707]: 2025-12-02 08:23:15.121931468 +0000 UTC m=+0.119889805 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible)
Dec 02 08:23:15 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:23:15 np0005541914.localdomain podman[76706]: 2025-12-02 08:23:15.143521635 +0000 UTC m=+0.146449178 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, tcib_managed=true, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, config_id=tripleo_step4)
Dec 02 08:23:15 np0005541914.localdomain podman[76706]: 2025-12-02 08:23:15.18076685 +0000 UTC m=+0.183694353 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step4)
Dec 02 08:23:15 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:23:15 np0005541914.localdomain podman[76708]: 2025-12-02 08:23:15.197816314 +0000 UTC m=+0.193987175 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, url=https://www.redhat.com)
Dec 02 08:23:15 np0005541914.localdomain podman[76708]: 2025-12-02 08:23:15.226224983 +0000 UTC m=+0.222395844 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 02 08:23:15 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:23:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:23:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:23:17 np0005541914.localdomain systemd[1]: tmp-crun.wihRoe.mount: Deactivated successfully.
Dec 02 08:23:17 np0005541914.localdomain podman[76782]: 2025-12-02 08:23:17.082995 +0000 UTC m=+0.086678015 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:23:17 np0005541914.localdomain podman[76781]: 2025-12-02 08:23:17.128710441 +0000 UTC m=+0.134289736 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=nova_compute, url=https://www.redhat.com, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5)
Dec 02 08:23:17 np0005541914.localdomain podman[76781]: 2025-12-02 08:23:17.188908346 +0000 UTC m=+0.194487691 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=nova_compute, release=1761123044, name=rhosp17/openstack-nova-compute, version=17.1.12, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64)
Dec 02 08:23:17 np0005541914.localdomain podman[76781]: unhealthy
Dec 02 08:23:17 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:23:17 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 08:23:17 np0005541914.localdomain podman[76782]: 2025-12-02 08:23:17.434752313 +0000 UTC m=+0.438435258 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-nova-compute, release=1761123044, architecture=x86_64, container_name=nova_migration_target, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true)
Dec 02 08:23:17 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:23:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:23:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:23:20 np0005541914.localdomain systemd[1]: tmp-crun.ukVzaB.mount: Deactivated successfully.
Dec 02 08:23:20 np0005541914.localdomain podman[76827]: 2025-12-02 08:23:20.08550802 +0000 UTC m=+0.087655966 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:23:20 np0005541914.localdomain podman[76827]: 2025-12-02 08:23:20.132874253 +0000 UTC m=+0.135022199 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12)
Dec 02 08:23:20 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:23:20 np0005541914.localdomain podman[76826]: 2025-12-02 08:23:20.133604096 +0000 UTC m=+0.137639821 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, release=1761123044, distribution-scope=public)
Dec 02 08:23:20 np0005541914.localdomain podman[76826]: 2025-12-02 08:23:20.221975752 +0000 UTC m=+0.226011467 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible)
Dec 02 08:23:20 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:23:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:23:26 np0005541914.localdomain podman[76874]: 2025-12-02 08:23:26.086971065 +0000 UTC m=+0.088746958 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, container_name=collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 02 08:23:26 np0005541914.localdomain podman[76874]: 2025-12-02 08:23:26.098932017 +0000 UTC m=+0.100707960 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, tcib_managed=true, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc.)
Dec 02 08:23:26 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:23:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:23:28 np0005541914.localdomain podman[76894]: 2025-12-02 08:23:28.079831478 +0000 UTC m=+0.084389850 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step3, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vcs-type=git)
Dec 02 08:23:28 np0005541914.localdomain podman[76894]: 2025-12-02 08:23:28.092736588 +0000 UTC m=+0.097295010 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, container_name=iscsid, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:23:28 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:23:33 np0005541914.localdomain sshd[76913]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:23:33 np0005541914.localdomain sshd[76913]: Invalid user sol from 45.148.10.240 port 45044
Dec 02 08:23:33 np0005541914.localdomain sshd[76913]: Connection closed by invalid user sol 45.148.10.240 port 45044 [preauth]
Dec 02 08:23:40 np0005541914.localdomain sudo[76915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:23:40 np0005541914.localdomain sudo[76915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:23:40 np0005541914.localdomain sudo[76915]: pam_unix(sudo:session): session closed for user root
Dec 02 08:23:40 np0005541914.localdomain sudo[76930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:23:40 np0005541914.localdomain sudo[76930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:23:41 np0005541914.localdomain sudo[76930]: pam_unix(sudo:session): session closed for user root
Dec 02 08:23:41 np0005541914.localdomain sudo[76977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:23:41 np0005541914.localdomain sudo[76977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:23:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:23:41 np0005541914.localdomain sudo[76977]: pam_unix(sudo:session): session closed for user root
Dec 02 08:23:41 np0005541914.localdomain systemd[1]: tmp-crun.OMj9f9.mount: Deactivated successfully.
Dec 02 08:23:41 np0005541914.localdomain podman[76992]: 2025-12-02 08:23:41.928343453 +0000 UTC m=+0.088219023 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64)
Dec 02 08:23:42 np0005541914.localdomain podman[76992]: 2025-12-02 08:23:42.126925338 +0000 UTC m=+0.286800948 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1761123044, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:23:42 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:23:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:23:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:23:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:23:46 np0005541914.localdomain podman[77024]: 2025-12-02 08:23:46.09135564 +0000 UTC m=+0.086712249 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Dec 02 08:23:46 np0005541914.localdomain podman[77024]: 2025-12-02 08:23:46.118736694 +0000 UTC m=+0.114093233 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=)
Dec 02 08:23:46 np0005541914.localdomain podman[77022]: 2025-12-02 08:23:46.130576892 +0000 UTC m=+0.129644250 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, managed_by=tripleo_ansible, container_name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:23:46 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:23:46 np0005541914.localdomain podman[77023]: 2025-12-02 08:23:46.182057004 +0000 UTC m=+0.180092372 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, release=1761123044, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:23:46 np0005541914.localdomain podman[77022]: 2025-12-02 08:23:46.215242139 +0000 UTC m=+0.214309477 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.expose-services=, release=1761123044, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, name=rhosp17/openstack-cron, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:23:46 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:23:46 np0005541914.localdomain podman[77023]: 2025-12-02 08:23:46.235856775 +0000 UTC m=+0.233892133 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:23:46 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:23:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:23:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:23:48 np0005541914.localdomain podman[77094]: 2025-12-02 08:23:48.069582793 +0000 UTC m=+0.075268073 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, release=1761123044, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:23:48 np0005541914.localdomain systemd[1]: tmp-crun.QYFkG3.mount: Deactivated successfully.
Dec 02 08:23:48 np0005541914.localdomain podman[77093]: 2025-12-02 08:23:48.131045129 +0000 UTC m=+0.137896473 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, architecture=x86_64, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team)
Dec 02 08:23:48 np0005541914.localdomain podman[77093]: 2025-12-02 08:23:48.187314672 +0000 UTC m=+0.194166006 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, container_name=nova_compute, distribution-scope=public)
Dec 02 08:23:48 np0005541914.localdomain podman[77093]: unhealthy
Dec 02 08:23:48 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:23:48 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 08:23:48 np0005541914.localdomain podman[77094]: 2025-12-02 08:23:48.447980381 +0000 UTC m=+0.453665651 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_migration_target, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Dec 02 08:23:48 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:23:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:23:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:23:51 np0005541914.localdomain podman[77138]: 2025-12-02 08:23:51.071411653 +0000 UTC m=+0.071108981 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:23:51 np0005541914.localdomain podman[77137]: 2025-12-02 08:23:51.121160524 +0000 UTC m=+0.122304244 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:14:25Z, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 02 08:23:51 np0005541914.localdomain podman[77138]: 2025-12-02 08:23:51.144402997 +0000 UTC m=+0.144100365 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, container_name=ovn_controller, tcib_managed=true, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 02 08:23:51 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:23:51 np0005541914.localdomain podman[77137]: 2025-12-02 08:23:51.213592049 +0000 UTC m=+0.214735789 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4)
Dec 02 08:23:51 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:23:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:23:57 np0005541914.localdomain podman[77185]: 2025-12-02 08:23:57.082813606 +0000 UTC m=+0.085944306 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_id=tripleo_step3, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 02 08:23:57 np0005541914.localdomain podman[77185]: 2025-12-02 08:23:57.094774198 +0000 UTC m=+0.097904878 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:23:57 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:23:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:23:59 np0005541914.localdomain podman[77207]: 2025-12-02 08:23:59.080710429 +0000 UTC m=+0.088565014 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com)
Dec 02 08:23:59 np0005541914.localdomain podman[77207]: 2025-12-02 08:23:59.090252009 +0000 UTC m=+0.098106574 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid)
Dec 02 08:23:59 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:24:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:24:13 np0005541914.localdomain systemd[1]: tmp-crun.vr9DGF.mount: Deactivated successfully.
Dec 02 08:24:13 np0005541914.localdomain podman[77226]: 2025-12-02 08:24:13.079027044 +0000 UTC m=+0.085598477 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1)
Dec 02 08:24:13 np0005541914.localdomain podman[77226]: 2025-12-02 08:24:13.304533569 +0000 UTC m=+0.311105032 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, architecture=x86_64, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd)
Dec 02 08:24:13 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:24:14 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:24:15 np0005541914.localdomain recover_tripleo_nova_virtqemud[77256]: 61907
Dec 02 08:24:15 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:24:15 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:24:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:24:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:24:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:24:17 np0005541914.localdomain podman[77259]: 2025-12-02 08:24:17.077777783 +0000 UTC m=+0.070968675 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:24:17 np0005541914.localdomain podman[77259]: 2025-12-02 08:24:17.106719994 +0000 UTC m=+0.099910846 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:24:17 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:24:17 np0005541914.localdomain podman[77258]: 2025-12-02 08:24:17.183119729 +0000 UTC m=+0.177310611 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, url=https://www.redhat.com, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044)
Dec 02 08:24:17 np0005541914.localdomain podman[77257]: 2025-12-02 08:24:17.235349223 +0000 UTC m=+0.233096530 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64)
Dec 02 08:24:17 np0005541914.localdomain podman[77258]: 2025-12-02 08:24:17.258181154 +0000 UTC m=+0.252372026 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:24:17 np0005541914.localdomain podman[77257]: 2025-12-02 08:24:17.266365024 +0000 UTC m=+0.264112361 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:24:17 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:24:17 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:24:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:24:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:24:19 np0005541914.localdomain systemd[1]: tmp-crun.QfqZMP.mount: Deactivated successfully.
Dec 02 08:24:19 np0005541914.localdomain podman[77328]: 2025-12-02 08:24:19.081222878 +0000 UTC m=+0.084665318 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4)
Dec 02 08:24:19 np0005541914.localdomain podman[77328]: 2025-12-02 08:24:19.123898362 +0000 UTC m=+0.127340862 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.openshift.expose-services=, config_id=tripleo_step5, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:24:19 np0005541914.localdomain podman[77328]: unhealthy
Dec 02 08:24:19 np0005541914.localdomain systemd[1]: tmp-crun.WU4ieE.mount: Deactivated successfully.
Dec 02 08:24:19 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:24:19 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 08:24:19 np0005541914.localdomain podman[77329]: 2025-12-02 08:24:19.141344935 +0000 UTC m=+0.141367545 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, tcib_managed=true, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4)
Dec 02 08:24:19 np0005541914.localdomain podman[77329]: 2025-12-02 08:24:19.568779524 +0000 UTC m=+0.568802104 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true)
Dec 02 08:24:19 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:24:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:24:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:24:22 np0005541914.localdomain podman[77375]: 2025-12-02 08:24:22.082984165 +0000 UTC m=+0.089298315 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z)
Dec 02 08:24:22 np0005541914.localdomain podman[77375]: 2025-12-02 08:24:22.160912384 +0000 UTC m=+0.167226554 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, version=17.1.12, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:24:22 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:24:22 np0005541914.localdomain podman[77376]: 2025-12-02 08:24:22.180539571 +0000 UTC m=+0.183469871 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, container_name=ovn_controller, release=1761123044, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:24:22 np0005541914.localdomain podman[77376]: 2025-12-02 08:24:22.231948902 +0000 UTC m=+0.234879182 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 02 08:24:22 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:24:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:24:28 np0005541914.localdomain podman[77422]: 2025-12-02 08:24:28.068043795 +0000 UTC m=+0.073832631 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-collectd-container, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=collectd, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:24:28 np0005541914.localdomain podman[77422]: 2025-12-02 08:24:28.100120078 +0000 UTC m=+0.105908914 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z)
Dec 02 08:24:28 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:24:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:24:30 np0005541914.localdomain systemd[1]: tmp-crun.edAwO9.mount: Deactivated successfully.
Dec 02 08:24:30 np0005541914.localdomain podman[77442]: 2025-12-02 08:24:30.0828153 +0000 UTC m=+0.086463641 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 02 08:24:30 np0005541914.localdomain podman[77442]: 2025-12-02 08:24:30.119434146 +0000 UTC m=+0.123082437 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, container_name=iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible)
Dec 02 08:24:30 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:24:41 np0005541914.localdomain sudo[77462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:24:41 np0005541914.localdomain sudo[77462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:24:41 np0005541914.localdomain sudo[77462]: pam_unix(sudo:session): session closed for user root
Dec 02 08:24:41 np0005541914.localdomain sudo[77477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:24:41 np0005541914.localdomain sudo[77477]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:24:42 np0005541914.localdomain sudo[77477]: pam_unix(sudo:session): session closed for user root
Dec 02 08:24:43 np0005541914.localdomain sudo[77523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:24:43 np0005541914.localdomain sudo[77523]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:24:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:24:43 np0005541914.localdomain sudo[77523]: pam_unix(sudo:session): session closed for user root
Dec 02 08:24:43 np0005541914.localdomain podman[77538]: 2025-12-02 08:24:43.481576298 +0000 UTC m=+0.082358930 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1761123044, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr)
Dec 02 08:24:43 np0005541914.localdomain podman[77538]: 2025-12-02 08:24:43.643939689 +0000 UTC m=+0.244722321 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, version=17.1.12, batch=17.1_20251118.1)
Dec 02 08:24:43 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:24:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:24:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:24:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:24:48 np0005541914.localdomain systemd[1]: tmp-crun.funkNb.mount: Deactivated successfully.
Dec 02 08:24:48 np0005541914.localdomain podman[77571]: 2025-12-02 08:24:48.084050417 +0000 UTC m=+0.089968145 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute)
Dec 02 08:24:48 np0005541914.localdomain podman[77571]: 2025-12-02 08:24:48.10423226 +0000 UTC m=+0.110149988 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:24:48 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:24:48 np0005541914.localdomain podman[77572]: 2025-12-02 08:24:48.118576601 +0000 UTC m=+0.122899461 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:24:48 np0005541914.localdomain podman[77572]: 2025-12-02 08:24:48.139764964 +0000 UTC m=+0.144087814 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 02 08:24:48 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:24:48 np0005541914.localdomain podman[77570]: 2025-12-02 08:24:48.224153473 +0000 UTC m=+0.229093572 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, name=rhosp17/openstack-cron, version=17.1.12, com.redhat.component=openstack-cron-container, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:24:48 np0005541914.localdomain podman[77570]: 2025-12-02 08:24:48.227478061 +0000 UTC m=+0.232418150 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, io.openshift.expose-services=, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:24:48 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:24:49 np0005541914.localdomain systemd[1]: tmp-crun.u6b95J.mount: Deactivated successfully.
Dec 02 08:24:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:24:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:24:50 np0005541914.localdomain podman[77709]: 2025-12-02 08:24:50.076820238 +0000 UTC m=+0.068255607 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container)
Dec 02 08:24:50 np0005541914.localdomain systemd[1]: tmp-crun.XPJMnI.mount: Deactivated successfully.
Dec 02 08:24:50 np0005541914.localdomain podman[77708]: 2025-12-02 08:24:50.139777778 +0000 UTC m=+0.135571874 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, container_name=nova_compute, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git)
Dec 02 08:24:50 np0005541914.localdomain podman[77708]: 2025-12-02 08:24:50.18987863 +0000 UTC m=+0.185672696 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, container_name=nova_compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:24:50 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:24:50 np0005541914.localdomain podman[77709]: 2025-12-02 08:24:50.452077383 +0000 UTC m=+0.443512702 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, config_id=tripleo_step4, version=17.1.12, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:24:50 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:24:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:24:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:24:53 np0005541914.localdomain podman[77779]: 2025-12-02 08:24:53.073254838 +0000 UTC m=+0.079585759 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, tcib_managed=true)
Dec 02 08:24:53 np0005541914.localdomain podman[77780]: 2025-12-02 08:24:53.129233653 +0000 UTC m=+0.131358890 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, architecture=x86_64, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12)
Dec 02 08:24:53 np0005541914.localdomain podman[77779]: 2025-12-02 08:24:53.163784568 +0000 UTC m=+0.170115499 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=ovn_metadata_agent, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:24:53 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:24:53 np0005541914.localdomain podman[77780]: 2025-12-02 08:24:53.176938785 +0000 UTC m=+0.179064042 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:24:53 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:24:58 np0005541914.localdomain systemd[1]: libpod-04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf.scope: Deactivated successfully.
Dec 02 08:24:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:24:58 np0005541914.localdomain podman[77824]: 2025-12-02 08:24:58.669488005 +0000 UTC m=+0.054202324 container died 04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_wait_for_compute_service, batch=17.1_20251118.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, version=17.1.12, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team)
Dec 02 08:24:58 np0005541914.localdomain systemd[1]: tmp-crun.Z3s9Cx.mount: Deactivated successfully.
Dec 02 08:24:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf-userdata-shm.mount: Deactivated successfully.
Dec 02 08:24:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-fe6b5bb5c3faac5bc7b25f16619728c4e2a2d4a71d222c2e5e52b063609b5512-merged.mount: Deactivated successfully.
Dec 02 08:24:58 np0005541914.localdomain podman[77824]: 2025-12-02 08:24:58.705630317 +0000 UTC m=+0.090344566 container cleanup 04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, container_name=nova_wait_for_compute_service, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Dec 02 08:24:58 np0005541914.localdomain systemd[1]: libpod-conmon-04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf.scope: Deactivated successfully.
Dec 02 08:24:58 np0005541914.localdomain python3[75858]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=51230b537c6b56095225b7a0a6b952d0 --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Dec 02 08:24:58 np0005541914.localdomain podman[77825]: 2025-12-02 08:24:58.750747983 +0000 UTC m=+0.127129447 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, build-date=2025-11-18T22:51:28Z)
Dec 02 08:24:58 np0005541914.localdomain podman[77825]: 2025-12-02 08:24:58.760446027 +0000 UTC m=+0.136827531 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp17/openstack-collectd, tcib_managed=true, architecture=x86_64, container_name=collectd, batch=17.1_20251118.1, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4)
Dec 02 08:24:58 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:24:58 np0005541914.localdomain sudo[75856]: pam_unix(sudo:session): session closed for user root
Dec 02 08:24:59 np0005541914.localdomain sudo[77892]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iuatkpwsexnmnwledogcoypsnhsqzpon ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:24:59 np0005541914.localdomain sudo[77892]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:24:59 np0005541914.localdomain python3[77894]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:24:59 np0005541914.localdomain sudo[77892]: pam_unix(sudo:session): session closed for user root
Dec 02 08:24:59 np0005541914.localdomain sudo[77908]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ziztunrvxfgkamlymiekfickvhaplwcf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:24:59 np0005541914.localdomain sudo[77908]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:24:59 np0005541914.localdomain python3[77910]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 02 08:24:59 np0005541914.localdomain sudo[77908]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:00 np0005541914.localdomain sudo[77969]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iuewukywrjdwgnxzlrmzrrjlhsydjssq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:25:00 np0005541914.localdomain sudo[77969]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:25:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:25:00 np0005541914.localdomain systemd[1]: tmp-crun.rAbHLs.mount: Deactivated successfully.
Dec 02 08:25:00 np0005541914.localdomain podman[77971]: 2025-12-02 08:25:00.240580257 +0000 UTC m=+0.065032712 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:25:00 np0005541914.localdomain podman[77971]: 2025-12-02 08:25:00.250349774 +0000 UTC m=+0.074802219 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, container_name=iscsid)
Dec 02 08:25:00 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:25:00 np0005541914.localdomain python3[77972]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1764663899.7048783-118037-46971728796331/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:25:00 np0005541914.localdomain sudo[77969]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:00 np0005541914.localdomain sudo[78005]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ojtfwhjudbatrocwgzaegvtsrgocflun ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:25:00 np0005541914.localdomain sudo[78005]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:25:00 np0005541914.localdomain python3[78007]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 08:25:00 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:25:00 np0005541914.localdomain systemd-rc-local-generator[78024]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:25:00 np0005541914.localdomain systemd-sysv-generator[78031]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:25:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:25:01 np0005541914.localdomain sudo[78005]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:01 np0005541914.localdomain sudo[78057]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nqwmswhbdactvboofgdnjeeuhsdezvgm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Dec 02 08:25:01 np0005541914.localdomain sudo[78057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:25:01 np0005541914.localdomain python3[78059]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 08:25:01 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:25:01 np0005541914.localdomain systemd-rc-local-generator[78089]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:25:01 np0005541914.localdomain systemd-sysv-generator[78092]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:25:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:25:02 np0005541914.localdomain systemd[1]: Starting nova_compute container...
Dec 02 08:25:02 np0005541914.localdomain tripleo-start-podman-container[78099]: Creating additional drop-in dependency for "nova_compute" (6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e)
Dec 02 08:25:02 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 08:25:02 np0005541914.localdomain systemd-rc-local-generator[78155]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 08:25:02 np0005541914.localdomain systemd-sysv-generator[78160]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 08:25:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 08:25:02 np0005541914.localdomain systemd[1]: Started nova_compute container.
Dec 02 08:25:02 np0005541914.localdomain sudo[78057]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:03 np0005541914.localdomain sudo[78194]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uaeamehfdjwkyovbcprbbantsuzhqlkc ; /usr/bin/python3
Dec 02 08:25:03 np0005541914.localdomain sudo[78194]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:25:03 np0005541914.localdomain python3[78196]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:25:03 np0005541914.localdomain sudo[78194]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:03 np0005541914.localdomain sudo[78242]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqtsabgffraefwaagmihcqnhrhyxsvzu ; /usr/bin/python3
Dec 02 08:25:03 np0005541914.localdomain sudo[78242]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:25:03 np0005541914.localdomain sudo[78242]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:04 np0005541914.localdomain sudo[78285]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jvoacfsvkrclnrhrrzwnvmcwzumhbtaz ; /usr/bin/python3
Dec 02 08:25:04 np0005541914.localdomain sudo[78285]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:25:04 np0005541914.localdomain sudo[78285]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:04 np0005541914.localdomain sudo[78315]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gbydxnpzgivjwrabiyjsakqqtqfcycao ; /usr/bin/python3
Dec 02 08:25:04 np0005541914.localdomain sudo[78315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:25:04 np0005541914.localdomain python3[78317]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005541914 step=5 update_config_hash_only=False
Dec 02 08:25:04 np0005541914.localdomain sudo[78315]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:05 np0005541914.localdomain sudo[78331]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rtnujyhufzxqykdngriimmvpibyzhage ; /usr/bin/python3
Dec 02 08:25:05 np0005541914.localdomain sudo[78331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:25:05 np0005541914.localdomain python3[78333]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 08:25:05 np0005541914.localdomain sudo[78331]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:05 np0005541914.localdomain sudo[78347]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lulgjawzliytgadpodfcpabfbcoossxa ; /usr/bin/python3
Dec 02 08:25:05 np0005541914.localdomain sudo[78347]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Dec 02 08:25:05 np0005541914.localdomain python3[78349]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True
Dec 02 08:25:05 np0005541914.localdomain sudo[78347]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:25:13 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:25:14 np0005541914.localdomain recover_tripleo_nova_virtqemud[78352]: 61907
Dec 02 08:25:14 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:25:14 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:25:14 np0005541914.localdomain podman[78350]: 2025-12-02 08:25:14.08408584 +0000 UTC m=+0.082659299 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Dec 02 08:25:14 np0005541914.localdomain podman[78350]: 2025-12-02 08:25:14.307863565 +0000 UTC m=+0.306437024 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12)
Dec 02 08:25:14 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:25:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:25:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:25:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:25:19 np0005541914.localdomain podman[78382]: 2025-12-02 08:25:19.07086032 +0000 UTC m=+0.071479651 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, container_name=ceilometer_agent_compute)
Dec 02 08:25:19 np0005541914.localdomain podman[78381]: 2025-12-02 08:25:19.143744551 +0000 UTC m=+0.142842928 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 02 08:25:19 np0005541914.localdomain podman[78382]: 2025-12-02 08:25:19.151434327 +0000 UTC m=+0.152053638 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, tcib_managed=true, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 02 08:25:19 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:25:19 np0005541914.localdomain podman[78381]: 2025-12-02 08:25:19.180889233 +0000 UTC m=+0.179987600 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:25:19 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:25:19 np0005541914.localdomain podman[78383]: 2025-12-02 08:25:19.106980071 +0000 UTC m=+0.099821113 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:25:19 np0005541914.localdomain podman[78383]: 2025-12-02 08:25:19.238626679 +0000 UTC m=+0.231467741 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Dec 02 08:25:19 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:25:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:25:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:25:21 np0005541914.localdomain podman[78456]: 2025-12-02 08:25:21.077818408 +0000 UTC m=+0.083135744 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, container_name=nova_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.expose-services=)
Dec 02 08:25:21 np0005541914.localdomain podman[78457]: 2025-12-02 08:25:21.129791575 +0000 UTC m=+0.133140853 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, distribution-scope=public, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:25:21 np0005541914.localdomain podman[78456]: 2025-12-02 08:25:21.135155082 +0000 UTC m=+0.140472358 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_compute, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, version=17.1.12)
Dec 02 08:25:21 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:25:21 np0005541914.localdomain podman[78457]: 2025-12-02 08:25:21.520945028 +0000 UTC m=+0.524294326 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 02 08:25:21 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:25:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:25:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:25:24 np0005541914.localdomain podman[78505]: 2025-12-02 08:25:24.08858197 +0000 UTC m=+0.089817590 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true)
Dec 02 08:25:24 np0005541914.localdomain systemd[1]: tmp-crun.NGNuyN.mount: Deactivated successfully.
Dec 02 08:25:24 np0005541914.localdomain podman[78506]: 2025-12-02 08:25:24.1566204 +0000 UTC m=+0.153936804 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, architecture=x86_64, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4)
Dec 02 08:25:24 np0005541914.localdomain podman[78506]: 2025-12-02 08:25:24.206792314 +0000 UTC m=+0.204108718 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team)
Dec 02 08:25:24 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:25:24 np0005541914.localdomain podman[78505]: 2025-12-02 08:25:24.258838892 +0000 UTC m=+0.260074482 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Dec 02 08:25:24 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:25:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:25:29 np0005541914.localdomain systemd[1]: tmp-crun.gytKHF.mount: Deactivated successfully.
Dec 02 08:25:29 np0005541914.localdomain podman[78554]: 2025-12-02 08:25:29.090602188 +0000 UTC m=+0.092187600 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:25:29 np0005541914.localdomain podman[78554]: 2025-12-02 08:25:29.108299897 +0000 UTC m=+0.109885339 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:25:29 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:25:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:25:31 np0005541914.localdomain podman[78574]: 2025-12-02 08:25:31.090703243 +0000 UTC m=+0.090870380 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, release=1761123044, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible)
Dec 02 08:25:31 np0005541914.localdomain podman[78574]: 2025-12-02 08:25:31.127583947 +0000 UTC m=+0.127751064 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, config_id=tripleo_step3, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., container_name=iscsid)
Dec 02 08:25:31 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:25:35 np0005541914.localdomain sshd[78592]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:25:35 np0005541914.localdomain sshd[78592]: Accepted publickey for zuul from 192.168.122.100 port 41316 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 08:25:35 np0005541914.localdomain systemd-logind[760]: New session 33 of user zuul.
Dec 02 08:25:35 np0005541914.localdomain systemd[1]: Started Session 33 of User zuul.
Dec 02 08:25:35 np0005541914.localdomain sshd[78592]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 08:25:35 np0005541914.localdomain sudo[78699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iohgpjuabivxotjqjkvhltnwintltjjy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764663935.2911742-40184-102535007208001/AnsiballZ_setup.py
Dec 02 08:25:35 np0005541914.localdomain sudo[78699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 08:25:36 np0005541914.localdomain python3[78701]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 08:25:38 np0005541914.localdomain sudo[78699]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:42 np0005541914.localdomain sshd[78888]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:25:42 np0005541914.localdomain sshd[78888]: Invalid user sol from 45.148.10.240 port 56268
Dec 02 08:25:42 np0005541914.localdomain sshd[78888]: Connection closed by invalid user sol 45.148.10.240 port 56268 [preauth]
Dec 02 08:25:43 np0005541914.localdomain sudo[78964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-svyvkmtxdknajmmleiscwiqjapdshzen ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764663942.9568117-40246-264445740169911/AnsiballZ_dnf.py
Dec 02 08:25:43 np0005541914.localdomain sudo[78964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 08:25:43 np0005541914.localdomain sudo[78967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:25:43 np0005541914.localdomain sudo[78967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:25:43 np0005541914.localdomain sudo[78967]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:43 np0005541914.localdomain sudo[78982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:25:43 np0005541914.localdomain sudo[78982]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:25:43 np0005541914.localdomain python3[78966]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None
Dec 02 08:25:44 np0005541914.localdomain sudo[78982]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:44 np0005541914.localdomain sudo[79031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:25:44 np0005541914.localdomain sudo[79031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:25:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:25:44 np0005541914.localdomain sudo[79031]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:44 np0005541914.localdomain podman[79046]: 2025-12-02 08:25:44.924895604 +0000 UTC m=+0.068137072 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step1, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, container_name=metrics_qdr, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:25:45 np0005541914.localdomain podman[79046]: 2025-12-02 08:25:45.125751856 +0000 UTC m=+0.268993324 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp17/openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step1, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:25:45 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:25:46 np0005541914.localdomain sudo[78964]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:47 np0005541914.localdomain sudo[79183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-elnqacvyhonbbtjwycllfauvrznxktql ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764663947.5707233-40302-258286669402253/AnsiballZ_iptables.py
Dec 02 08:25:47 np0005541914.localdomain sudo[79183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 08:25:48 np0005541914.localdomain python3[79185]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None
Dec 02 08:25:48 np0005541914.localdomain kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 02 08:25:48 np0005541914.localdomain systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation.
Dec 02 08:25:48 np0005541914.localdomain systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 02 08:25:48 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 08:25:48 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 08:25:48 np0005541914.localdomain sudo[79183]: pam_unix(sudo:session): session closed for user root
Dec 02 08:25:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:25:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:25:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:25:50 np0005541914.localdomain systemd[1]: tmp-crun.RV7u4j.mount: Deactivated successfully.
Dec 02 08:25:50 np0005541914.localdomain podman[79234]: 2025-12-02 08:25:50.060114146 +0000 UTC m=+0.060225240 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, name=rhosp17/openstack-cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:25:50 np0005541914.localdomain podman[79234]: 2025-12-02 08:25:50.072228682 +0000 UTC m=+0.072339786 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:25:50 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:25:50 np0005541914.localdomain systemd[1]: tmp-crun.5Ru3ID.mount: Deactivated successfully.
Dec 02 08:25:50 np0005541914.localdomain podman[79236]: 2025-12-02 08:25:50.124464156 +0000 UTC m=+0.118931095 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, release=1761123044, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com)
Dec 02 08:25:50 np0005541914.localdomain podman[79235]: 2025-12-02 08:25:50.173203739 +0000 UTC m=+0.170469740 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, config_id=tripleo_step4, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Dec 02 08:25:50 np0005541914.localdomain podman[79236]: 2025-12-02 08:25:50.179977468 +0000 UTC m=+0.174444367 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, version=17.1.12, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc.)
Dec 02 08:25:50 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:25:50 np0005541914.localdomain podman[79235]: 2025-12-02 08:25:50.226842025 +0000 UTC m=+0.224108016 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute)
Dec 02 08:25:50 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:25:51 np0005541914.localdomain systemd[1]: tmp-crun.eMZztZ.mount: Deactivated successfully.
Dec 02 08:25:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:25:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:25:52 np0005541914.localdomain podman[79305]: 2025-12-02 08:25:52.072949826 +0000 UTC m=+0.072416649 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:25:52 np0005541914.localdomain podman[79305]: 2025-12-02 08:25:52.153467592 +0000 UTC m=+0.152934475 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-type=git, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:25:52 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:25:52 np0005541914.localdomain podman[79306]: 2025-12-02 08:25:52.153860244 +0000 UTC m=+0.147926088 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:25:52 np0005541914.localdomain podman[79306]: 2025-12-02 08:25:52.495908214 +0000 UTC m=+0.489974068 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-nova-compute-container)
Dec 02 08:25:52 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:25:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:25:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:25:55 np0005541914.localdomain systemd[1]: tmp-crun.LBmxju.mount: Deactivated successfully.
Dec 02 08:25:55 np0005541914.localdomain podman[79351]: 2025-12-02 08:25:55.077336169 +0000 UTC m=+0.082475254 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, release=1761123044, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible)
Dec 02 08:25:55 np0005541914.localdomain systemd[1]: tmp-crun.8v8p43.mount: Deactivated successfully.
Dec 02 08:25:55 np0005541914.localdomain podman[79352]: 2025-12-02 08:25:55.142027211 +0000 UTC m=+0.138425219 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, container_name=ovn_controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:25:55 np0005541914.localdomain podman[79351]: 2025-12-02 08:25:55.166989103 +0000 UTC m=+0.172128198 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4)
Dec 02 08:25:55 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:25:55 np0005541914.localdomain podman[79352]: 2025-12-02 08:25:55.197671645 +0000 UTC m=+0.194069643 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:25:55 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:25:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:26:00 np0005541914.localdomain podman[79400]: 2025-12-02 08:26:00.076255926 +0000 UTC m=+0.084092632 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, container_name=collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:26:00 np0005541914.localdomain podman[79400]: 2025-12-02 08:26:00.112992045 +0000 UTC m=+0.120828691 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12)
Dec 02 08:26:00 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:26:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:26:02 np0005541914.localdomain podman[79420]: 2025-12-02 08:26:02.045543756 +0000 UTC m=+0.053881274 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid)
Dec 02 08:26:02 np0005541914.localdomain podman[79420]: 2025-12-02 08:26:02.079902565 +0000 UTC m=+0.088240073 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 02 08:26:02 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:26:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:26:16 np0005541914.localdomain podman[79440]: 2025-12-02 08:26:16.072002369 +0000 UTC m=+0.079347523 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:26:16 np0005541914.localdomain podman[79440]: 2025-12-02 08:26:16.343953689 +0000 UTC m=+0.351298853 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=)
Dec 02 08:26:16 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:26:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:26:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:26:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:26:20 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:26:21 np0005541914.localdomain recover_tripleo_nova_virtqemud[79483]: 61907
Dec 02 08:26:21 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:26:21 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:26:21 np0005541914.localdomain podman[79469]: 2025-12-02 08:26:21.071396957 +0000 UTC m=+0.077278881 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, distribution-scope=public, container_name=logrotate_crond, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 02 08:26:21 np0005541914.localdomain podman[79469]: 2025-12-02 08:26:21.106047926 +0000 UTC m=+0.111929850 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:26:21 np0005541914.localdomain podman[79470]: 2025-12-02 08:26:21.13239611 +0000 UTC m=+0.136643696 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, tcib_managed=true, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:26:21 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:26:21 np0005541914.localdomain podman[79471]: 2025-12-02 08:26:21.206829367 +0000 UTC m=+0.208980121 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:26:21 np0005541914.localdomain podman[79470]: 2025-12-02 08:26:21.219924301 +0000 UTC m=+0.224171847 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public)
Dec 02 08:26:21 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:26:21 np0005541914.localdomain podman[79471]: 2025-12-02 08:26:21.234809629 +0000 UTC m=+0.236960393 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:26:21 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:26:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:26:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:26:23 np0005541914.localdomain podman[79546]: 2025-12-02 08:26:23.058583295 +0000 UTC m=+0.067851435 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute)
Dec 02 08:26:23 np0005541914.localdomain podman[79547]: 2025-12-02 08:26:23.108896533 +0000 UTC m=+0.115392561 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044)
Dec 02 08:26:23 np0005541914.localdomain podman[79546]: 2025-12-02 08:26:23.163190228 +0000 UTC m=+0.172458418 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, managed_by=tripleo_ansible)
Dec 02 08:26:23 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:26:23 np0005541914.localdomain podman[79547]: 2025-12-02 08:26:23.421203659 +0000 UTC m=+0.427699697 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible)
Dec 02 08:26:23 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:26:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:26:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:26:26 np0005541914.localdomain systemd[1]: tmp-crun.xU5goo.mount: Deactivated successfully.
Dec 02 08:26:26 np0005541914.localdomain podman[79594]: 2025-12-02 08:26:26.086720247 +0000 UTC m=+0.093479788 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64)
Dec 02 08:26:26 np0005541914.localdomain podman[79594]: 2025-12-02 08:26:26.123802966 +0000 UTC m=+0.130562487 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:26:26 np0005541914.localdomain systemd[1]: tmp-crun.09tdiy.mount: Deactivated successfully.
Dec 02 08:26:26 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:26:26 np0005541914.localdomain podman[79595]: 2025-12-02 08:26:26.133538282 +0000 UTC m=+0.137351237 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 02 08:26:26 np0005541914.localdomain podman[79595]: 2025-12-02 08:26:26.213858042 +0000 UTC m=+0.217671027 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Dec 02 08:26:26 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:26:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:26:31 np0005541914.localdomain podman[79641]: 2025-12-02 08:26:31.081721468 +0000 UTC m=+0.073285864 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true)
Dec 02 08:26:31 np0005541914.localdomain podman[79641]: 2025-12-02 08:26:31.091390012 +0000 UTC m=+0.082954328 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:26:31 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:26:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:26:33 np0005541914.localdomain podman[79660]: 2025-12-02 08:26:33.062571347 +0000 UTC m=+0.068804923 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, version=17.1.12, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 02 08:26:33 np0005541914.localdomain podman[79660]: 2025-12-02 08:26:33.070414518 +0000 UTC m=+0.076648104 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:26:33 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:26:44 np0005541914.localdomain sudo[79680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:26:44 np0005541914.localdomain sudo[79680]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:26:44 np0005541914.localdomain sudo[79680]: pam_unix(sudo:session): session closed for user root
Dec 02 08:26:45 np0005541914.localdomain sudo[79695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:26:45 np0005541914.localdomain sudo[79695]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:26:45 np0005541914.localdomain sudo[79695]: pam_unix(sudo:session): session closed for user root
Dec 02 08:26:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:26:47 np0005541914.localdomain systemd[1]: tmp-crun.YITxTk.mount: Deactivated successfully.
Dec 02 08:26:47 np0005541914.localdomain podman[79743]: 2025-12-02 08:26:47.079481559 +0000 UTC m=+0.082774033 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step1)
Dec 02 08:26:47 np0005541914.localdomain podman[79743]: 2025-12-02 08:26:47.328898517 +0000 UTC m=+0.332190951 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:26:47 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:26:47 np0005541914.localdomain sshd[78592]: pam_unix(sshd:session): session closed for user zuul
Dec 02 08:26:47 np0005541914.localdomain systemd[1]: session-33.scope: Deactivated successfully.
Dec 02 08:26:47 np0005541914.localdomain systemd[1]: session-33.scope: Consumed 6.019s CPU time.
Dec 02 08:26:47 np0005541914.localdomain systemd-logind[760]: Session 33 logged out. Waiting for processes to exit.
Dec 02 08:26:47 np0005541914.localdomain systemd-logind[760]: Removed session 33.
Dec 02 08:26:48 np0005541914.localdomain sudo[79773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:26:48 np0005541914.localdomain sudo[79773]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:26:48 np0005541914.localdomain sudo[79773]: pam_unix(sudo:session): session closed for user root
Dec 02 08:26:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:26:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:26:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:26:52 np0005541914.localdomain systemd[1]: tmp-crun.lGnM8Q.mount: Deactivated successfully.
Dec 02 08:26:52 np0005541914.localdomain podman[79834]: 2025-12-02 08:26:52.050227209 +0000 UTC m=+0.056872582 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, config_id=tripleo_step4)
Dec 02 08:26:52 np0005541914.localdomain podman[79833]: 2025-12-02 08:26:52.069339371 +0000 UTC m=+0.075261142 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-cron, container_name=logrotate_crond, release=1761123044, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=)
Dec 02 08:26:52 np0005541914.localdomain podman[79833]: 2025-12-02 08:26:52.076702147 +0000 UTC m=+0.082623978 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12)
Dec 02 08:26:52 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:26:52 np0005541914.localdomain podman[79834]: 2025-12-02 08:26:52.134148515 +0000 UTC m=+0.140793908 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 02 08:26:52 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:26:52 np0005541914.localdomain podman[79835]: 2025-12-02 08:26:52.138656657 +0000 UTC m=+0.139275673 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:26:52 np0005541914.localdomain podman[79835]: 2025-12-02 08:26:52.221879762 +0000 UTC m=+0.222498688 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.4)
Dec 02 08:26:52 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:26:53 np0005541914.localdomain systemd[1]: tmp-crun.3w2wqi.mount: Deactivated successfully.
Dec 02 08:26:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:26:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:26:54 np0005541914.localdomain systemd[1]: tmp-crun.S1Je7S.mount: Deactivated successfully.
Dec 02 08:26:54 np0005541914.localdomain podman[79907]: 2025-12-02 08:26:54.084104118 +0000 UTC m=+0.083821784 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com)
Dec 02 08:26:54 np0005541914.localdomain systemd[1]: tmp-crun.CgEWQG.mount: Deactivated successfully.
Dec 02 08:26:54 np0005541914.localdomain podman[79906]: 2025-12-02 08:26:54.135999772 +0000 UTC m=+0.138028706 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_compute, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:26:54 np0005541914.localdomain podman[79906]: 2025-12-02 08:26:54.185840586 +0000 UTC m=+0.187869520 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, container_name=nova_compute, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, config_id=tripleo_step5, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12)
Dec 02 08:26:54 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:26:54 np0005541914.localdomain podman[79907]: 2025-12-02 08:26:54.488830789 +0000 UTC m=+0.488548435 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vcs-type=git, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12)
Dec 02 08:26:54 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:26:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:26:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:26:57 np0005541914.localdomain systemd[1]: tmp-crun.nd9c3N.mount: Deactivated successfully.
Dec 02 08:26:57 np0005541914.localdomain podman[79955]: 2025-12-02 08:26:57.104262635 +0000 UTC m=+0.103858542 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:26:57 np0005541914.localdomain podman[79954]: 2025-12-02 08:26:57.063487726 +0000 UTC m=+0.069361088 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 02 08:26:57 np0005541914.localdomain podman[79954]: 2025-12-02 08:26:57.143796886 +0000 UTC m=+0.149670258 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git)
Dec 02 08:26:57 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:26:57 np0005541914.localdomain podman[79955]: 2025-12-02 08:26:57.202877862 +0000 UTC m=+0.202473789 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vcs-type=git)
Dec 02 08:26:57 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:27:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:27:02 np0005541914.localdomain systemd[1]: tmp-crun.k34diR.mount: Deactivated successfully.
Dec 02 08:27:02 np0005541914.localdomain podman[80000]: 2025-12-02 08:27:02.046831696 +0000 UTC m=+0.059259013 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, container_name=collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Dec 02 08:27:02 np0005541914.localdomain podman[80000]: 2025-12-02 08:27:02.051497213 +0000 UTC m=+0.063924520 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true)
Dec 02 08:27:02 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:27:02 np0005541914.localdomain sshd[80021]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:27:02 np0005541914.localdomain sshd[80021]: Accepted publickey for zuul from 38.102.83.114 port 44084 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 08:27:02 np0005541914.localdomain systemd-logind[760]: New session 34 of user zuul.
Dec 02 08:27:02 np0005541914.localdomain systemd[1]: Started Session 34 of User zuul.
Dec 02 08:27:02 np0005541914.localdomain sshd[80021]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 08:27:02 np0005541914.localdomain sudo[80038]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izjnjncdsmrsnjbbnfiavwhuteqpoxrf ; /usr/bin/python3
Dec 02 08:27:02 np0005541914.localdomain sudo[80038]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 08:27:03 np0005541914.localdomain python3[80040]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 08:27:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:27:04 np0005541914.localdomain systemd[1]: tmp-crun.WxDWdy.mount: Deactivated successfully.
Dec 02 08:27:04 np0005541914.localdomain podman[80042]: 2025-12-02 08:27:04.104240426 +0000 UTC m=+0.101646928 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com)
Dec 02 08:27:04 np0005541914.localdomain podman[80042]: 2025-12-02 08:27:04.120821673 +0000 UTC m=+0.118228185 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044)
Dec 02 08:27:04 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:27:05 np0005541914.localdomain sudo[80038]: pam_unix(sudo:session): session closed for user root
Dec 02 08:27:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 08:27:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 2400.1 total, 600.0 interval
                                                          Cumulative writes: 4399 writes, 20K keys, 4399 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4399 writes, 504 syncs, 8.73 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 08:27:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 08:27:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 2400.2 total, 600.0 interval
                                                          Cumulative writes: 5262 writes, 23K keys, 5262 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5262 writes, 560 syncs, 9.40 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 08:27:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:27:18 np0005541914.localdomain podman[80060]: 2025-12-02 08:27:18.075327612 +0000 UTC m=+0.076559241 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:27:18 np0005541914.localdomain podman[80060]: 2025-12-02 08:27:18.299514219 +0000 UTC m=+0.300745828 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:27:18 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:27:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:27:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:27:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:27:23 np0005541914.localdomain podman[80090]: 2025-12-02 08:27:23.086519475 +0000 UTC m=+0.083836349 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, version=17.1.12, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:27:23 np0005541914.localdomain podman[80089]: 2025-12-02 08:27:23.062420802 +0000 UTC m=+0.068511630 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, version=17.1.12, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 02 08:27:23 np0005541914.localdomain systemd[1]: tmp-crun.paoFkS.mount: Deactivated successfully.
Dec 02 08:27:23 np0005541914.localdomain podman[80091]: 2025-12-02 08:27:23.125669618 +0000 UTC m=+0.123026774 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible)
Dec 02 08:27:23 np0005541914.localdomain podman[80090]: 2025-12-02 08:27:23.1369289 +0000 UTC m=+0.134245794 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, version=17.1.12, architecture=x86_64)
Dec 02 08:27:23 np0005541914.localdomain podman[80089]: 2025-12-02 08:27:23.146969153 +0000 UTC m=+0.153059971 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.12, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:27:23 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:27:23 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:27:23 np0005541914.localdomain podman[80091]: 2025-12-02 08:27:23.200949469 +0000 UTC m=+0.198306595 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 02 08:27:23 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:27:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:27:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:27:25 np0005541914.localdomain systemd[1]: tmp-crun.60G0YW.mount: Deactivated successfully.
Dec 02 08:27:25 np0005541914.localdomain podman[80158]: 2025-12-02 08:27:25.076700922 +0000 UTC m=+0.081944970 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1)
Dec 02 08:27:25 np0005541914.localdomain podman[80159]: 2025-12-02 08:27:25.127365544 +0000 UTC m=+0.126247123 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, architecture=x86_64, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:27:25 np0005541914.localdomain podman[80158]: 2025-12-02 08:27:25.152721256 +0000 UTC m=+0.157965364 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, managed_by=tripleo_ansible, release=1761123044, vcs-type=git)
Dec 02 08:27:25 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:27:25 np0005541914.localdomain podman[80159]: 2025-12-02 08:27:25.478887265 +0000 UTC m=+0.477768844 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute)
Dec 02 08:27:25 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:27:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:27:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:27:28 np0005541914.localdomain podman[80207]: 2025-12-02 08:27:28.067202095 +0000 UTC m=+0.074221219 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 02 08:27:28 np0005541914.localdomain systemd[1]: tmp-crun.2QO5L8.mount: Deactivated successfully.
Dec 02 08:27:28 np0005541914.localdomain podman[80207]: 2025-12-02 08:27:28.137694307 +0000 UTC m=+0.144713371 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:27:28 np0005541914.localdomain podman[80208]: 2025-12-02 08:27:28.140250377 +0000 UTC m=+0.143289776 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:27:28 np0005541914.localdomain podman[80208]: 2025-12-02 08:27:28.166829857 +0000 UTC m=+0.169869216 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, container_name=ovn_controller, distribution-scope=public, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible)
Dec 02 08:27:28 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:27:28 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:27:32 np0005541914.localdomain sudo[80269]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-asmratncbexcivaggohtmcgoucbtmaor ; /usr/bin/python3
Dec 02 08:27:32 np0005541914.localdomain sudo[80269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 08:27:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:27:32 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:27:32 np0005541914.localdomain recover_tripleo_nova_virtqemud[80274]: 61907
Dec 02 08:27:32 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:27:32 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:27:32 np0005541914.localdomain systemd[1]: tmp-crun.Q33nmP.mount: Deactivated successfully.
Dec 02 08:27:32 np0005541914.localdomain podman[80272]: 2025-12-02 08:27:32.593741778 +0000 UTC m=+0.110912655 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-collectd, vcs-type=git)
Dec 02 08:27:32 np0005541914.localdomain podman[80272]: 2025-12-02 08:27:32.628855075 +0000 UTC m=+0.146025942 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z)
Dec 02 08:27:32 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:27:32 np0005541914.localdomain python3[80271]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 02 08:27:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:27:35 np0005541914.localdomain podman[80295]: 2025-12-02 08:27:35.079782784 +0000 UTC m=+0.081048363 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, container_name=iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:27:35 np0005541914.localdomain podman[80295]: 2025-12-02 08:27:35.088444965 +0000 UTC m=+0.089710514 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T23:44:13Z, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.component=openstack-iscsid-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4)
Dec 02 08:27:35 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:27:36 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 08:27:36 np0005541914.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 02 08:27:36 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 08:27:36 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 02 08:27:36 np0005541914.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 02 08:27:36 np0005541914.localdomain systemd[1]: run-r8aad0ce0a02f400091ab72c1ae4e2496.service: Deactivated successfully.
Dec 02 08:27:36 np0005541914.localdomain systemd[1]: run-r1a9de5c2cb8c47c4aac0be42b57b46ab.service: Deactivated successfully.
Dec 02 08:27:37 np0005541914.localdomain sudo[80269]: pam_unix(sudo:session): session closed for user root
Dec 02 08:27:46 np0005541914.localdomain sshd[80463]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:27:46 np0005541914.localdomain sshd[80463]: Invalid user sol from 45.148.10.240 port 49488
Dec 02 08:27:46 np0005541914.localdomain sshd[80463]: Connection closed by invalid user sol 45.148.10.240 port 49488 [preauth]
Dec 02 08:27:48 np0005541914.localdomain sudo[80466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:27:48 np0005541914.localdomain sudo[80466]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:27:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:27:48 np0005541914.localdomain sudo[80466]: pam_unix(sudo:session): session closed for user root
Dec 02 08:27:48 np0005541914.localdomain podman[80499]: 2025-12-02 08:27:48.823783059 +0000 UTC m=+0.079781287 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:27:48 np0005541914.localdomain sudo[80506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 08:27:48 np0005541914.localdomain sudo[80506]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:27:49 np0005541914.localdomain podman[80499]: 2025-12-02 08:27:49.028658276 +0000 UTC m=+0.284656554 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:27:49 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:27:49 np0005541914.localdomain sudo[80506]: pam_unix(sudo:session): session closed for user root
Dec 02 08:27:49 np0005541914.localdomain sudo[80569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:27:49 np0005541914.localdomain sudo[80569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:27:49 np0005541914.localdomain sudo[80569]: pam_unix(sudo:session): session closed for user root
Dec 02 08:27:49 np0005541914.localdomain sudo[80584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:27:49 np0005541914.localdomain sudo[80584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:27:50 np0005541914.localdomain sudo[80584]: pam_unix(sudo:session): session closed for user root
Dec 02 08:27:50 np0005541914.localdomain sudo[80654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:27:50 np0005541914.localdomain sudo[80654]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:27:50 np0005541914.localdomain sudo[80654]: pam_unix(sudo:session): session closed for user root
Dec 02 08:27:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:27:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:27:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:27:54 np0005541914.localdomain podman[80670]: 2025-12-02 08:27:54.090020078 +0000 UTC m=+0.086244965 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:27:54 np0005541914.localdomain podman[80670]: 2025-12-02 08:27:54.123906479 +0000 UTC m=+0.120131316 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-19T00:11:48Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, config_id=tripleo_step4)
Dec 02 08:27:54 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:27:54 np0005541914.localdomain podman[80671]: 2025-12-02 08:27:54.204490056 +0000 UTC m=+0.192707501 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, vcs-type=git, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:27:54 np0005541914.localdomain systemd[1]: tmp-crun.JiBZCI.mount: Deactivated successfully.
Dec 02 08:27:54 np0005541914.localdomain podman[80669]: 2025-12-02 08:27:54.264707047 +0000 UTC m=+0.261041217 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, distribution-scope=public, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, release=1761123044, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible)
Dec 02 08:27:54 np0005541914.localdomain podman[80669]: 2025-12-02 08:27:54.274797208 +0000 UTC m=+0.271131418 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team)
Dec 02 08:27:54 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:27:54 np0005541914.localdomain podman[80671]: 2025-12-02 08:27:54.319087778 +0000 UTC m=+0.307305163 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1)
Dec 02 08:27:54 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:27:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:27:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:27:56 np0005541914.localdomain systemd[1]: tmp-crun.6pjI6v.mount: Deactivated successfully.
Dec 02 08:27:56 np0005541914.localdomain podman[80741]: 2025-12-02 08:27:56.095638867 +0000 UTC m=+0.089930957 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=nova_compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:27:56 np0005541914.localdomain podman[80742]: 2025-12-02 08:27:56.079420847 +0000 UTC m=+0.072165465 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=)
Dec 02 08:27:56 np0005541914.localdomain podman[80741]: 2025-12-02 08:27:56.143729052 +0000 UTC m=+0.138021122 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step5, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute)
Dec 02 08:27:56 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:27:56 np0005541914.localdomain podman[80742]: 2025-12-02 08:27:56.456124874 +0000 UTC m=+0.448869542 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, build-date=2025-11-19T00:36:58Z, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=nova_migration_target, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:27:56 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:27:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:27:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:27:59 np0005541914.localdomain systemd[1]: tmp-crun.PwsfwB.mount: Deactivated successfully.
Dec 02 08:27:59 np0005541914.localdomain podman[80791]: 2025-12-02 08:27:59.095724245 +0000 UTC m=+0.094347111 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 02 08:27:59 np0005541914.localdomain podman[80790]: 2025-12-02 08:27:59.06854188 +0000 UTC m=+0.077391590 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:27:59 np0005541914.localdomain podman[80790]: 2025-12-02 08:27:59.151979046 +0000 UTC m=+0.160828776 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:27:59 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:27:59 np0005541914.localdomain podman[80791]: 2025-12-02 08:27:59.172444794 +0000 UTC m=+0.171067650 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:27:59 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:28:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:28:03 np0005541914.localdomain podman[80838]: 2025-12-02 08:28:03.063062074 +0000 UTC m=+0.072347320 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, release=1761123044, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:28:03 np0005541914.localdomain podman[80838]: 2025-12-02 08:28:03.077841414 +0000 UTC m=+0.087126830 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, container_name=collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1)
Dec 02 08:28:03 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:28:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:28:06 np0005541914.localdomain systemd[1]: tmp-crun.4W15OX.mount: Deactivated successfully.
Dec 02 08:28:06 np0005541914.localdomain podman[80857]: 2025-12-02 08:28:06.075861743 +0000 UTC m=+0.080764483 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step3, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.expose-services=, version=17.1.12)
Dec 02 08:28:06 np0005541914.localdomain podman[80857]: 2025-12-02 08:28:06.109808685 +0000 UTC m=+0.114711425 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 02 08:28:06 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:28:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:28:20 np0005541914.localdomain podman[80874]: 2025-12-02 08:28:20.078539559 +0000 UTC m=+0.083727896 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:28:20 np0005541914.localdomain podman[80874]: 2025-12-02 08:28:20.266318452 +0000 UTC m=+0.271506719 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, architecture=x86_64, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:28:20 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:28:23 np0005541914.localdomain sudo[80916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-haiifaxumqzonccxcgtsbsbrnuncdghn ; /usr/bin/python3
Dec 02 08:28:23 np0005541914.localdomain sudo[80916]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 08:28:24 np0005541914.localdomain python3[80918]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 08:28:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:28:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:28:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:28:25 np0005541914.localdomain podman[80921]: 2025-12-02 08:28:25.091107846 +0000 UTC m=+0.088003204 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:28:25 np0005541914.localdomain podman[80921]: 2025-12-02 08:28:25.124954745 +0000 UTC m=+0.121850113 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, vcs-type=git)
Dec 02 08:28:25 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:28:25 np0005541914.localdomain podman[80923]: 2025-12-02 08:28:25.133582795 +0000 UTC m=+0.128718485 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 02 08:28:25 np0005541914.localdomain podman[80923]: 2025-12-02 08:28:25.215992693 +0000 UTC m=+0.211128363 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:28:25 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:28:25 np0005541914.localdomain podman[80922]: 2025-12-02 08:28:25.187144781 +0000 UTC m=+0.185048778 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, container_name=ceilometer_agent_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc.)
Dec 02 08:28:25 np0005541914.localdomain podman[80922]: 2025-12-02 08:28:25.26993854 +0000 UTC m=+0.267842467 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:28:25 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:28:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:28:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:28:27 np0005541914.localdomain podman[81112]: 2025-12-02 08:28:27.067691429 +0000 UTC m=+0.072989517 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, url=https://www.redhat.com, vcs-type=git, release=1761123044, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 02 08:28:27 np0005541914.localdomain systemd[1]: tmp-crun.P5uJ2g.mount: Deactivated successfully.
Dec 02 08:28:27 np0005541914.localdomain podman[81111]: 2025-12-02 08:28:27.141476507 +0000 UTC m=+0.146999911 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute)
Dec 02 08:28:27 np0005541914.localdomain podman[81111]: 2025-12-02 08:28:27.197630526 +0000 UTC m=+0.203153970 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-19T00:36:58Z, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute)
Dec 02 08:28:27 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:28:27 np0005541914.localdomain rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 08:28:27 np0005541914.localdomain podman[81112]: 2025-12-02 08:28:27.420710099 +0000 UTC m=+0.426008117 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4)
Dec 02 08:28:27 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:28:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:28:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:28:30 np0005541914.localdomain systemd[1]: tmp-crun.3zPgUp.mount: Deactivated successfully.
Dec 02 08:28:30 np0005541914.localdomain podman[81171]: 2025-12-02 08:28:30.092046339 +0000 UTC m=+0.095343868 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, release=1761123044, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public)
Dec 02 08:28:30 np0005541914.localdomain systemd[1]: tmp-crun.jNOmuS.mount: Deactivated successfully.
Dec 02 08:28:30 np0005541914.localdomain podman[81171]: 2025-12-02 08:28:30.148510987 +0000 UTC m=+0.151808526 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64)
Dec 02 08:28:30 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:28:30 np0005541914.localdomain podman[81170]: 2025-12-02 08:28:30.153936937 +0000 UTC m=+0.157989636 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com)
Dec 02 08:28:30 np0005541914.localdomain podman[81170]: 2025-12-02 08:28:30.234500394 +0000 UTC m=+0.238553063 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team)
Dec 02 08:28:30 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:28:31 np0005541914.localdomain sudo[80916]: pam_unix(sudo:session): session closed for user root
Dec 02 08:28:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:28:34 np0005541914.localdomain systemd[1]: tmp-crun.texLSm.mount: Deactivated successfully.
Dec 02 08:28:34 np0005541914.localdomain podman[81278]: 2025-12-02 08:28:34.088402575 +0000 UTC m=+0.090348079 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., container_name=collectd, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 02 08:28:34 np0005541914.localdomain podman[81278]: 2025-12-02 08:28:34.102966639 +0000 UTC m=+0.104912163 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:28:34 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:28:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:28:37 np0005541914.localdomain podman[81298]: 2025-12-02 08:28:37.060616188 +0000 UTC m=+0.068048330 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com)
Dec 02 08:28:37 np0005541914.localdomain podman[81298]: 2025-12-02 08:28:37.074941066 +0000 UTC m=+0.082373228 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team)
Dec 02 08:28:37 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:28:42 np0005541914.localdomain sshd[81317]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:28:42 np0005541914.localdomain sshd[81317]: error: kex_exchange_identification: Connection closed by remote host
Dec 02 08:28:42 np0005541914.localdomain sshd[81317]: Connection closed by 116.196.71.115 port 36984
Dec 02 08:28:49 np0005541914.localdomain sshd[81318]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:28:50 np0005541914.localdomain sudo[81340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:28:50 np0005541914.localdomain sudo[81340]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:28:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:28:50 np0005541914.localdomain sudo[81340]: pam_unix(sudo:session): session closed for user root
Dec 02 08:28:51 np0005541914.localdomain podman[81354]: 2025-12-02 08:28:51.033417554 +0000 UTC m=+0.076034982 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 02 08:28:51 np0005541914.localdomain sudo[81364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:28:51 np0005541914.localdomain sudo[81364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:28:51 np0005541914.localdomain podman[81354]: 2025-12-02 08:28:51.283857016 +0000 UTC m=+0.326474404 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, release=1761123044, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:28:51 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:28:51 np0005541914.localdomain sudo[81364]: pam_unix(sudo:session): session closed for user root
Dec 02 08:28:52 np0005541914.localdomain sudo[81455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:28:52 np0005541914.localdomain sudo[81455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:28:52 np0005541914.localdomain sudo[81455]: pam_unix(sudo:session): session closed for user root
Dec 02 08:28:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:28:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:28:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:28:56 np0005541914.localdomain podman[81470]: 2025-12-02 08:28:56.055297678 +0000 UTC m=+0.060340866 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-cron-container, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:28:56 np0005541914.localdomain podman[81470]: 2025-12-02 08:28:56.092771939 +0000 UTC m=+0.097815127 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.buildah.version=1.41.4, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public)
Dec 02 08:28:56 np0005541914.localdomain systemd[1]: tmp-crun.4FQvta.mount: Deactivated successfully.
Dec 02 08:28:56 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:28:56 np0005541914.localdomain podman[81472]: 2025-12-02 08:28:56.111740355 +0000 UTC m=+0.112702960 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, build-date=2025-11-19T00:12:45Z, release=1761123044, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20251118.1)
Dec 02 08:28:56 np0005541914.localdomain podman[81472]: 2025-12-02 08:28:56.130625759 +0000 UTC m=+0.131588374 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible)
Dec 02 08:28:56 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:28:56 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:28:56 np0005541914.localdomain podman[81471]: 2025-12-02 08:28:56.2405238 +0000 UTC m=+0.242523853 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Dec 02 08:28:56 np0005541914.localdomain recover_tripleo_nova_virtqemud[81535]: 61907
Dec 02 08:28:56 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:28:56 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:28:56 np0005541914.localdomain podman[81471]: 2025-12-02 08:28:56.269910266 +0000 UTC m=+0.271910309 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 02 08:28:56 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:28:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:28:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:28:58 np0005541914.localdomain systemd[1]: tmp-crun.jKWj5K.mount: Deactivated successfully.
Dec 02 08:28:58 np0005541914.localdomain podman[81546]: 2025-12-02 08:28:58.088216416 +0000 UTC m=+0.085792723 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, vcs-type=git, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, version=17.1.12, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:28:58 np0005541914.localdomain podman[81545]: 2025-12-02 08:28:58.138679157 +0000 UTC m=+0.137980342 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc.)
Dec 02 08:28:58 np0005541914.localdomain podman[81545]: 2025-12-02 08:28:58.15898844 +0000 UTC m=+0.158289695 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=nova_compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:28:58 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:28:58 np0005541914.localdomain podman[81546]: 2025-12-02 08:28:58.454846124 +0000 UTC m=+0.452422431 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, architecture=x86_64, tcib_managed=true, container_name=nova_migration_target, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Dec 02 08:28:58 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:28:59 np0005541914.localdomain systemd[1]: tmp-crun.w0UmK4.mount: Deactivated successfully.
Dec 02 08:29:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:29:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:29:01 np0005541914.localdomain systemd[1]: tmp-crun.CAYpmo.mount: Deactivated successfully.
Dec 02 08:29:01 np0005541914.localdomain podman[81594]: 2025-12-02 08:29:01.072554355 +0000 UTC m=+0.083884570 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, url=https://www.redhat.com, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:29:01 np0005541914.localdomain podman[81595]: 2025-12-02 08:29:01.127333046 +0000 UTC m=+0.130525855 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, container_name=ovn_controller, vcs-type=git, batch=17.1_20251118.1, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:29:01 np0005541914.localdomain podman[81594]: 2025-12-02 08:29:01.133942389 +0000 UTC m=+0.145272604 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, release=1761123044, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:29:01 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:29:01 np0005541914.localdomain podman[81595]: 2025-12-02 08:29:01.147057104 +0000 UTC m=+0.150249893 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 02 08:29:01 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:29:01 np0005541914.localdomain sshd[81318]: ssh_dispatch_run_fatal: Connection from 116.196.71.115 port 36990: Connection timed out [preauth]
Dec 02 08:29:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:29:05 np0005541914.localdomain podman[81642]: 2025-12-02 08:29:05.070212386 +0000 UTC m=+0.066620389 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:29:05 np0005541914.localdomain podman[81642]: 2025-12-02 08:29:05.082723034 +0000 UTC m=+0.079131027 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, version=17.1.12, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:29:05 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:29:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:29:08 np0005541914.localdomain podman[81664]: 2025-12-02 08:29:08.053966269 +0000 UTC m=+0.064629905 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step3, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, release=1761123044, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git)
Dec 02 08:29:08 np0005541914.localdomain podman[81664]: 2025-12-02 08:29:08.060734208 +0000 UTC m=+0.071397844 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, container_name=iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 02 08:29:08 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:29:19 np0005541914.localdomain sudo[81696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lwufavpmbaijqopajwpophqkjdmqibkr ; /usr/bin/python3
Dec 02 08:29:19 np0005541914.localdomain sudo[81696]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 08:29:19 np0005541914.localdomain python3[81698]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 08:29:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:29:22 np0005541914.localdomain systemd[1]: tmp-crun.Bd8v4W.mount: Deactivated successfully.
Dec 02 08:29:22 np0005541914.localdomain podman[81702]: 2025-12-02 08:29:22.073914324 +0000 UTC m=+0.077416261 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:29:22 np0005541914.localdomain podman[81702]: 2025-12-02 08:29:22.272676732 +0000 UTC m=+0.276178749 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:29:22 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:29:23 np0005541914.localdomain rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 08:29:23 np0005541914.localdomain rhsm-service[6579]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Dec 02 08:29:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:29:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:29:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:29:27 np0005541914.localdomain systemd[1]: tmp-crun.wZBgxq.mount: Deactivated successfully.
Dec 02 08:29:27 np0005541914.localdomain podman[81914]: 2025-12-02 08:29:27.085208745 +0000 UTC m=+0.086265916 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1)
Dec 02 08:29:27 np0005541914.localdomain podman[81913]: 2025-12-02 08:29:27.127765566 +0000 UTC m=+0.129055634 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, release=1761123044, io.buildah.version=1.41.4, container_name=logrotate_crond)
Dec 02 08:29:27 np0005541914.localdomain podman[81913]: 2025-12-02 08:29:27.133884966 +0000 UTC m=+0.135175024 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:29:27 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:29:27 np0005541914.localdomain podman[81915]: 2025-12-02 08:29:27.18194917 +0000 UTC m=+0.181603742 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=)
Dec 02 08:29:27 np0005541914.localdomain podman[81915]: 2025-12-02 08:29:27.21471972 +0000 UTC m=+0.214374312 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, version=17.1.12, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, managed_by=tripleo_ansible)
Dec 02 08:29:27 np0005541914.localdomain podman[81914]: 2025-12-02 08:29:27.212392395 +0000 UTC m=+0.213449556 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:29:27 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:29:27 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:29:27 np0005541914.localdomain sudo[81696]: pam_unix(sudo:session): session closed for user root
Dec 02 08:29:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:29:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:29:29 np0005541914.localdomain podman[81984]: 2025-12-02 08:29:29.057293723 +0000 UTC m=+0.064811321 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1)
Dec 02 08:29:29 np0005541914.localdomain podman[81985]: 2025-12-02 08:29:29.098307612 +0000 UTC m=+0.102774725 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z)
Dec 02 08:29:29 np0005541914.localdomain podman[81984]: 2025-12-02 08:29:29.103165796 +0000 UTC m=+0.110683394 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, config_id=tripleo_step5, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:29:29 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:29:29 np0005541914.localdomain podman[81985]: 2025-12-02 08:29:29.446815616 +0000 UTC m=+0.451282749 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target)
Dec 02 08:29:29 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:29:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:29:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:29:32 np0005541914.localdomain podman[82032]: 2025-12-02 08:29:32.060696782 +0000 UTC m=+0.070895439 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 02 08:29:32 np0005541914.localdomain podman[82033]: 2025-12-02 08:29:32.129686607 +0000 UTC m=+0.136494410 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:29:32 np0005541914.localdomain podman[82032]: 2025-12-02 08:29:32.145882826 +0000 UTC m=+0.156081473 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:29:32 np0005541914.localdomain podman[82033]: 2025-12-02 08:29:32.155439052 +0000 UTC m=+0.162246865 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:29:32 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:29:32 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:29:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:29:36 np0005541914.localdomain podman[82078]: 2025-12-02 08:29:36.076865987 +0000 UTC m=+0.084080915 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, name=rhosp17/openstack-collectd, tcib_managed=true, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=collectd)
Dec 02 08:29:36 np0005541914.localdomain podman[82078]: 2025-12-02 08:29:36.087643527 +0000 UTC m=+0.094858445 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, container_name=collectd, maintainer=OpenStack TripleO Team, vcs-type=git)
Dec 02 08:29:36 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:29:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:29:39 np0005541914.localdomain systemd[1]: tmp-crun.KflI28.mount: Deactivated successfully.
Dec 02 08:29:39 np0005541914.localdomain podman[82097]: 2025-12-02 08:29:39.088641119 +0000 UTC m=+0.090203615 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:29:39 np0005541914.localdomain podman[82097]: 2025-12-02 08:29:39.123417824 +0000 UTC m=+0.124980300 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, version=17.1.12, container_name=iscsid, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible)
Dec 02 08:29:39 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:29:42 np0005541914.localdomain python3[82129]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Dec 02 08:29:50 np0005541914.localdomain sshd[82130]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:29:50 np0005541914.localdomain sshd[82130]: Invalid user sol from 45.148.10.240 port 40076
Dec 02 08:29:50 np0005541914.localdomain sshd[82130]: Connection closed by invalid user sol 45.148.10.240 port 40076 [preauth]
Dec 02 08:29:52 np0005541914.localdomain sudo[82177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:29:52 np0005541914.localdomain sudo[82177]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:29:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:29:52 np0005541914.localdomain sudo[82177]: pam_unix(sudo:session): session closed for user root
Dec 02 08:29:52 np0005541914.localdomain sudo[82198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 08:29:52 np0005541914.localdomain sudo[82198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:29:52 np0005541914.localdomain podman[82192]: 2025-12-02 08:29:52.568904691 +0000 UTC m=+0.071948748 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, container_name=metrics_qdr, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:29:52 np0005541914.localdomain podman[82192]: 2025-12-02 08:29:52.776026051 +0000 UTC m=+0.279070108 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=metrics_qdr, url=https://www.redhat.com, release=1761123044, distribution-scope=public, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:29:52 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:29:53 np0005541914.localdomain podman[82305]: 2025-12-02 08:29:53.505700188 +0000 UTC m=+0.106034865 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_BRANCH=main, ceph=True, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, release=1763362218, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 08:29:53 np0005541914.localdomain podman[82305]: 2025-12-02 08:29:53.648860872 +0000 UTC m=+0.249195489 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhceph, GIT_CLEAN=True)
Dec 02 08:29:53 np0005541914.localdomain sudo[82198]: pam_unix(sudo:session): session closed for user root
Dec 02 08:29:54 np0005541914.localdomain sudo[82369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:29:54 np0005541914.localdomain sudo[82369]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:29:54 np0005541914.localdomain sudo[82369]: pam_unix(sudo:session): session closed for user root
Dec 02 08:29:54 np0005541914.localdomain sudo[82384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:29:54 np0005541914.localdomain sudo[82384]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:29:54 np0005541914.localdomain sudo[82384]: pam_unix(sudo:session): session closed for user root
Dec 02 08:29:55 np0005541914.localdomain sudo[82431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:29:55 np0005541914.localdomain sudo[82431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:29:55 np0005541914.localdomain sudo[82431]: pam_unix(sudo:session): session closed for user root
Dec 02 08:29:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:29:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:29:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:29:58 np0005541914.localdomain podman[82447]: 2025-12-02 08:29:58.09628397 +0000 UTC m=+0.091711268 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, release=1761123044, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.component=openstack-cron-container, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4)
Dec 02 08:29:58 np0005541914.localdomain podman[82447]: 2025-12-02 08:29:58.107986294 +0000 UTC m=+0.103413622 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, architecture=x86_64)
Dec 02 08:29:58 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:29:58 np0005541914.localdomain podman[82449]: 2025-12-02 08:29:58.200151213 +0000 UTC m=+0.186931941 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, version=17.1.12, managed_by=tripleo_ansible)
Dec 02 08:29:58 np0005541914.localdomain podman[82449]: 2025-12-02 08:29:58.239920097 +0000 UTC m=+0.226700825 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:29:58 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:29:58 np0005541914.localdomain podman[82448]: 2025-12-02 08:29:58.263533883 +0000 UTC m=+0.251195425 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git)
Dec 02 08:29:58 np0005541914.localdomain podman[82448]: 2025-12-02 08:29:58.320893295 +0000 UTC m=+0.308554787 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:29:58 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:29:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:29:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:30:00 np0005541914.localdomain podman[82520]: 2025-12-02 08:30:00.070381824 +0000 UTC m=+0.074665334 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public)
Dec 02 08:30:00 np0005541914.localdomain podman[82521]: 2025-12-02 08:30:00.129656719 +0000 UTC m=+0.134334890 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4)
Dec 02 08:30:00 np0005541914.localdomain podman[82520]: 2025-12-02 08:30:00.148505652 +0000 UTC m=+0.152789192 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, version=17.1.12, vcs-type=git, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com)
Dec 02 08:30:00 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:30:00 np0005541914.localdomain podman[82521]: 2025-12-02 08:30:00.528610064 +0000 UTC m=+0.533288285 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container)
Dec 02 08:30:00 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:30:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:30:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:30:03 np0005541914.localdomain podman[82569]: 2025-12-02 08:30:03.088918813 +0000 UTC m=+0.084657501 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:30:03 np0005541914.localdomain podman[82569]: 2025-12-02 08:30:03.149589647 +0000 UTC m=+0.145328295 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, batch=17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:30:03 np0005541914.localdomain systemd[1]: tmp-crun.NYx2Zj.mount: Deactivated successfully.
Dec 02 08:30:03 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:30:03 np0005541914.localdomain podman[82570]: 2025-12-02 08:30:03.163146124 +0000 UTC m=+0.153843912 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, distribution-scope=public, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=ovn_controller)
Dec 02 08:30:03 np0005541914.localdomain podman[82570]: 2025-12-02 08:30:03.189425113 +0000 UTC m=+0.180122901 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step4)
Dec 02 08:30:03 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:30:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:30:07 np0005541914.localdomain podman[82617]: 2025-12-02 08:30:07.068834731 +0000 UTC m=+0.074343385 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, distribution-scope=public, version=17.1.12, build-date=2025-11-18T22:51:28Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:30:07 np0005541914.localdomain podman[82617]: 2025-12-02 08:30:07.080855215 +0000 UTC m=+0.086363829 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step3, vendor=Red Hat, Inc.)
Dec 02 08:30:07 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:30:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:30:10 np0005541914.localdomain podman[82637]: 2025-12-02 08:30:10.078035481 +0000 UTC m=+0.085454794 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.component=openstack-iscsid-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:30:10 np0005541914.localdomain podman[82637]: 2025-12-02 08:30:10.112143158 +0000 UTC m=+0.119562501 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, config_id=tripleo_step3, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 02 08:30:10 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:30:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:30:22 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:30:23 np0005541914.localdomain recover_tripleo_nova_virtqemud[82662]: 61907
Dec 02 08:30:23 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:30:23 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:30:23 np0005541914.localdomain systemd[1]: tmp-crun.HhA59c.mount: Deactivated successfully.
Dec 02 08:30:23 np0005541914.localdomain podman[82655]: 2025-12-02 08:30:23.09057459 +0000 UTC m=+0.093997151 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, release=1761123044, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, version=17.1.12, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=)
Dec 02 08:30:23 np0005541914.localdomain podman[82655]: 2025-12-02 08:30:23.277975013 +0000 UTC m=+0.281397594 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 02 08:30:23 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:30:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:30:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:30:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:30:29 np0005541914.localdomain systemd[1]: tmp-crun.4UfMhU.mount: Deactivated successfully.
Dec 02 08:30:29 np0005541914.localdomain podman[82686]: 2025-12-02 08:30:29.084962513 +0000 UTC m=+0.084915499 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, release=1761123044, version=17.1.12)
Dec 02 08:30:29 np0005541914.localdomain podman[82686]: 2025-12-02 08:30:29.119645836 +0000 UTC m=+0.119598822 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, tcib_managed=true, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, config_id=tripleo_step4, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 02 08:30:29 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:30:29 np0005541914.localdomain podman[82688]: 2025-12-02 08:30:29.144173967 +0000 UTC m=+0.138587269 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc.)
Dec 02 08:30:29 np0005541914.localdomain podman[82687]: 2025-12-02 08:30:29.19867923 +0000 UTC m=+0.194603063 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:30:29 np0005541914.localdomain podman[82688]: 2025-12-02 08:30:29.203025231 +0000 UTC m=+0.197438513 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 02 08:30:29 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:30:29 np0005541914.localdomain podman[82687]: 2025-12-02 08:30:29.261978967 +0000 UTC m=+0.257902760 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=)
Dec 02 08:30:29 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:30:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:30:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:30:31 np0005541914.localdomain podman[82759]: 2025-12-02 08:30:31.090055238 +0000 UTC m=+0.091638165 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc.)
Dec 02 08:30:31 np0005541914.localdomain systemd[1]: tmp-crun.UyrbCY.mount: Deactivated successfully.
Dec 02 08:30:31 np0005541914.localdomain podman[82760]: 2025-12-02 08:30:31.137403952 +0000 UTC m=+0.136784628 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, container_name=nova_migration_target, architecture=x86_64, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:30:31 np0005541914.localdomain podman[82759]: 2025-12-02 08:30:31.145539918 +0000 UTC m=+0.147122845 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:30:31 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:30:31 np0005541914.localdomain podman[82760]: 2025-12-02 08:30:31.545243404 +0000 UTC m=+0.544624160 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:30:31 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:30:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:30:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:30:34 np0005541914.localdomain systemd[1]: tmp-crun.Xrvt6R.mount: Deactivated successfully.
Dec 02 08:30:34 np0005541914.localdomain podman[82808]: 2025-12-02 08:30:34.080684212 +0000 UTC m=+0.090595776 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12)
Dec 02 08:30:34 np0005541914.localdomain podman[82809]: 2025-12-02 08:30:34.141606563 +0000 UTC m=+0.143029372 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T23:34:05Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 02 08:30:34 np0005541914.localdomain podman[82808]: 2025-12-02 08:30:34.151821346 +0000 UTC m=+0.161732910 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:30:34 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:30:34 np0005541914.localdomain podman[82809]: 2025-12-02 08:30:34.166578596 +0000 UTC m=+0.168001335 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, container_name=ovn_controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc.)
Dec 02 08:30:34 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:30:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:30:38 np0005541914.localdomain podman[82858]: 2025-12-02 08:30:38.080145533 +0000 UTC m=+0.083473939 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com)
Dec 02 08:30:38 np0005541914.localdomain podman[82858]: 2025-12-02 08:30:38.117125689 +0000 UTC m=+0.120454155 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, release=1761123044, managed_by=tripleo_ansible, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64)
Dec 02 08:30:38 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:30:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:30:41 np0005541914.localdomain systemd[1]: tmp-crun.hTs03W.mount: Deactivated successfully.
Dec 02 08:30:41 np0005541914.localdomain podman[82879]: 2025-12-02 08:30:41.085269648 +0000 UTC m=+0.084033894 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, container_name=iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:30:41 np0005541914.localdomain podman[82879]: 2025-12-02 08:30:41.09327588 +0000 UTC m=+0.092040086 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, architecture=x86_64, container_name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:30:41 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:30:42 np0005541914.localdomain sshd[80024]: Received disconnect from 38.102.83.114 port 44084:11: disconnected by user
Dec 02 08:30:42 np0005541914.localdomain sshd[80024]: Disconnected from user zuul 38.102.83.114 port 44084
Dec 02 08:30:42 np0005541914.localdomain sshd[80021]: pam_unix(sshd:session): session closed for user zuul
Dec 02 08:30:42 np0005541914.localdomain systemd[1]: session-34.scope: Deactivated successfully.
Dec 02 08:30:42 np0005541914.localdomain systemd[1]: session-34.scope: Consumed 19.925s CPU time.
Dec 02 08:30:42 np0005541914.localdomain systemd-logind[760]: Session 34 logged out. Waiting for processes to exit.
Dec 02 08:30:42 np0005541914.localdomain systemd-logind[760]: Removed session 34.
Dec 02 08:30:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:30:54 np0005541914.localdomain systemd[1]: tmp-crun.qNiwER.mount: Deactivated successfully.
Dec 02 08:30:54 np0005541914.localdomain podman[82945]: 2025-12-02 08:30:54.099180782 +0000 UTC m=+0.096321743 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd)
Dec 02 08:30:54 np0005541914.localdomain podman[82945]: 2025-12-02 08:30:54.321842964 +0000 UTC m=+0.318983865 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step1, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:30:54 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:30:55 np0005541914.localdomain sudo[82974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:30:55 np0005541914.localdomain sudo[82974]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:30:55 np0005541914.localdomain sudo[82974]: pam_unix(sudo:session): session closed for user root
Dec 02 08:30:55 np0005541914.localdomain sudo[82989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:30:55 np0005541914.localdomain sudo[82989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:30:56 np0005541914.localdomain sudo[82989]: pam_unix(sudo:session): session closed for user root
Dec 02 08:30:56 np0005541914.localdomain sudo[83035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:30:56 np0005541914.localdomain sudo[83035]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:30:56 np0005541914.localdomain sudo[83035]: pam_unix(sudo:session): session closed for user root
Dec 02 08:30:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:30:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:30:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:31:00 np0005541914.localdomain systemd[1]: tmp-crun.wCmoGQ.mount: Deactivated successfully.
Dec 02 08:31:00 np0005541914.localdomain podman[83051]: 2025-12-02 08:31:00.076018372 +0000 UTC m=+0.078211681 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team)
Dec 02 08:31:00 np0005541914.localdomain podman[83050]: 2025-12-02 08:31:00.123381468 +0000 UTC m=+0.126447605 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, container_name=logrotate_crond)
Dec 02 08:31:00 np0005541914.localdomain podman[83051]: 2025-12-02 08:31:00.177867105 +0000 UTC m=+0.180060434 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 02 08:31:00 np0005541914.localdomain podman[83052]: 2025-12-02 08:31:00.184121669 +0000 UTC m=+0.182391037 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, release=1761123044, tcib_managed=true)
Dec 02 08:31:00 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:31:00 np0005541914.localdomain podman[83050]: 2025-12-02 08:31:00.215028506 +0000 UTC m=+0.218094573 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, container_name=logrotate_crond, release=1761123044, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container)
Dec 02 08:31:00 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:31:00 np0005541914.localdomain podman[83052]: 2025-12-02 08:31:00.244896571 +0000 UTC m=+0.243165959 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:31:00 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:31:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:31:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:31:02 np0005541914.localdomain systemd[1]: tmp-crun.h2APQR.mount: Deactivated successfully.
Dec 02 08:31:02 np0005541914.localdomain podman[83125]: 2025-12-02 08:31:02.054195321 +0000 UTC m=+0.061291919 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com)
Dec 02 08:31:02 np0005541914.localdomain podman[83126]: 2025-12-02 08:31:02.066209302 +0000 UTC m=+0.069044508 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64)
Dec 02 08:31:02 np0005541914.localdomain podman[83125]: 2025-12-02 08:31:02.101876807 +0000 UTC m=+0.108973425 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public)
Dec 02 08:31:02 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:31:02 np0005541914.localdomain podman[83126]: 2025-12-02 08:31:02.404763093 +0000 UTC m=+0.407598279 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, container_name=nova_migration_target, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:31:02 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:31:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:31:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:31:05 np0005541914.localdomain podman[83175]: 2025-12-02 08:31:05.078305676 +0000 UTC m=+0.075774606 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public)
Dec 02 08:31:05 np0005541914.localdomain podman[83175]: 2025-12-02 08:31:05.135115865 +0000 UTC m=+0.132584755 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, container_name=ovn_controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4)
Dec 02 08:31:05 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:31:05 np0005541914.localdomain podman[83174]: 2025-12-02 08:31:05.137242301 +0000 UTC m=+0.138677434 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, io.buildah.version=1.41.4, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12)
Dec 02 08:31:05 np0005541914.localdomain podman[83174]: 2025-12-02 08:31:05.220669664 +0000 UTC m=+0.222104817 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true)
Dec 02 08:31:05 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:31:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:31:09 np0005541914.localdomain podman[83221]: 2025-12-02 08:31:09.064184365 +0000 UTC m=+0.070689879 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, release=1761123044)
Dec 02 08:31:09 np0005541914.localdomain podman[83221]: 2025-12-02 08:31:09.078907181 +0000 UTC m=+0.085412735 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true)
Dec 02 08:31:09 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:31:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:31:12 np0005541914.localdomain podman[83241]: 2025-12-02 08:31:12.078283242 +0000 UTC m=+0.085576721 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=iscsid, batch=17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:31:12 np0005541914.localdomain podman[83241]: 2025-12-02 08:31:12.086342781 +0000 UTC m=+0.093636290 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true)
Dec 02 08:31:12 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:31:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:31:25 np0005541914.localdomain podman[83260]: 2025-12-02 08:31:25.07025281 +0000 UTC m=+0.074306822 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:31:25 np0005541914.localdomain podman[83260]: 2025-12-02 08:31:25.265981258 +0000 UTC m=+0.270035260 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, tcib_managed=true, vcs-type=git, distribution-scope=public, config_id=tripleo_step1, version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:31:25 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:31:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:31:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:31:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:31:31 np0005541914.localdomain podman[83289]: 2025-12-02 08:31:31.066645638 +0000 UTC m=+0.075693675 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_id=tripleo_step4, batch=17.1_20251118.1, container_name=logrotate_crond, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:31:31 np0005541914.localdomain podman[83289]: 2025-12-02 08:31:31.075668597 +0000 UTC m=+0.084716624 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1)
Dec 02 08:31:31 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:31:31 np0005541914.localdomain podman[83291]: 2025-12-02 08:31:31.116856652 +0000 UTC m=+0.120124600 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, container_name=ceilometer_agent_ipmi, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 02 08:31:31 np0005541914.localdomain podman[83290]: 2025-12-02 08:31:31.16233692 +0000 UTC m=+0.169807998 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 02 08:31:31 np0005541914.localdomain podman[83291]: 2025-12-02 08:31:31.169969476 +0000 UTC m=+0.173237404 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi)
Dec 02 08:31:31 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:31:31 np0005541914.localdomain podman[83290]: 2025-12-02 08:31:31.184221477 +0000 UTC m=+0.191692525 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:31:31 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:31:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:31:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:31:33 np0005541914.localdomain systemd[1]: tmp-crun.oM4GE0.mount: Deactivated successfully.
Dec 02 08:31:33 np0005541914.localdomain podman[83360]: 2025-12-02 08:31:33.081558433 +0000 UTC m=+0.082785853 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target)
Dec 02 08:31:33 np0005541914.localdomain podman[83359]: 2025-12-02 08:31:33.137890577 +0000 UTC m=+0.143135932 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=nova_compute, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:31:33 np0005541914.localdomain podman[83359]: 2025-12-02 08:31:33.192376424 +0000 UTC m=+0.197621729 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.buildah.version=1.41.4, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5)
Dec 02 08:31:33 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:31:33 np0005541914.localdomain podman[83360]: 2025-12-02 08:31:33.446074147 +0000 UTC m=+0.447301617 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git)
Dec 02 08:31:33 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:31:34 np0005541914.localdomain systemd[1]: tmp-crun.wuTW6d.mount: Deactivated successfully.
Dec 02 08:31:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:31:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:31:36 np0005541914.localdomain podman[83408]: 2025-12-02 08:31:36.090598032 +0000 UTC m=+0.087743797 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4)
Dec 02 08:31:36 np0005541914.localdomain podman[83408]: 2025-12-02 08:31:36.136298907 +0000 UTC m=+0.133444722 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, tcib_managed=true, architecture=x86_64)
Dec 02 08:31:36 np0005541914.localdomain systemd[1]: tmp-crun.A7Polt.mount: Deactivated successfully.
Dec 02 08:31:36 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:31:36 np0005541914.localdomain podman[83409]: 2025-12-02 08:31:36.146355428 +0000 UTC m=+0.141916954 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, container_name=ovn_controller, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:31:36 np0005541914.localdomain podman[83409]: 2025-12-02 08:31:36.229955096 +0000 UTC m=+0.225516652 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:31:36 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:31:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:31:39 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:31:40 np0005541914.localdomain recover_tripleo_nova_virtqemud[83463]: 61907
Dec 02 08:31:40 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:31:40 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:31:40 np0005541914.localdomain podman[83456]: 2025-12-02 08:31:40.087038078 +0000 UTC m=+0.095142157 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, container_name=collectd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, release=1761123044, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc.)
Dec 02 08:31:40 np0005541914.localdomain podman[83456]: 2025-12-02 08:31:40.100737592 +0000 UTC m=+0.108841711 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, container_name=collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com)
Dec 02 08:31:40 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:31:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:31:43 np0005541914.localdomain systemd[1]: tmp-crun.zFKLPt.mount: Deactivated successfully.
Dec 02 08:31:43 np0005541914.localdomain podman[83479]: 2025-12-02 08:31:43.065935034 +0000 UTC m=+0.077188670 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1761123044, architecture=x86_64, tcib_managed=true, container_name=iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, vcs-type=git, config_id=tripleo_step3)
Dec 02 08:31:43 np0005541914.localdomain podman[83479]: 2025-12-02 08:31:43.10164994 +0000 UTC m=+0.112903606 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid)
Dec 02 08:31:43 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:31:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:31:56 np0005541914.localdomain podman[83543]: 2025-12-02 08:31:56.074285618 +0000 UTC m=+0.081256766 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step1, architecture=x86_64, vcs-type=git, version=17.1.12, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4)
Dec 02 08:31:56 np0005541914.localdomain podman[83543]: 2025-12-02 08:31:56.279873572 +0000 UTC m=+0.286844690 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:31:56 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:31:57 np0005541914.localdomain sudo[83574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:31:57 np0005541914.localdomain sudo[83574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:31:57 np0005541914.localdomain sudo[83574]: pam_unix(sudo:session): session closed for user root
Dec 02 08:31:57 np0005541914.localdomain sudo[83589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:31:57 np0005541914.localdomain sudo[83589]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:31:57 np0005541914.localdomain sudo[83589]: pam_unix(sudo:session): session closed for user root
Dec 02 08:31:58 np0005541914.localdomain sudo[83635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:31:58 np0005541914.localdomain sudo[83635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:31:58 np0005541914.localdomain sudo[83635]: pam_unix(sudo:session): session closed for user root
Dec 02 08:32:00 np0005541914.localdomain sshd[83650]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:32:01 np0005541914.localdomain sshd[83650]: Invalid user user from 45.148.10.240 port 54680
Dec 02 08:32:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:32:01 np0005541914.localdomain sshd[83650]: Connection closed by invalid user user 45.148.10.240 port 54680 [preauth]
Dec 02 08:32:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:32:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:32:01 np0005541914.localdomain podman[83652]: 2025-12-02 08:32:01.240647022 +0000 UTC m=+0.102127322 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 02 08:32:01 np0005541914.localdomain podman[83652]: 2025-12-02 08:32:01.277827973 +0000 UTC m=+0.139308263 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 02 08:32:01 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:32:01 np0005541914.localdomain podman[83669]: 2025-12-02 08:32:01.338822461 +0000 UTC m=+0.095181748 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=ceilometer_agent_compute, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute)
Dec 02 08:32:01 np0005541914.localdomain podman[83669]: 2025-12-02 08:32:01.369934654 +0000 UTC m=+0.126293931 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, version=17.1.12)
Dec 02 08:32:01 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:32:01 np0005541914.localdomain podman[83671]: 2025-12-02 08:32:01.390598404 +0000 UTC m=+0.142861963 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Dec 02 08:32:01 np0005541914.localdomain podman[83671]: 2025-12-02 08:32:01.416932379 +0000 UTC m=+0.169195868 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:32:01 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:32:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:32:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:32:04 np0005541914.localdomain podman[83725]: 2025-12-02 08:32:04.055555492 +0000 UTC m=+0.060968259 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, tcib_managed=true, build-date=2025-11-19T00:36:58Z, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, container_name=nova_compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5)
Dec 02 08:32:04 np0005541914.localdomain podman[83725]: 2025-12-02 08:32:04.080878375 +0000 UTC m=+0.086291132 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, vcs-type=git, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 02 08:32:04 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:32:04 np0005541914.localdomain systemd[1]: tmp-crun.evZp44.mount: Deactivated successfully.
Dec 02 08:32:04 np0005541914.localdomain podman[83726]: 2025-12-02 08:32:04.128525631 +0000 UTC m=+0.126643662 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1761123044, name=rhosp17/openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4)
Dec 02 08:32:04 np0005541914.localdomain podman[83726]: 2025-12-02 08:32:04.524133197 +0000 UTC m=+0.522251218 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=nova_migration_target, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:32:04 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:32:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:32:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:32:07 np0005541914.localdomain systemd[1]: tmp-crun.fahlUm.mount: Deactivated successfully.
Dec 02 08:32:07 np0005541914.localdomain podman[83775]: 2025-12-02 08:32:07.088497271 +0000 UTC m=+0.083344492 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:32:07 np0005541914.localdomain podman[83774]: 2025-12-02 08:32:07.124606097 +0000 UTC m=+0.124235406 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent)
Dec 02 08:32:07 np0005541914.localdomain podman[83775]: 2025-12-02 08:32:07.14406354 +0000 UTC m=+0.138910771 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, distribution-scope=public, release=1761123044, vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com)
Dec 02 08:32:07 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:32:07 np0005541914.localdomain podman[83774]: 2025-12-02 08:32:07.19930899 +0000 UTC m=+0.198938269 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z)
Dec 02 08:32:07 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:32:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:32:11 np0005541914.localdomain systemd[1]: tmp-crun.EORE3i.mount: Deactivated successfully.
Dec 02 08:32:11 np0005541914.localdomain podman[83820]: 2025-12-02 08:32:11.08085041 +0000 UTC m=+0.084330982 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:51:28Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public)
Dec 02 08:32:11 np0005541914.localdomain podman[83820]: 2025-12-02 08:32:11.117926677 +0000 UTC m=+0.121407229 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, vcs-type=git, container_name=collectd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, architecture=x86_64, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:32:11 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:32:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:32:14 np0005541914.localdomain podman[83841]: 2025-12-02 08:32:14.074722049 +0000 UTC m=+0.082189085 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, vcs-type=git, container_name=iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, architecture=x86_64, release=1761123044, distribution-scope=public)
Dec 02 08:32:14 np0005541914.localdomain podman[83841]: 2025-12-02 08:32:14.082436508 +0000 UTC m=+0.089903514 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 02 08:32:14 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:32:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:32:27 np0005541914.localdomain systemd[1]: tmp-crun.7lnhAG.mount: Deactivated successfully.
Dec 02 08:32:27 np0005541914.localdomain podman[83861]: 2025-12-02 08:32:27.567348904 +0000 UTC m=+0.573210366 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, release=1761123044, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true)
Dec 02 08:32:27 np0005541914.localdomain podman[83861]: 2025-12-02 08:32:27.761266907 +0000 UTC m=+0.767128389 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-qdrouterd)
Dec 02 08:32:27 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:32:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:32:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:32:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:32:32 np0005541914.localdomain podman[83891]: 2025-12-02 08:32:32.095623424 +0000 UTC m=+0.093708171 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true)
Dec 02 08:32:32 np0005541914.localdomain podman[83891]: 2025-12-02 08:32:32.129250345 +0000 UTC m=+0.127335142 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:32:32 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:32:32 np0005541914.localdomain podman[83892]: 2025-12-02 08:32:32.1526602 +0000 UTC m=+0.149748567 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com)
Dec 02 08:32:32 np0005541914.localdomain systemd[1]: tmp-crun.9FrVZB.mount: Deactivated successfully.
Dec 02 08:32:32 np0005541914.localdomain podman[83890]: 2025-12-02 08:32:32.187656893 +0000 UTC m=+0.187967630 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step4, vcs-type=git, distribution-scope=public)
Dec 02 08:32:32 np0005541914.localdomain podman[83892]: 2025-12-02 08:32:32.217636142 +0000 UTC m=+0.214724519 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible)
Dec 02 08:32:32 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:32:32 np0005541914.localdomain podman[83890]: 2025-12-02 08:32:32.274276825 +0000 UTC m=+0.274587572 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:32:32 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:32:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:32:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:32:35 np0005541914.localdomain podman[83962]: 2025-12-02 08:32:35.060173785 +0000 UTC m=+0.070935276 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute)
Dec 02 08:32:35 np0005541914.localdomain podman[83963]: 2025-12-02 08:32:35.084561531 +0000 UTC m=+0.086837700 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, container_name=nova_migration_target, config_id=tripleo_step4)
Dec 02 08:32:35 np0005541914.localdomain podman[83962]: 2025-12-02 08:32:35.143904527 +0000 UTC m=+0.154666038 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step5, vendor=Red Hat, Inc.)
Dec 02 08:32:35 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:32:35 np0005541914.localdomain podman[83963]: 2025-12-02 08:32:35.430709417 +0000 UTC m=+0.432985536 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=nova_migration_target, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, config_id=tripleo_step4)
Dec 02 08:32:35 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:32:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:32:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:32:38 np0005541914.localdomain systemd[1]: tmp-crun.ZxvTgL.mount: Deactivated successfully.
Dec 02 08:32:38 np0005541914.localdomain podman[84008]: 2025-12-02 08:32:38.070492895 +0000 UTC m=+0.074137525 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4)
Dec 02 08:32:38 np0005541914.localdomain systemd[1]: tmp-crun.KcGvne.mount: Deactivated successfully.
Dec 02 08:32:38 np0005541914.localdomain podman[84009]: 2025-12-02 08:32:38.103469686 +0000 UTC m=+0.105302361 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, release=1761123044, architecture=x86_64, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, vcs-type=git, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=)
Dec 02 08:32:38 np0005541914.localdomain podman[84009]: 2025-12-02 08:32:38.128837191 +0000 UTC m=+0.130669906 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:32:38 np0005541914.localdomain podman[84008]: 2025-12-02 08:32:38.139748629 +0000 UTC m=+0.143393239 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Dec 02 08:32:38 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:32:38 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:32:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:32:42 np0005541914.localdomain podman[84055]: 2025-12-02 08:32:42.055911281 +0000 UTC m=+0.066532441 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, container_name=collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:32:42 np0005541914.localdomain podman[84055]: 2025-12-02 08:32:42.067779828 +0000 UTC m=+0.078401028 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step3, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:32:42 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:32:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:32:45 np0005541914.localdomain podman[84075]: 2025-12-02 08:32:45.071368248 +0000 UTC m=+0.079861252 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Dec 02 08:32:45 np0005541914.localdomain podman[84075]: 2025-12-02 08:32:45.083902487 +0000 UTC m=+0.092395521 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, release=1761123044, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12)
Dec 02 08:32:45 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:32:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:32:58 np0005541914.localdomain podman[84140]: 2025-12-02 08:32:58.08091524 +0000 UTC m=+0.082889686 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, vcs-type=git, release=1761123044, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:32:58 np0005541914.localdomain podman[84140]: 2025-12-02 08:32:58.236656002 +0000 UTC m=+0.238630378 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, distribution-scope=public)
Dec 02 08:32:58 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:32:58 np0005541914.localdomain sudo[84168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:32:58 np0005541914.localdomain sudo[84168]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:32:58 np0005541914.localdomain sudo[84168]: pam_unix(sudo:session): session closed for user root
Dec 02 08:32:58 np0005541914.localdomain sudo[84183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:32:58 np0005541914.localdomain sudo[84183]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:32:59 np0005541914.localdomain sudo[84183]: pam_unix(sudo:session): session closed for user root
Dec 02 08:33:00 np0005541914.localdomain sudo[84229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:33:00 np0005541914.localdomain sudo[84229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:33:00 np0005541914.localdomain sudo[84229]: pam_unix(sudo:session): session closed for user root
Dec 02 08:33:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:33:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:33:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:33:03 np0005541914.localdomain podman[84245]: 2025-12-02 08:33:03.085042381 +0000 UTC m=+0.087912862 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z)
Dec 02 08:33:03 np0005541914.localdomain podman[84245]: 2025-12-02 08:33:03.115484774 +0000 UTC m=+0.118355275 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, tcib_managed=true, architecture=x86_64, container_name=ceilometer_agent_compute, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, version=17.1.12)
Dec 02 08:33:03 np0005541914.localdomain systemd[1]: tmp-crun.Rhe46F.mount: Deactivated successfully.
Dec 02 08:33:03 np0005541914.localdomain podman[84244]: 2025-12-02 08:33:03.135553565 +0000 UTC m=+0.137716674 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:33:03 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:33:03 np0005541914.localdomain podman[84244]: 2025-12-02 08:33:03.174786519 +0000 UTC m=+0.176949678 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2025-11-18T22:49:32Z, distribution-scope=public, container_name=logrotate_crond, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 02 08:33:03 np0005541914.localdomain systemd[1]: tmp-crun.xp0LGM.mount: Deactivated successfully.
Dec 02 08:33:03 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:33:03 np0005541914.localdomain podman[84246]: 2025-12-02 08:33:03.193680985 +0000 UTC m=+0.188850657 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 02 08:33:03 np0005541914.localdomain podman[84246]: 2025-12-02 08:33:03.225841911 +0000 UTC m=+0.221011593 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4)
Dec 02 08:33:03 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:33:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:33:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:33:06 np0005541914.localdomain podman[84315]: 2025-12-02 08:33:06.089441278 +0000 UTC m=+0.089246774 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, container_name=nova_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.12, build-date=2025-11-19T00:36:58Z)
Dec 02 08:33:06 np0005541914.localdomain podman[84315]: 2025-12-02 08:33:06.125033439 +0000 UTC m=+0.124838975 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Dec 02 08:33:06 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:33:06 np0005541914.localdomain podman[84316]: 2025-12-02 08:33:06.142623284 +0000 UTC m=+0.139998235 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team)
Dec 02 08:33:06 np0005541914.localdomain podman[84316]: 2025-12-02 08:33:06.5291762 +0000 UTC m=+0.526551111 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:33:06 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:33:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:33:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:33:09 np0005541914.localdomain podman[84364]: 2025-12-02 08:33:09.072423172 +0000 UTC m=+0.077084078 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:33:09 np0005541914.localdomain podman[84364]: 2025-12-02 08:33:09.10793552 +0000 UTC m=+0.112596456 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:33:09 np0005541914.localdomain systemd[1]: tmp-crun.NYxfVO.mount: Deactivated successfully.
Dec 02 08:33:09 np0005541914.localdomain podman[84365]: 2025-12-02 08:33:09.13732675 +0000 UTC m=+0.139926072 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 02 08:33:09 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:33:09 np0005541914.localdomain podman[84365]: 2025-12-02 08:33:09.212724175 +0000 UTC m=+0.215323497 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 02 08:33:09 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:33:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:33:13 np0005541914.localdomain podman[84411]: 2025-12-02 08:33:13.077823915 +0000 UTC m=+0.084781785 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:33:13 np0005541914.localdomain podman[84411]: 2025-12-02 08:33:13.086432892 +0000 UTC m=+0.093390802 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., container_name=collectd, batch=17.1_20251118.1, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:33:13 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:33:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:33:16 np0005541914.localdomain systemd[1]: tmp-crun.7u8SSe.mount: Deactivated successfully.
Dec 02 08:33:16 np0005541914.localdomain podman[84431]: 2025-12-02 08:33:16.082002945 +0000 UTC m=+0.089203423 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, container_name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git)
Dec 02 08:33:16 np0005541914.localdomain podman[84431]: 2025-12-02 08:33:16.121005942 +0000 UTC m=+0.128206400 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:33:16 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:33:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:33:28 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:33:29 np0005541914.localdomain recover_tripleo_nova_virtqemud[84452]: 61907
Dec 02 08:33:29 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:33:29 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:33:29 np0005541914.localdomain podman[84450]: 2025-12-02 08:33:29.088694588 +0000 UTC m=+0.085877820 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:33:29 np0005541914.localdomain podman[84450]: 2025-12-02 08:33:29.307176452 +0000 UTC m=+0.304359684 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step1, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 02 08:33:29 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:33:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:33:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:33:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:33:34 np0005541914.localdomain podman[84483]: 2025-12-02 08:33:34.093252603 +0000 UTC m=+0.089229094 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:33:34 np0005541914.localdomain podman[84482]: 2025-12-02 08:33:34.07377867 +0000 UTC m=+0.079342527 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, url=https://www.redhat.com, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z)
Dec 02 08:33:34 np0005541914.localdomain podman[84487]: 2025-12-02 08:33:34.134556932 +0000 UTC m=+0.129122938 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi)
Dec 02 08:33:34 np0005541914.localdomain podman[84483]: 2025-12-02 08:33:34.146893963 +0000 UTC m=+0.142870384 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:33:34 np0005541914.localdomain podman[84487]: 2025-12-02 08:33:34.160844045 +0000 UTC m=+0.155410091 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com)
Dec 02 08:33:34 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:33:34 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:33:34 np0005541914.localdomain podman[84482]: 2025-12-02 08:33:34.211490893 +0000 UTC m=+0.217054770 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:33:34 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:33:35 np0005541914.localdomain systemd[1]: tmp-crun.S1kcyT.mount: Deactivated successfully.
Dec 02 08:33:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:33:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:33:37 np0005541914.localdomain podman[84556]: 2025-12-02 08:33:37.075509044 +0000 UTC m=+0.076526211 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public)
Dec 02 08:33:37 np0005541914.localdomain podman[84555]: 2025-12-02 08:33:37.13613009 +0000 UTC m=+0.139017254 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, tcib_managed=true, container_name=nova_compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20251118.1)
Dec 02 08:33:37 np0005541914.localdomain podman[84555]: 2025-12-02 08:33:37.159639328 +0000 UTC m=+0.162526522 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:33:37 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:33:37 np0005541914.localdomain podman[84556]: 2025-12-02 08:33:37.543039597 +0000 UTC m=+0.544056804 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:33:37 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:33:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:33:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:33:40 np0005541914.localdomain podman[84601]: 2025-12-02 08:33:40.067283189 +0000 UTC m=+0.064175658 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, distribution-scope=public, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true)
Dec 02 08:33:40 np0005541914.localdomain podman[84601]: 2025-12-02 08:33:40.095854974 +0000 UTC m=+0.092747433 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:33:40 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:33:40 np0005541914.localdomain systemd[1]: tmp-crun.zN62g3.mount: Deactivated successfully.
Dec 02 08:33:40 np0005541914.localdomain podman[84600]: 2025-12-02 08:33:40.190012179 +0000 UTC m=+0.187124634 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Dec 02 08:33:40 np0005541914.localdomain podman[84600]: 2025-12-02 08:33:40.257264351 +0000 UTC m=+0.254376816 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12)
Dec 02 08:33:40 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:33:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:33:44 np0005541914.localdomain podman[84650]: 2025-12-02 08:33:44.069319258 +0000 UTC m=+0.074252349 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step3, release=1761123044, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12)
Dec 02 08:33:44 np0005541914.localdomain podman[84650]: 2025-12-02 08:33:44.10683951 +0000 UTC m=+0.111772651 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1)
Dec 02 08:33:44 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:33:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:33:47 np0005541914.localdomain podman[84670]: 2025-12-02 08:33:47.076671265 +0000 UTC m=+0.081259606 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, container_name=iscsid)
Dec 02 08:33:47 np0005541914.localdomain podman[84670]: 2025-12-02 08:33:47.086709465 +0000 UTC m=+0.091297746 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, container_name=iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:33:47 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:33:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:34:00 np0005541914.localdomain podman[84734]: 2025-12-02 08:34:00.075639097 +0000 UTC m=+0.079261185 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=)
Dec 02 08:34:00 np0005541914.localdomain sudo[84748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:34:00 np0005541914.localdomain sudo[84748]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:34:00 np0005541914.localdomain sudo[84748]: pam_unix(sudo:session): session closed for user root
Dec 02 08:34:00 np0005541914.localdomain sudo[84778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:34:00 np0005541914.localdomain sudo[84778]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:34:00 np0005541914.localdomain podman[84734]: 2025-12-02 08:34:00.290829069 +0000 UTC m=+0.294451157 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 02 08:34:00 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:34:00 np0005541914.localdomain sudo[84778]: pam_unix(sudo:session): session closed for user root
Dec 02 08:34:01 np0005541914.localdomain sudo[84826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:34:01 np0005541914.localdomain sudo[84826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:34:01 np0005541914.localdomain sudo[84826]: pam_unix(sudo:session): session closed for user root
Dec 02 08:34:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:34:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:34:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:34:05 np0005541914.localdomain systemd[1]: tmp-crun.vEit7F.mount: Deactivated successfully.
Dec 02 08:34:05 np0005541914.localdomain podman[84842]: 2025-12-02 08:34:05.063742343 +0000 UTC m=+0.070964319 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:34:05 np0005541914.localdomain podman[84841]: 2025-12-02 08:34:05.109470338 +0000 UTC m=+0.116560920 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, release=1761123044, name=rhosp17/openstack-cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond)
Dec 02 08:34:05 np0005541914.localdomain podman[84841]: 2025-12-02 08:34:05.116223477 +0000 UTC m=+0.123314049 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12)
Dec 02 08:34:05 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:34:05 np0005541914.localdomain podman[84842]: 2025-12-02 08:34:05.160501247 +0000 UTC m=+0.167723223 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:34:05 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:34:05 np0005541914.localdomain podman[84843]: 2025-12-02 08:34:05.17641102 +0000 UTC m=+0.181581202 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-type=git)
Dec 02 08:34:05 np0005541914.localdomain podman[84843]: 2025-12-02 08:34:05.222971141 +0000 UTC m=+0.228141283 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4)
Dec 02 08:34:05 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:34:06 np0005541914.localdomain systemd[1]: tmp-crun.1JXpZJ.mount: Deactivated successfully.
Dec 02 08:34:07 np0005541914.localdomain sshd[84913]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:34:07 np0005541914.localdomain sshd[84913]: Invalid user solv from 45.148.10.240 port 39278
Dec 02 08:34:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:34:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:34:08 np0005541914.localdomain sshd[84913]: Connection closed by invalid user solv 45.148.10.240 port 39278 [preauth]
Dec 02 08:34:08 np0005541914.localdomain podman[84916]: 2025-12-02 08:34:08.040322417 +0000 UTC m=+0.083697092 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vcs-type=git, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1761123044, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:34:08 np0005541914.localdomain systemd[1]: tmp-crun.yabzFk.mount: Deactivated successfully.
Dec 02 08:34:08 np0005541914.localdomain podman[84915]: 2025-12-02 08:34:08.067710675 +0000 UTC m=+0.119508481 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute)
Dec 02 08:34:08 np0005541914.localdomain podman[84915]: 2025-12-02 08:34:08.092891165 +0000 UTC m=+0.144688961 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:34:08 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:34:08 np0005541914.localdomain podman[84916]: 2025-12-02 08:34:08.380895241 +0000 UTC m=+0.424269876 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc.)
Dec 02 08:34:08 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:34:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:34:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:34:11 np0005541914.localdomain podman[84962]: 2025-12-02 08:34:11.073088901 +0000 UTC m=+0.078082028 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 02 08:34:11 np0005541914.localdomain podman[84962]: 2025-12-02 08:34:11.113471011 +0000 UTC m=+0.118464118 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 02 08:34:11 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:34:11 np0005541914.localdomain podman[84963]: 2025-12-02 08:34:11.131486619 +0000 UTC m=+0.134868607 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:34:11 np0005541914.localdomain podman[84963]: 2025-12-02 08:34:11.150793976 +0000 UTC m=+0.154175964 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:34:11 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:34:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:34:15 np0005541914.localdomain systemd[1]: tmp-crun.6ylebD.mount: Deactivated successfully.
Dec 02 08:34:15 np0005541914.localdomain podman[85009]: 2025-12-02 08:34:15.078425462 +0000 UTC m=+0.084741865 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, name=rhosp17/openstack-collectd, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:34:15 np0005541914.localdomain podman[85009]: 2025-12-02 08:34:15.115037995 +0000 UTC m=+0.121354368 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, io.openshift.expose-services=, release=1761123044, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Dec 02 08:34:15 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:34:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:34:18 np0005541914.localdomain podman[85031]: 2025-12-02 08:34:18.053900673 +0000 UTC m=+0.064498098 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 02 08:34:18 np0005541914.localdomain podman[85031]: 2025-12-02 08:34:18.090782454 +0000 UTC m=+0.101379799 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.4, config_id=tripleo_step3, distribution-scope=public, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, release=1761123044, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:34:18 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:34:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:34:31 np0005541914.localdomain systemd[1]: tmp-crun.w4yu1Z.mount: Deactivated successfully.
Dec 02 08:34:31 np0005541914.localdomain podman[85050]: 2025-12-02 08:34:31.070561022 +0000 UTC m=+0.080027108 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step1, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, release=1761123044, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 02 08:34:31 np0005541914.localdomain podman[85050]: 2025-12-02 08:34:31.265693103 +0000 UTC m=+0.275159179 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, tcib_managed=true, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr)
Dec 02 08:34:31 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:34:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:34:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:34:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:34:36 np0005541914.localdomain podman[85081]: 2025-12-02 08:34:36.080793622 +0000 UTC m=+0.085541510 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:34:36 np0005541914.localdomain podman[85080]: 2025-12-02 08:34:36.130143029 +0000 UTC m=+0.135982020 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec 02 08:34:36 np0005541914.localdomain podman[85080]: 2025-12-02 08:34:36.138685473 +0000 UTC m=+0.144524435 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-type=git, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_id=tripleo_step4, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:34:36 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:34:36 np0005541914.localdomain podman[85082]: 2025-12-02 08:34:36.196478523 +0000 UTC m=+0.196749212 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, io.openshift.expose-services=)
Dec 02 08:34:36 np0005541914.localdomain podman[85081]: 2025-12-02 08:34:36.210846778 +0000 UTC m=+0.215594646 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-type=git, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:34:36 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:34:36 np0005541914.localdomain podman[85082]: 2025-12-02 08:34:36.225892193 +0000 UTC m=+0.226162882 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:34:36 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:34:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:34:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:34:39 np0005541914.localdomain podman[85154]: 2025-12-02 08:34:39.068095769 +0000 UTC m=+0.076179560 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute)
Dec 02 08:34:39 np0005541914.localdomain systemd[1]: tmp-crun.2q3ohf.mount: Deactivated successfully.
Dec 02 08:34:39 np0005541914.localdomain podman[85155]: 2025-12-02 08:34:39.111610636 +0000 UTC m=+0.116736185 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 02 08:34:39 np0005541914.localdomain podman[85154]: 2025-12-02 08:34:39.14341298 +0000 UTC m=+0.151496781 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, config_id=tripleo_step5, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, version=17.1.12, distribution-scope=public, architecture=x86_64)
Dec 02 08:34:39 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:34:39 np0005541914.localdomain podman[85155]: 2025-12-02 08:34:39.524662762 +0000 UTC m=+0.529788341 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, version=17.1.12, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container)
Dec 02 08:34:39 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:34:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:34:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:34:42 np0005541914.localdomain systemd[1]: tmp-crun.bkcBnq.mount: Deactivated successfully.
Dec 02 08:34:42 np0005541914.localdomain podman[85204]: 2025-12-02 08:34:42.072575927 +0000 UTC m=+0.077995235 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.12, release=1761123044, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, tcib_managed=true)
Dec 02 08:34:42 np0005541914.localdomain podman[85205]: 2025-12-02 08:34:42.107484218 +0000 UTC m=+0.111150722 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:34:42 np0005541914.localdomain podman[85204]: 2025-12-02 08:34:42.116232759 +0000 UTC m=+0.121652037 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, tcib_managed=true, build-date=2025-11-19T00:14:25Z, distribution-scope=public, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4)
Dec 02 08:34:42 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:34:42 np0005541914.localdomain podman[85205]: 2025-12-02 08:34:42.157310221 +0000 UTC m=+0.160976775 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, release=1761123044, build-date=2025-11-18T23:34:05Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:34:42 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:34:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:34:46 np0005541914.localdomain podman[85252]: 2025-12-02 08:34:46.078989892 +0000 UTC m=+0.087122878 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, config_id=tripleo_step3, vcs-type=git, version=17.1.12, build-date=2025-11-18T22:51:28Z, tcib_managed=true, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=)
Dec 02 08:34:46 np0005541914.localdomain podman[85252]: 2025-12-02 08:34:46.095019578 +0000 UTC m=+0.103152594 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3)
Dec 02 08:34:46 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:34:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:34:49 np0005541914.localdomain podman[85272]: 2025-12-02 08:34:49.057434375 +0000 UTC m=+0.068113969 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Dec 02 08:34:49 np0005541914.localdomain podman[85272]: 2025-12-02 08:34:49.066819355 +0000 UTC m=+0.077498989 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible)
Dec 02 08:34:49 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:35:01 np0005541914.localdomain sudo[85337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:35:01 np0005541914.localdomain sudo[85337]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:35:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:35:01 np0005541914.localdomain sudo[85337]: pam_unix(sudo:session): session closed for user root
Dec 02 08:35:01 np0005541914.localdomain sudo[85358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:35:01 np0005541914.localdomain sudo[85358]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:35:01 np0005541914.localdomain podman[85351]: 2025-12-02 08:35:01.893742322 +0000 UTC m=+0.100169712 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, build-date=2025-11-18T22:49:46Z, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container)
Dec 02 08:35:02 np0005541914.localdomain podman[85351]: 2025-12-02 08:35:02.112789003 +0000 UTC m=+0.319216393 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, tcib_managed=true, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step1)
Dec 02 08:35:02 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:35:02 np0005541914.localdomain sudo[85358]: pam_unix(sudo:session): session closed for user root
Dec 02 08:35:03 np0005541914.localdomain sudo[85427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:35:03 np0005541914.localdomain sudo[85427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:35:03 np0005541914.localdomain sudo[85427]: pam_unix(sudo:session): session closed for user root
Dec 02 08:35:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:35:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:35:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:35:07 np0005541914.localdomain systemd[1]: tmp-crun.BJWyI4.mount: Deactivated successfully.
Dec 02 08:35:07 np0005541914.localdomain podman[85442]: 2025-12-02 08:35:07.143228318 +0000 UTC m=+0.138429686 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64)
Dec 02 08:35:07 np0005541914.localdomain systemd[1]: tmp-crun.WrWuJB.mount: Deactivated successfully.
Dec 02 08:35:07 np0005541914.localdomain podman[85443]: 2025-12-02 08:35:07.176602092 +0000 UTC m=+0.173604246 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1)
Dec 02 08:35:07 np0005541914.localdomain podman[85444]: 2025-12-02 08:35:07.213647068 +0000 UTC m=+0.208550246 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:35:07 np0005541914.localdomain podman[85443]: 2025-12-02 08:35:07.227806877 +0000 UTC m=+0.224809021 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:35:07 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:35:07 np0005541914.localdomain podman[85444]: 2025-12-02 08:35:07.265888696 +0000 UTC m=+0.260791894 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, architecture=x86_64, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:35:07 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:35:07 np0005541914.localdomain podman[85442]: 2025-12-02 08:35:07.277575647 +0000 UTC m=+0.272777075 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12)
Dec 02 08:35:07 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:35:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:35:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:35:10 np0005541914.localdomain podman[85512]: 2025-12-02 08:35:10.073574953 +0000 UTC m=+0.079133481 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public)
Dec 02 08:35:10 np0005541914.localdomain podman[85512]: 2025-12-02 08:35:10.139611617 +0000 UTC m=+0.145170165 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Dec 02 08:35:10 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:35:10 np0005541914.localdomain podman[85513]: 2025-12-02 08:35:10.140731272 +0000 UTC m=+0.141912965 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, architecture=x86_64, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:35:10 np0005541914.localdomain podman[85513]: 2025-12-02 08:35:10.51311821 +0000 UTC m=+0.514299953 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, container_name=nova_migration_target, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1)
Dec 02 08:35:10 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:35:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:35:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:35:12 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:35:13 np0005541914.localdomain recover_tripleo_nova_virtqemud[85568]: 61907
Dec 02 08:35:13 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:35:13 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:35:13 np0005541914.localdomain podman[85560]: 2025-12-02 08:35:13.084672017 +0000 UTC m=+0.084242039 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 02 08:35:13 np0005541914.localdomain podman[85560]: 2025-12-02 08:35:13.11901562 +0000 UTC m=+0.118585602 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, distribution-scope=public, batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z)
Dec 02 08:35:13 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:35:13 np0005541914.localdomain systemd[1]: tmp-crun.VqQKu8.mount: Deactivated successfully.
Dec 02 08:35:13 np0005541914.localdomain podman[85561]: 2025-12-02 08:35:13.194667752 +0000 UTC m=+0.186843306 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true)
Dec 02 08:35:13 np0005541914.localdomain podman[85561]: 2025-12-02 08:35:13.242894344 +0000 UTC m=+0.235069848 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, com.redhat.component=openstack-ovn-controller-container)
Dec 02 08:35:13 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:35:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:35:17 np0005541914.localdomain systemd[1]: tmp-crun.kD99f4.mount: Deactivated successfully.
Dec 02 08:35:17 np0005541914.localdomain podman[85608]: 2025-12-02 08:35:17.078706817 +0000 UTC m=+0.087555631 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z)
Dec 02 08:35:17 np0005541914.localdomain podman[85608]: 2025-12-02 08:35:17.089350736 +0000 UTC m=+0.098199520 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 02 08:35:17 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:35:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:35:20 np0005541914.localdomain podman[85628]: 2025-12-02 08:35:20.070905785 +0000 UTC m=+0.074590310 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-iscsid)
Dec 02 08:35:20 np0005541914.localdomain podman[85628]: 2025-12-02 08:35:20.08076164 +0000 UTC m=+0.084446145 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, container_name=iscsid, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:35:20 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:35:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:35:33 np0005541914.localdomain systemd[1]: tmp-crun.gt0TK7.mount: Deactivated successfully.
Dec 02 08:35:33 np0005541914.localdomain podman[85647]: 2025-12-02 08:35:33.085008929 +0000 UTC m=+0.088909984 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container)
Dec 02 08:35:33 np0005541914.localdomain podman[85647]: 2025-12-02 08:35:33.306191676 +0000 UTC m=+0.310092741 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12)
Dec 02 08:35:33 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:35:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:35:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:35:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:35:38 np0005541914.localdomain systemd[1]: tmp-crun.ziwUwL.mount: Deactivated successfully.
Dec 02 08:35:38 np0005541914.localdomain podman[85678]: 2025-12-02 08:35:38.070425291 +0000 UTC m=+0.074792797 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec 02 08:35:38 np0005541914.localdomain podman[85678]: 2025-12-02 08:35:38.093964459 +0000 UTC m=+0.098331895 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:35:38 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:35:38 np0005541914.localdomain podman[85677]: 2025-12-02 08:35:38.174324727 +0000 UTC m=+0.178813496 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.buildah.version=1.41.4, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1)
Dec 02 08:35:38 np0005541914.localdomain podman[85677]: 2025-12-02 08:35:38.206791352 +0000 UTC m=+0.211280061 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, batch=17.1_20251118.1, container_name=logrotate_crond, release=1761123044, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z)
Dec 02 08:35:38 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:35:38 np0005541914.localdomain podman[85679]: 2025-12-02 08:35:38.217428141 +0000 UTC m=+0.218691611 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 02 08:35:38 np0005541914.localdomain podman[85679]: 2025-12-02 08:35:38.264526429 +0000 UTC m=+0.265789879 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vcs-type=git, container_name=ceilometer_agent_ipmi, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:35:38 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:35:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:35:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:35:41 np0005541914.localdomain podman[85749]: 2025-12-02 08:35:41.058507741 +0000 UTC m=+0.064868539 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step5, distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:35:41 np0005541914.localdomain podman[85749]: 2025-12-02 08:35:41.078821541 +0000 UTC m=+0.085182359 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:35:41 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:35:41 np0005541914.localdomain podman[85750]: 2025-12-02 08:35:41.132382659 +0000 UTC m=+0.135796626 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Dec 02 08:35:41 np0005541914.localdomain podman[85750]: 2025-12-02 08:35:41.461968981 +0000 UTC m=+0.465382988 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64)
Dec 02 08:35:41 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:35:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:35:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:35:44 np0005541914.localdomain podman[85801]: 2025-12-02 08:35:44.084857167 +0000 UTC m=+0.092277117 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git)
Dec 02 08:35:44 np0005541914.localdomain systemd[1]: tmp-crun.kHCejB.mount: Deactivated successfully.
Dec 02 08:35:44 np0005541914.localdomain podman[85801]: 2025-12-02 08:35:44.14280082 +0000 UTC m=+0.150220800 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.12, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:35:44 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:35:44 np0005541914.localdomain podman[85800]: 2025-12-02 08:35:44.228739101 +0000 UTC m=+0.233984064 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:35:44 np0005541914.localdomain podman[85800]: 2025-12-02 08:35:44.260735981 +0000 UTC m=+0.265980934 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1)
Dec 02 08:35:44 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:35:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:35:48 np0005541914.localdomain podman[85849]: 2025-12-02 08:35:48.075630087 +0000 UTC m=+0.083436764 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_id=tripleo_step3, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Dec 02 08:35:48 np0005541914.localdomain podman[85849]: 2025-12-02 08:35:48.084234963 +0000 UTC m=+0.092041630 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.12, io.openshift.expose-services=)
Dec 02 08:35:48 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:35:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:35:51 np0005541914.localdomain systemd[1]: tmp-crun.DDUE3I.mount: Deactivated successfully.
Dec 02 08:35:51 np0005541914.localdomain podman[85869]: 2025-12-02 08:35:51.058606601 +0000 UTC m=+0.068894984 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3)
Dec 02 08:35:51 np0005541914.localdomain podman[85869]: 2025-12-02 08:35:51.094965637 +0000 UTC m=+0.105253970 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step3, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:35:51 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:36:03 np0005541914.localdomain sudo[85934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:36:03 np0005541914.localdomain sudo[85934]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:36:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:36:03 np0005541914.localdomain sudo[85934]: pam_unix(sudo:session): session closed for user root
Dec 02 08:36:03 np0005541914.localdomain sudo[85955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:36:03 np0005541914.localdomain sudo[85955]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:36:03 np0005541914.localdomain podman[85949]: 2025-12-02 08:36:03.465177976 +0000 UTC m=+0.083313190 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, tcib_managed=true, container_name=metrics_qdr, release=1761123044, vcs-type=git, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.)
Dec 02 08:36:03 np0005541914.localdomain podman[85949]: 2025-12-02 08:36:03.690494111 +0000 UTC m=+0.308629225 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_id=tripleo_step1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 02 08:36:03 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:36:04 np0005541914.localdomain sudo[85955]: pam_unix(sudo:session): session closed for user root
Dec 02 08:36:04 np0005541914.localdomain sudo[86025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:36:04 np0005541914.localdomain sudo[86025]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:36:04 np0005541914.localdomain sudo[86025]: pam_unix(sudo:session): session closed for user root
Dec 02 08:36:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:36:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:36:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:36:09 np0005541914.localdomain podman[86042]: 2025-12-02 08:36:09.076939916 +0000 UTC m=+0.077318995 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:36:09 np0005541914.localdomain systemd[1]: tmp-crun.G5Io8U.mount: Deactivated successfully.
Dec 02 08:36:09 np0005541914.localdomain podman[86041]: 2025-12-02 08:36:09.165452976 +0000 UTC m=+0.165296848 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, architecture=x86_64)
Dec 02 08:36:09 np0005541914.localdomain podman[86042]: 2025-12-02 08:36:09.188955873 +0000 UTC m=+0.189334962 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, io.buildah.version=1.41.4)
Dec 02 08:36:09 np0005541914.localdomain podman[86041]: 2025-12-02 08:36:09.194288388 +0000 UTC m=+0.194132270 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute)
Dec 02 08:36:09 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:36:09 np0005541914.localdomain podman[86040]: 2025-12-02 08:36:09.14297098 +0000 UTC m=+0.143243966 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, url=https://www.redhat.com, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:36:09 np0005541914.localdomain podman[86040]: 2025-12-02 08:36:09.279999971 +0000 UTC m=+0.280272977 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:36:09 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:36:09 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:36:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:36:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:36:12 np0005541914.localdomain podman[86112]: 2025-12-02 08:36:12.099751571 +0000 UTC m=+0.100392539 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, release=1761123044, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, name=rhosp17/openstack-nova-compute, tcib_managed=true, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.buildah.version=1.41.4)
Dec 02 08:36:12 np0005541914.localdomain podman[86112]: 2025-12-02 08:36:12.125352284 +0000 UTC m=+0.125993212 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Dec 02 08:36:12 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:36:12 np0005541914.localdomain podman[86113]: 2025-12-02 08:36:12.195605838 +0000 UTC m=+0.188725663 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 02 08:36:12 np0005541914.localdomain podman[86113]: 2025-12-02 08:36:12.599579414 +0000 UTC m=+0.592699189 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, container_name=nova_migration_target, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 02 08:36:12 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:36:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:36:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:36:15 np0005541914.localdomain podman[86162]: 2025-12-02 08:36:15.076956525 +0000 UTC m=+0.083097484 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, version=17.1.12)
Dec 02 08:36:15 np0005541914.localdomain podman[86161]: 2025-12-02 08:36:15.126375394 +0000 UTC m=+0.133507363 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, version=17.1.12, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible)
Dec 02 08:36:15 np0005541914.localdomain podman[86161]: 2025-12-02 08:36:15.162768011 +0000 UTC m=+0.169899980 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git)
Dec 02 08:36:15 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:36:15 np0005541914.localdomain podman[86162]: 2025-12-02 08:36:15.179907382 +0000 UTC m=+0.186048351 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 02 08:36:15 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:36:17 np0005541914.localdomain sshd[86209]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:36:17 np0005541914.localdomain sshd[86209]: Invalid user solv from 45.148.10.240 port 48654
Dec 02 08:36:17 np0005541914.localdomain sshd[86209]: Connection closed by invalid user solv 45.148.10.240 port 48654 [preauth]
Dec 02 08:36:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:36:19 np0005541914.localdomain podman[86211]: 2025-12-02 08:36:19.082532105 +0000 UTC m=+0.086097056 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, distribution-scope=public, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1)
Dec 02 08:36:19 np0005541914.localdomain podman[86211]: 2025-12-02 08:36:19.096975552 +0000 UTC m=+0.100540503 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, release=1761123044, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Dec 02 08:36:19 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:36:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:36:22 np0005541914.localdomain podman[86230]: 2025-12-02 08:36:22.083301479 +0000 UTC m=+0.090006407 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, vcs-type=git)
Dec 02 08:36:22 np0005541914.localdomain podman[86230]: 2025-12-02 08:36:22.126012041 +0000 UTC m=+0.132716979 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, container_name=iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Dec 02 08:36:22 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:36:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:36:34 np0005541914.localdomain podman[86248]: 2025-12-02 08:36:34.077377951 +0000 UTC m=+0.082239516 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.openshift.expose-services=)
Dec 02 08:36:34 np0005541914.localdomain podman[86248]: 2025-12-02 08:36:34.262310465 +0000 UTC m=+0.267171970 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_id=tripleo_step1, release=1761123044, tcib_managed=true, container_name=metrics_qdr, io.openshift.expose-services=)
Dec 02 08:36:34 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:36:34 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:36:35 np0005541914.localdomain recover_tripleo_nova_virtqemud[86278]: 61907
Dec 02 08:36:35 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:36:35 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:36:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:36:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:36:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:36:40 np0005541914.localdomain podman[86281]: 2025-12-02 08:36:40.083731646 +0000 UTC m=+0.085098117 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, version=17.1.12, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:36:40 np0005541914.localdomain podman[86279]: 2025-12-02 08:36:40.129233465 +0000 UTC m=+0.133889400 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044)
Dec 02 08:36:40 np0005541914.localdomain podman[86281]: 2025-12-02 08:36:40.143961914 +0000 UTC m=+0.145328405 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi)
Dec 02 08:36:40 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:36:40 np0005541914.localdomain podman[86280]: 2025-12-02 08:36:40.187531736 +0000 UTC m=+0.190335299 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4)
Dec 02 08:36:40 np0005541914.localdomain podman[86279]: 2025-12-02 08:36:40.215253507 +0000 UTC m=+0.219909482 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:36:40 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:36:40 np0005541914.localdomain podman[86280]: 2025-12-02 08:36:40.245067268 +0000 UTC m=+0.247870841 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 02 08:36:40 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:36:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:36:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:36:43 np0005541914.localdomain podman[86351]: 2025-12-02 08:36:43.082252977 +0000 UTC m=+0.089674298 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64)
Dec 02 08:36:43 np0005541914.localdomain podman[86352]: 2025-12-02 08:36:43.13004837 +0000 UTC m=+0.135483265 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_id=tripleo_step4, release=1761123044, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.)
Dec 02 08:36:43 np0005541914.localdomain podman[86351]: 2025-12-02 08:36:43.159311854 +0000 UTC m=+0.166733185 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true)
Dec 02 08:36:43 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:36:43 np0005541914.localdomain podman[86352]: 2025-12-02 08:36:43.498880308 +0000 UTC m=+0.504315183 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4)
Dec 02 08:36:43 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:36:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:36:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:36:46 np0005541914.localdomain podman[86400]: 2025-12-02 08:36:46.079185811 +0000 UTC m=+0.086156187 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64)
Dec 02 08:36:46 np0005541914.localdomain systemd[1]: tmp-crun.d4IGxh.mount: Deactivated successfully.
Dec 02 08:36:46 np0005541914.localdomain podman[86400]: 2025-12-02 08:36:46.133945373 +0000 UTC m=+0.140915779 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:36:46 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:36:46 np0005541914.localdomain podman[86401]: 2025-12-02 08:36:46.139021318 +0000 UTC m=+0.142259318 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, version=17.1.12, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_controller, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4)
Dec 02 08:36:46 np0005541914.localdomain podman[86401]: 2025-12-02 08:36:46.217328941 +0000 UTC m=+0.220566931 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:36:46 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:36:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:36:50 np0005541914.localdomain systemd[1]: tmp-crun.0GXMyb.mount: Deactivated successfully.
Dec 02 08:36:50 np0005541914.localdomain podman[86450]: 2025-12-02 08:36:50.102711013 +0000 UTC m=+0.086171459 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, container_name=collectd, distribution-scope=public, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, batch=17.1_20251118.1)
Dec 02 08:36:50 np0005541914.localdomain podman[86450]: 2025-12-02 08:36:50.109866306 +0000 UTC m=+0.093326682 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3)
Dec 02 08:36:50 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:36:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:36:53 np0005541914.localdomain podman[86469]: 2025-12-02 08:36:53.063645351 +0000 UTC m=+0.066381555 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12)
Dec 02 08:36:53 np0005541914.localdomain podman[86469]: 2025-12-02 08:36:53.099334888 +0000 UTC m=+0.102071142 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, container_name=iscsid, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:36:53 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:37:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:37:05 np0005541914.localdomain sudo[86533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:37:05 np0005541914.localdomain sudo[86533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:37:05 np0005541914.localdomain sudo[86533]: pam_unix(sudo:session): session closed for user root
Dec 02 08:37:05 np0005541914.localdomain podman[86546]: 2025-12-02 08:37:05.100383215 +0000 UTC m=+0.103423560 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, container_name=metrics_qdr)
Dec 02 08:37:05 np0005541914.localdomain sudo[86559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:37:05 np0005541914.localdomain sudo[86559]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:37:05 np0005541914.localdomain podman[86546]: 2025-12-02 08:37:05.29723591 +0000 UTC m=+0.300276195 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z)
Dec 02 08:37:05 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:37:05 np0005541914.localdomain sudo[86559]: pam_unix(sudo:session): session closed for user root
Dec 02 08:37:08 np0005541914.localdomain sudo[86624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:37:08 np0005541914.localdomain sudo[86624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:37:08 np0005541914.localdomain sudo[86624]: pam_unix(sudo:session): session closed for user root
Dec 02 08:37:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:37:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:37:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:37:11 np0005541914.localdomain podman[86641]: 2025-12-02 08:37:11.062565842 +0000 UTC m=+0.062039480 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, version=17.1.12, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 02 08:37:11 np0005541914.localdomain systemd[1]: tmp-crun.CokC8E.mount: Deactivated successfully.
Dec 02 08:37:11 np0005541914.localdomain podman[86640]: 2025-12-02 08:37:11.077609111 +0000 UTC m=+0.077719428 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, distribution-scope=public)
Dec 02 08:37:11 np0005541914.localdomain podman[86641]: 2025-12-02 08:37:11.086668479 +0000 UTC m=+0.086142107 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:37:11 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:37:11 np0005541914.localdomain podman[86640]: 2025-12-02 08:37:11.099778423 +0000 UTC m=+0.099888750 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044)
Dec 02 08:37:11 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:37:11 np0005541914.localdomain podman[86639]: 2025-12-02 08:37:11.162174392 +0000 UTC m=+0.164943015 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, release=1761123044, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container)
Dec 02 08:37:11 np0005541914.localdomain podman[86639]: 2025-12-02 08:37:11.172821046 +0000 UTC m=+0.175589709 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, container_name=logrotate_crond)
Dec 02 08:37:11 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:37:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 08:37:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3000.1 total, 600.0 interval
                                                          Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 447 writes, 1749 keys, 447 commit groups, 1.0 writes per commit group, ingest: 2.00 MB, 0.00 MB/s
                                                          Interval WAL: 447 writes, 173 syncs, 2.58 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 08:37:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:37:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:37:14 np0005541914.localdomain podman[86712]: 2025-12-02 08:37:14.0761107 +0000 UTC m=+0.077773628 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Dec 02 08:37:14 np0005541914.localdomain podman[86711]: 2025-12-02 08:37:14.130603974 +0000 UTC m=+0.135897987 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, version=17.1.12, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute)
Dec 02 08:37:14 np0005541914.localdomain podman[86711]: 2025-12-02 08:37:14.159872639 +0000 UTC m=+0.165166662 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, version=17.1.12, config_id=tripleo_step5, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:37:14 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:37:14 np0005541914.localdomain podman[86712]: 2025-12-02 08:37:14.476132358 +0000 UTC m=+0.477795336 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:37:14 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:37:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 08:37:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3000.2 total, 600.0 interval
                                                          Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 505 writes, 1943 keys, 505 commit groups, 1.0 writes per commit group, ingest: 2.58 MB, 0.00 MB/s
                                                          Interval WAL: 505 writes, 186 syncs, 2.72 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 08:37:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:37:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:37:17 np0005541914.localdomain podman[86758]: 2025-12-02 08:37:17.050341817 +0000 UTC m=+0.054617688 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, managed_by=tripleo_ansible)
Dec 02 08:37:17 np0005541914.localdomain systemd[1]: tmp-crun.X697zo.mount: Deactivated successfully.
Dec 02 08:37:17 np0005541914.localdomain podman[86758]: 2025-12-02 08:37:17.097806681 +0000 UTC m=+0.102082552 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1)
Dec 02 08:37:17 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:37:17 np0005541914.localdomain podman[86759]: 2025-12-02 08:37:17.098684145 +0000 UTC m=+0.100031883 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, architecture=x86_64, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=)
Dec 02 08:37:17 np0005541914.localdomain podman[86759]: 2025-12-02 08:37:17.180909831 +0000 UTC m=+0.182257559 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:37:17 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:37:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:37:21 np0005541914.localdomain podman[86805]: 2025-12-02 08:37:21.071740707 +0000 UTC m=+0.079639203 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, config_id=tripleo_step3)
Dec 02 08:37:21 np0005541914.localdomain podman[86805]: 2025-12-02 08:37:21.10975033 +0000 UTC m=+0.117648796 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:37:21 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:37:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:37:24 np0005541914.localdomain podman[86825]: 2025-12-02 08:37:24.082228987 +0000 UTC m=+0.085074526 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, container_name=iscsid, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container)
Dec 02 08:37:24 np0005541914.localdomain podman[86825]: 2025-12-02 08:37:24.118771489 +0000 UTC m=+0.121616998 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, container_name=iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:37:24 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:37:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:37:36 np0005541914.localdomain podman[86845]: 2025-12-02 08:37:36.085846308 +0000 UTC m=+0.088606507 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.openshift.expose-services=, architecture=x86_64)
Dec 02 08:37:36 np0005541914.localdomain podman[86845]: 2025-12-02 08:37:36.314152649 +0000 UTC m=+0.316912788 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, config_id=tripleo_step1, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, architecture=x86_64, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, version=17.1.12)
Dec 02 08:37:36 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:37:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:37:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:37:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:37:42 np0005541914.localdomain podman[86875]: 2025-12-02 08:37:42.0808749 +0000 UTC m=+0.082689109 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 02 08:37:42 np0005541914.localdomain podman[86875]: 2025-12-02 08:37:42.085733239 +0000 UTC m=+0.087547438 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, distribution-scope=public, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.)
Dec 02 08:37:42 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:37:42 np0005541914.localdomain systemd[1]: tmp-crun.jOeZ81.mount: Deactivated successfully.
Dec 02 08:37:42 np0005541914.localdomain podman[86876]: 2025-12-02 08:37:42.139538323 +0000 UTC m=+0.141610749 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:37:42 np0005541914.localdomain podman[86876]: 2025-12-02 08:37:42.167776138 +0000 UTC m=+0.169848544 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, tcib_managed=true, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 02 08:37:42 np0005541914.localdomain systemd[1]: tmp-crun.6a8Hbw.mount: Deactivated successfully.
Dec 02 08:37:42 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:37:42 np0005541914.localdomain podman[86877]: 2025-12-02 08:37:42.184862465 +0000 UTC m=+0.182677820 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, architecture=x86_64, io.buildah.version=1.41.4, release=1761123044, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:37:42 np0005541914.localdomain podman[86877]: 2025-12-02 08:37:42.210712333 +0000 UTC m=+0.208527718 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, vcs-type=git, container_name=ceilometer_agent_ipmi, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:37:42 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:37:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:37:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:37:45 np0005541914.localdomain podman[86946]: 2025-12-02 08:37:45.078188116 +0000 UTC m=+0.081183416 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true)
Dec 02 08:37:45 np0005541914.localdomain systemd[1]: tmp-crun.xptAV5.mount: Deactivated successfully.
Dec 02 08:37:45 np0005541914.localdomain podman[86945]: 2025-12-02 08:37:45.137349023 +0000 UTC m=+0.143301328 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:37:45 np0005541914.localdomain podman[86945]: 2025-12-02 08:37:45.167824492 +0000 UTC m=+0.173776797 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, version=17.1.12, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git)
Dec 02 08:37:45 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:37:45 np0005541914.localdomain podman[86946]: 2025-12-02 08:37:45.443908826 +0000 UTC m=+0.446904136 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:37:45 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:37:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:37:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:37:47 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:37:48 np0005541914.localdomain recover_tripleo_nova_virtqemud[87005]: 61907
Dec 02 08:37:48 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:37:48 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:37:48 np0005541914.localdomain systemd[1]: tmp-crun.Q8HDUK.mount: Deactivated successfully.
Dec 02 08:37:48 np0005541914.localdomain podman[86993]: 2025-12-02 08:37:48.086950908 +0000 UTC m=+0.086360154 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2025-11-18T23:34:05Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, container_name=ovn_controller, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:37:48 np0005541914.localdomain podman[86993]: 2025-12-02 08:37:48.112665051 +0000 UTC m=+0.112074317 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, tcib_managed=true, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller)
Dec 02 08:37:48 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:37:48 np0005541914.localdomain podman[86992]: 2025-12-02 08:37:48.135024339 +0000 UTC m=+0.137598805 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:37:48 np0005541914.localdomain podman[86992]: 2025-12-02 08:37:48.202939186 +0000 UTC m=+0.205513592 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, url=https://www.redhat.com)
Dec 02 08:37:48 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:37:49 np0005541914.localdomain systemd[1]: tmp-crun.Ly8xlf.mount: Deactivated successfully.
Dec 02 08:37:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:37:52 np0005541914.localdomain podman[87042]: 2025-12-02 08:37:52.084550639 +0000 UTC m=+0.084697136 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1)
Dec 02 08:37:52 np0005541914.localdomain podman[87042]: 2025-12-02 08:37:52.122938594 +0000 UTC m=+0.123085101 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, vcs-type=git, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12)
Dec 02 08:37:52 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:37:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:37:55 np0005541914.localdomain systemd[1]: tmp-crun.27AuaZ.mount: Deactivated successfully.
Dec 02 08:37:55 np0005541914.localdomain podman[87063]: 2025-12-02 08:37:55.081141524 +0000 UTC m=+0.082757461 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-iscsid)
Dec 02 08:37:55 np0005541914.localdomain podman[87063]: 2025-12-02 08:37:55.119122817 +0000 UTC m=+0.120738744 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, vendor=Red Hat, Inc.)
Dec 02 08:37:55 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:38:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:38:07 np0005541914.localdomain podman[87127]: 2025-12-02 08:38:07.081000239 +0000 UTC m=+0.086812437 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, container_name=metrics_qdr, release=1761123044, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step1, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc.)
Dec 02 08:38:07 np0005541914.localdomain podman[87127]: 2025-12-02 08:38:07.276746321 +0000 UTC m=+0.282558519 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, batch=17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, architecture=x86_64, config_id=tripleo_step1, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:38:07 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:38:09 np0005541914.localdomain sudo[87155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:38:09 np0005541914.localdomain sudo[87155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:38:09 np0005541914.localdomain sudo[87155]: pam_unix(sudo:session): session closed for user root
Dec 02 08:38:09 np0005541914.localdomain sudo[87170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 08:38:09 np0005541914.localdomain sudo[87170]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:38:09 np0005541914.localdomain sudo[87170]: pam_unix(sudo:session): session closed for user root
Dec 02 08:38:09 np0005541914.localdomain sudo[87205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:38:09 np0005541914.localdomain sudo[87205]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:38:09 np0005541914.localdomain sudo[87205]: pam_unix(sudo:session): session closed for user root
Dec 02 08:38:09 np0005541914.localdomain sudo[87220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:38:09 np0005541914.localdomain sudo[87220]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:38:10 np0005541914.localdomain sudo[87220]: pam_unix(sudo:session): session closed for user root
Dec 02 08:38:11 np0005541914.localdomain sudo[87266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:38:11 np0005541914.localdomain sudo[87266]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:38:11 np0005541914.localdomain sudo[87266]: pam_unix(sudo:session): session closed for user root
Dec 02 08:38:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:38:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:38:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:38:13 np0005541914.localdomain podman[87283]: 2025-12-02 08:38:13.09845674 +0000 UTC m=+0.089645877 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044)
Dec 02 08:38:13 np0005541914.localdomain podman[87283]: 2025-12-02 08:38:13.130469504 +0000 UTC m=+0.121658621 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, release=1761123044, tcib_managed=true, build-date=2025-11-19T00:12:45Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:38:13 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:38:13 np0005541914.localdomain podman[87281]: 2025-12-02 08:38:13.145820051 +0000 UTC m=+0.142244726 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, release=1761123044, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:38:13 np0005541914.localdomain podman[87281]: 2025-12-02 08:38:13.156670271 +0000 UTC m=+0.153094976 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4)
Dec 02 08:38:13 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:38:13 np0005541914.localdomain podman[87282]: 2025-12-02 08:38:13.200577073 +0000 UTC m=+0.195996230 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, release=1761123044, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true)
Dec 02 08:38:13 np0005541914.localdomain podman[87282]: 2025-12-02 08:38:13.255809589 +0000 UTC m=+0.251228746 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:11:48Z, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:38:13 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:38:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:38:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:38:16 np0005541914.localdomain podman[87354]: 2025-12-02 08:38:16.062298842 +0000 UTC m=+0.070078490 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=nova_compute, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git)
Dec 02 08:38:16 np0005541914.localdomain podman[87355]: 2025-12-02 08:38:16.120508212 +0000 UTC m=+0.123480592 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc.)
Dec 02 08:38:16 np0005541914.localdomain podman[87354]: 2025-12-02 08:38:16.140043959 +0000 UTC m=+0.147823657 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git)
Dec 02 08:38:16 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:38:16 np0005541914.localdomain podman[87355]: 2025-12-02 08:38:16.493849748 +0000 UTC m=+0.496822098 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044)
Dec 02 08:38:16 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:38:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:38:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:38:19 np0005541914.localdomain systemd[1]: tmp-crun.ostplU.mount: Deactivated successfully.
Dec 02 08:38:19 np0005541914.localdomain podman[87403]: 2025-12-02 08:38:19.064052985 +0000 UTC m=+0.068845815 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc.)
Dec 02 08:38:19 np0005541914.localdomain podman[87403]: 2025-12-02 08:38:19.083022536 +0000 UTC m=+0.087815326 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, vcs-type=git)
Dec 02 08:38:19 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:38:19 np0005541914.localdomain systemd[1]: tmp-crun.tTKR2A.mount: Deactivated successfully.
Dec 02 08:38:19 np0005541914.localdomain podman[87402]: 2025-12-02 08:38:19.117319264 +0000 UTC m=+0.121235448 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true)
Dec 02 08:38:19 np0005541914.localdomain podman[87402]: 2025-12-02 08:38:19.17681683 +0000 UTC m=+0.180733004 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=ovn_metadata_agent)
Dec 02 08:38:19 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:38:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:38:23 np0005541914.localdomain podman[87448]: 2025-12-02 08:38:23.075052478 +0000 UTC m=+0.082222616 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=collectd, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, release=1761123044, config_id=tripleo_step3, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z)
Dec 02 08:38:23 np0005541914.localdomain podman[87448]: 2025-12-02 08:38:23.090748555 +0000 UTC m=+0.097918683 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:38:23 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:38:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:38:26 np0005541914.localdomain podman[87468]: 2025-12-02 08:38:26.070688116 +0000 UTC m=+0.079454456 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 02 08:38:26 np0005541914.localdomain podman[87468]: 2025-12-02 08:38:26.084927472 +0000 UTC m=+0.093693842 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, com.redhat.component=openstack-iscsid-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=iscsid, io.buildah.version=1.41.4)
Dec 02 08:38:26 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:38:28 np0005541914.localdomain sshd[87487]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:38:28 np0005541914.localdomain sshd[87487]: Invalid user solv from 45.148.10.240 port 47738
Dec 02 08:38:28 np0005541914.localdomain sshd[87487]: Connection closed by invalid user solv 45.148.10.240 port 47738 [preauth]
Dec 02 08:38:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:38:38 np0005541914.localdomain podman[87489]: 2025-12-02 08:38:38.071427075 +0000 UTC m=+0.076197384 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4)
Dec 02 08:38:38 np0005541914.localdomain podman[87489]: 2025-12-02 08:38:38.268825123 +0000 UTC m=+0.273595412 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 02 08:38:38 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:38:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:38:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:38:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:38:44 np0005541914.localdomain podman[87519]: 2025-12-02 08:38:44.085093479 +0000 UTC m=+0.078515390 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git)
Dec 02 08:38:44 np0005541914.localdomain podman[87519]: 2025-12-02 08:38:44.116142145 +0000 UTC m=+0.109564036 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, architecture=x86_64, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com)
Dec 02 08:38:44 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:38:44 np0005541914.localdomain podman[87517]: 2025-12-02 08:38:44.195695193 +0000 UTC m=+0.194250501 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:38:44 np0005541914.localdomain podman[87517]: 2025-12-02 08:38:44.234809528 +0000 UTC m=+0.233364866 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12)
Dec 02 08:38:44 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:38:44 np0005541914.localdomain podman[87518]: 2025-12-02 08:38:44.258718081 +0000 UTC m=+0.251824553 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, io.buildah.version=1.41.4)
Dec 02 08:38:44 np0005541914.localdomain podman[87518]: 2025-12-02 08:38:44.315969833 +0000 UTC m=+0.309076285 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:38:44 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:38:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:38:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:38:47 np0005541914.localdomain podman[87591]: 2025-12-02 08:38:47.07589926 +0000 UTC m=+0.080897899 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12)
Dec 02 08:38:47 np0005541914.localdomain podman[87591]: 2025-12-02 08:38:47.135923222 +0000 UTC m=+0.140921881 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, container_name=nova_compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:38:47 np0005541914.localdomain systemd[1]: tmp-crun.vl1iFp.mount: Deactivated successfully.
Dec 02 08:38:47 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:38:47 np0005541914.localdomain podman[87592]: 2025-12-02 08:38:47.141724037 +0000 UTC m=+0.143756691 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, vcs-type=git, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Dec 02 08:38:47 np0005541914.localdomain podman[87592]: 2025-12-02 08:38:47.50686014 +0000 UTC m=+0.508892794 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, version=17.1.12)
Dec 02 08:38:47 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:38:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:38:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:38:50 np0005541914.localdomain systemd[1]: tmp-crun.T9AtK8.mount: Deactivated successfully.
Dec 02 08:38:50 np0005541914.localdomain podman[87638]: 2025-12-02 08:38:50.057092746 +0000 UTC m=+0.065359885 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12)
Dec 02 08:38:50 np0005541914.localdomain podman[87639]: 2025-12-02 08:38:50.13019539 +0000 UTC m=+0.131360026 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, release=1761123044, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:38:50 np0005541914.localdomain podman[87638]: 2025-12-02 08:38:50.148790181 +0000 UTC m=+0.157057310 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64)
Dec 02 08:38:50 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:38:50 np0005541914.localdomain podman[87639]: 2025-12-02 08:38:50.180872476 +0000 UTC m=+0.182037072 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, container_name=ovn_controller, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:38:50 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:38:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:38:54 np0005541914.localdomain systemd[1]: tmp-crun.UCiVkF.mount: Deactivated successfully.
Dec 02 08:38:54 np0005541914.localdomain podman[87685]: 2025-12-02 08:38:54.091654032 +0000 UTC m=+0.093130277 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, container_name=collectd, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:38:54 np0005541914.localdomain podman[87685]: 2025-12-02 08:38:54.128793821 +0000 UTC m=+0.130270086 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, architecture=x86_64, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Dec 02 08:38:54 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:38:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:38:57 np0005541914.localdomain podman[87706]: 2025-12-02 08:38:57.085028775 +0000 UTC m=+0.091287854 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:38:57 np0005541914.localdomain podman[87706]: 2025-12-02 08:38:57.099751234 +0000 UTC m=+0.106010333 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step3, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=)
Dec 02 08:38:57 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:39:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:39:09 np0005541914.localdomain podman[87770]: 2025-12-02 08:39:09.085019575 +0000 UTC m=+0.090510943 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, tcib_managed=true, io.buildah.version=1.41.4, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:39:09 np0005541914.localdomain podman[87770]: 2025-12-02 08:39:09.286940193 +0000 UTC m=+0.292431561 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, version=17.1.12, container_name=metrics_qdr, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git)
Dec 02 08:39:09 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:39:11 np0005541914.localdomain sudo[87799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:39:11 np0005541914.localdomain sudo[87799]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:39:11 np0005541914.localdomain sudo[87799]: pam_unix(sudo:session): session closed for user root
Dec 02 08:39:11 np0005541914.localdomain sudo[87814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:39:11 np0005541914.localdomain sudo[87814]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:39:12 np0005541914.localdomain sudo[87814]: pam_unix(sudo:session): session closed for user root
Dec 02 08:39:12 np0005541914.localdomain sudo[87860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:39:12 np0005541914.localdomain sudo[87860]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:39:12 np0005541914.localdomain sudo[87860]: pam_unix(sudo:session): session closed for user root
Dec 02 08:39:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:39:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:39:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:39:15 np0005541914.localdomain podman[87876]: 2025-12-02 08:39:15.082417725 +0000 UTC m=+0.085566712 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:39:15 np0005541914.localdomain podman[87875]: 2025-12-02 08:39:15.117638159 +0000 UTC m=+0.121468455 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:39:15 np0005541914.localdomain podman[87875]: 2025-12-02 08:39:15.128694904 +0000 UTC m=+0.132525210 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1)
Dec 02 08:39:15 np0005541914.localdomain podman[87876]: 2025-12-02 08:39:15.13731026 +0000 UTC m=+0.140459257 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true)
Dec 02 08:39:15 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:39:15 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:39:15 np0005541914.localdomain podman[87877]: 2025-12-02 08:39:15.174214882 +0000 UTC m=+0.178104200 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, architecture=x86_64, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, managed_by=tripleo_ansible)
Dec 02 08:39:15 np0005541914.localdomain podman[87877]: 2025-12-02 08:39:15.198838284 +0000 UTC m=+0.202727582 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, release=1761123044, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Dec 02 08:39:15 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:39:16 np0005541914.localdomain systemd[1]: tmp-crun.tMOTVf.mount: Deactivated successfully.
Dec 02 08:39:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:39:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:39:18 np0005541914.localdomain systemd[1]: tmp-crun.4yEucg.mount: Deactivated successfully.
Dec 02 08:39:18 np0005541914.localdomain podman[87950]: 2025-12-02 08:39:18.083117337 +0000 UTC m=+0.084683927 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, release=1761123044, container_name=nova_migration_target, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible)
Dec 02 08:39:18 np0005541914.localdomain podman[87949]: 2025-12-02 08:39:18.132520255 +0000 UTC m=+0.138437409 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, url=https://www.redhat.com)
Dec 02 08:39:18 np0005541914.localdomain podman[87949]: 2025-12-02 08:39:18.157915359 +0000 UTC m=+0.163832503 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 02 08:39:18 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:39:18 np0005541914.localdomain podman[87950]: 2025-12-02 08:39:18.43776309 +0000 UTC m=+0.439329630 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:39:18 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:39:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:39:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:39:21 np0005541914.localdomain podman[87999]: 2025-12-02 08:39:21.087966007 +0000 UTC m=+0.091798808 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:39:21 np0005541914.localdomain podman[87999]: 2025-12-02 08:39:21.145398615 +0000 UTC m=+0.149231436 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller)
Dec 02 08:39:21 np0005541914.localdomain systemd[1]: tmp-crun.It5g2i.mount: Deactivated successfully.
Dec 02 08:39:21 np0005541914.localdomain podman[87998]: 2025-12-02 08:39:21.154386141 +0000 UTC m=+0.158046098 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:14:25Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com)
Dec 02 08:39:21 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:39:21 np0005541914.localdomain podman[87998]: 2025-12-02 08:39:21.203863642 +0000 UTC m=+0.207523609 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, tcib_managed=true, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, version=17.1.12, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:39:21 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:39:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:39:25 np0005541914.localdomain systemd[1]: tmp-crun.7QINUA.mount: Deactivated successfully.
Dec 02 08:39:25 np0005541914.localdomain podman[88047]: 2025-12-02 08:39:25.08175234 +0000 UTC m=+0.084565013 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4)
Dec 02 08:39:25 np0005541914.localdomain podman[88047]: 2025-12-02 08:39:25.094903045 +0000 UTC m=+0.097715708 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:39:25 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:39:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:39:28 np0005541914.localdomain podman[88067]: 2025-12-02 08:39:28.038760826 +0000 UTC m=+0.049649997 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:39:28 np0005541914.localdomain podman[88067]: 2025-12-02 08:39:28.046963239 +0000 UTC m=+0.057852390 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Dec 02 08:39:28 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
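Note: the three lines above (podman health_status, exec_died, then the <container-id>.service unit deactivating) are one complete healthcheck cycle: systemd starts a transient "/usr/bin/podman healthcheck run <id>" unit, podman runs the probe configured for the container, records the result, and the unit exits. A minimal sketch of reading that recorded result back afterwards, assuming the podman CLI is on PATH and is invoked as the user that owns these containers; the container name "iscsid" is taken from the log line above:

    import json
    import subprocess

    def container_health(name: str) -> str:
        # "podman inspect <name>" prints a JSON array with one object per match.
        raw = subprocess.run(
            ["podman", "inspect", name],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(raw)[0].get("State", {})
        # Recent podman releases report the health record under State.Health;
        # some older ones used State.Healthcheck, so check both (assumption).
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(container_health("iscsid"))  # the event above recorded "healthy"

Running "podman healthcheck run iscsid" by hand would re-trigger the same probe that the transient unit runs on its timer.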
Dec 02 08:39:34 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:39:35 np0005541914.localdomain recover_tripleo_nova_virtqemud[88088]: 61907
Dec 02 08:39:35 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:39:35 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:39:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:39:40 np0005541914.localdomain podman[88090]: 2025-12-02 08:39:40.081907046 +0000 UTC m=+0.087637010 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, architecture=x86_64)
Dec 02 08:39:40 np0005541914.localdomain podman[88090]: 2025-12-02 08:39:40.272870402 +0000 UTC m=+0.278600296 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:39:40 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:39:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:39:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:39:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:39:46 np0005541914.localdomain systemd[1]: tmp-crun.qsRGT0.mount: Deactivated successfully.
Dec 02 08:39:46 np0005541914.localdomain podman[88119]: 2025-12-02 08:39:46.090602687 +0000 UTC m=+0.088675249 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, release=1761123044)
Dec 02 08:39:46 np0005541914.localdomain podman[88120]: 2025-12-02 08:39:46.154919221 +0000 UTC m=+0.150159062 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:39:46 np0005541914.localdomain podman[88119]: 2025-12-02 08:39:46.172976697 +0000 UTC m=+0.171049249 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, release=1761123044, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 02 08:39:46 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:39:46 np0005541914.localdomain podman[88120]: 2025-12-02 08:39:46.214204673 +0000 UTC m=+0.209444474 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Dec 02 08:39:46 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:39:46 np0005541914.localdomain podman[88121]: 2025-12-02 08:39:46.125913345 +0000 UTC m=+0.113493218 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12)
Dec 02 08:39:46 np0005541914.localdomain podman[88121]: 2025-12-02 08:39:46.262164281 +0000 UTC m=+0.249744134 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:12:45Z, architecture=x86_64)
Dec 02 08:39:46 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
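The config_data labels in the health_status events above show each container's configured probe ('/usr/share/openstack-tripleo-common/healthcheck/cron' for logrotate_crond, '/openstack/healthcheck' for the two ceilometer agents). A minimal sketch of listing the same information for every running container via podman inspect; the Config.Healthcheck.Test key path is an assumption based on recent podman releases and may differ on older ones:

    import json
    import subprocess

    def healthcheck_commands():
        # Names of running containers, one per line.
        names = subprocess.run(
            ["podman", "ps", "--format", "{{.Names}}"],
            check=True, capture_output=True, text=True,
        ).stdout.split()
        commands = {}
        for name in names:
            data = json.loads(subprocess.run(
                ["podman", "inspect", name],
                check=True, capture_output=True, text=True,
            ).stdout)[0]
            # Assumption: the probe is reported under Config.Healthcheck.Test.
            test = data.get("Config", {}).get("Healthcheck", {}).get("Test", [])
            if test:
                commands[name] = " ".join(test)
        return commands

    for name, test in sorted(healthcheck_commands().items()):
        print(f"{name}: {test}")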
Dec 02 08:39:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:39:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:39:49 np0005541914.localdomain podman[88191]: 2025-12-02 08:39:49.048782807 +0000 UTC m=+0.061727852 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, distribution-scope=public, config_id=tripleo_step5, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:39:49 np0005541914.localdomain systemd[1]: tmp-crun.VHOE7W.mount: Deactivated successfully.
Dec 02 08:39:49 np0005541914.localdomain podman[88192]: 2025-12-02 08:39:49.111875546 +0000 UTC m=+0.117829701 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vendor=Red Hat, Inc.)
Dec 02 08:39:49 np0005541914.localdomain podman[88191]: 2025-12-02 08:39:49.140886593 +0000 UTC m=+0.153831678 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, config_id=tripleo_step5, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Dec 02 08:39:49 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:39:49 np0005541914.localdomain podman[88192]: 2025-12-02 08:39:49.50477157 +0000 UTC m=+0.510725705 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, release=1761123044, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:39:49 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:39:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:39:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:39:52 np0005541914.localdomain systemd[1]: tmp-crun.TtulTE.mount: Deactivated successfully.
Dec 02 08:39:52 np0005541914.localdomain podman[88240]: 2025-12-02 08:39:52.103202722 +0000 UTC m=+0.099046266 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller)
Dec 02 08:39:52 np0005541914.localdomain podman[88239]: 2025-12-02 08:39:52.140214867 +0000 UTC m=+0.142874735 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, distribution-scope=public, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 02 08:39:52 np0005541914.localdomain podman[88240]: 2025-12-02 08:39:52.15295538 +0000 UTC m=+0.148798864 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, distribution-scope=public, tcib_managed=true)
Dec 02 08:39:52 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:39:52 np0005541914.localdomain podman[88239]: 2025-12-02 08:39:52.185990662 +0000 UTC m=+0.188650500 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, build-date=2025-11-19T00:14:25Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:39:52 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:39:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:39:56 np0005541914.localdomain podman[88286]: 2025-12-02 08:39:56.081700207 +0000 UTC m=+0.085629153 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vcs-type=git, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:39:56 np0005541914.localdomain podman[88286]: 2025-12-02 08:39:56.116444418 +0000 UTC m=+0.120373334 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:39:56 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:39:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:39:59 np0005541914.localdomain systemd[1]: tmp-crun.LVeIrm.mount: Deactivated successfully.
Dec 02 08:39:59 np0005541914.localdomain podman[88305]: 2025-12-02 08:39:59.070567372 +0000 UTC m=+0.080227799 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, container_name=iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 02 08:39:59 np0005541914.localdomain podman[88305]: 2025-12-02 08:39:59.107155686 +0000 UTC m=+0.116816103 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 02 08:39:59 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:40:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:40:11 np0005541914.localdomain systemd[1]: tmp-crun.nwmvRx.mount: Deactivated successfully.
Dec 02 08:40:11 np0005541914.localdomain podman[88347]: 2025-12-02 08:40:11.092441685 +0000 UTC m=+0.096606596 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com)
Dec 02 08:40:11 np0005541914.localdomain podman[88347]: 2025-12-02 08:40:11.310484083 +0000 UTC m=+0.314648914 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 02 08:40:11 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:40:12 np0005541914.localdomain sudo[88376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:40:12 np0005541914.localdomain sudo[88376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:40:12 np0005541914.localdomain sudo[88376]: pam_unix(sudo:session): session closed for user root
Dec 02 08:40:12 np0005541914.localdomain sudo[88391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 08:40:12 np0005541914.localdomain sudo[88391]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:40:13 np0005541914.localdomain systemd[1]: tmp-crun.OxxWDN.mount: Deactivated successfully.
Dec 02 08:40:13 np0005541914.localdomain podman[88478]: 2025-12-02 08:40:13.790155497 +0000 UTC m=+0.068122104 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, ceph=True, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main)
Dec 02 08:40:13 np0005541914.localdomain podman[88478]: 2025-12-02 08:40:13.896902701 +0000 UTC m=+0.174869248 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 08:40:14 np0005541914.localdomain sudo[88391]: pam_unix(sudo:session): session closed for user root
Dec 02 08:40:14 np0005541914.localdomain sudo[88546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:40:14 np0005541914.localdomain sudo[88546]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:40:14 np0005541914.localdomain sudo[88546]: pam_unix(sudo:session): session closed for user root
Dec 02 08:40:14 np0005541914.localdomain sudo[88561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:40:14 np0005541914.localdomain sudo[88561]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:40:14 np0005541914.localdomain sudo[88561]: pam_unix(sudo:session): session closed for user root
Dec 02 08:40:15 np0005541914.localdomain sudo[88608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:40:15 np0005541914.localdomain sudo[88608]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:40:15 np0005541914.localdomain sudo[88608]: pam_unix(sudo:session): session closed for user root
Dec 02 08:40:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:40:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:40:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:40:17 np0005541914.localdomain podman[88625]: 2025-12-02 08:40:17.134518619 +0000 UTC m=+0.128026232 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible)
Dec 02 08:40:17 np0005541914.localdomain podman[88624]: 2025-12-02 08:40:17.102862937 +0000 UTC m=+0.100318852 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:40:17 np0005541914.localdomain podman[88624]: 2025-12-02 08:40:17.188935241 +0000 UTC m=+0.186391186 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=)
Dec 02 08:40:17 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:40:17 np0005541914.localdomain podman[88623]: 2025-12-02 08:40:17.228748357 +0000 UTC m=+0.228527739 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, release=1761123044, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, architecture=x86_64, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, batch=17.1_20251118.1)
Dec 02 08:40:17 np0005541914.localdomain podman[88625]: 2025-12-02 08:40:17.243979771 +0000 UTC m=+0.237487384 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, architecture=x86_64, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z)
Dec 02 08:40:17 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:40:17 np0005541914.localdomain podman[88623]: 2025-12-02 08:40:17.265930227 +0000 UTC m=+0.265709619 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=logrotate_crond, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:40:17 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:40:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:40:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:40:20 np0005541914.localdomain podman[88694]: 2025-12-02 08:40:20.105492443 +0000 UTC m=+0.105066847 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.expose-services=, vcs-type=git, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:40:20 np0005541914.localdomain podman[88694]: 2025-12-02 08:40:20.139791901 +0000 UTC m=+0.139366325 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team)
Dec 02 08:40:20 np0005541914.localdomain systemd[1]: tmp-crun.MCtYz8.mount: Deactivated successfully.
Dec 02 08:40:20 np0005541914.localdomain podman[88695]: 2025-12-02 08:40:20.163876797 +0000 UTC m=+0.159587882 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:40:20 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:40:20 np0005541914.localdomain podman[88695]: 2025-12-02 08:40:20.529233907 +0000 UTC m=+0.524944992 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Dec 02 08:40:20 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:40:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:40:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:40:23 np0005541914.localdomain podman[88743]: 2025-12-02 08:40:23.088429838 +0000 UTC m=+0.086626892 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:14:25Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:40:23 np0005541914.localdomain systemd[1]: tmp-crun.hmnsE2.mount: Deactivated successfully.
Dec 02 08:40:23 np0005541914.localdomain podman[88744]: 2025-12-02 08:40:23.157734044 +0000 UTC m=+0.150668788 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4)
Dec 02 08:40:23 np0005541914.localdomain podman[88743]: 2025-12-02 08:40:23.165309711 +0000 UTC m=+0.163506775 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:40:23 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:40:23 np0005541914.localdomain podman[88744]: 2025-12-02 08:40:23.187405011 +0000 UTC m=+0.180339775 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller)
Dec 02 08:40:23 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:40:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:40:27 np0005541914.localdomain systemd[1]: tmp-crun.zr6gCM.mount: Deactivated successfully.
Dec 02 08:40:27 np0005541914.localdomain podman[88792]: 2025-12-02 08:40:27.071182065 +0000 UTC m=+0.071517270 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step3, container_name=collectd, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, release=1761123044, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:40:27 np0005541914.localdomain podman[88792]: 2025-12-02 08:40:27.078894915 +0000 UTC m=+0.079230120 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1)
Dec 02 08:40:27 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:40:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:40:30 np0005541914.localdomain podman[88812]: 2025-12-02 08:40:30.075157809 +0000 UTC m=+0.080141885 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, managed_by=tripleo_ansible)
Dec 02 08:40:30 np0005541914.localdomain podman[88812]: 2025-12-02 08:40:30.107003508 +0000 UTC m=+0.111987544 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, release=1761123044, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Dec 02 08:40:30 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:40:32 np0005541914.localdomain sshd[88832]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:40:32 np0005541914.localdomain sshd[88832]: Invalid user solv from 45.148.10.240 port 48728
Dec 02 08:40:33 np0005541914.localdomain sshd[88832]: Connection closed by invalid user solv 45.148.10.240 port 48728 [preauth]
Dec 02 08:40:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:40:42 np0005541914.localdomain systemd[1]: tmp-crun.LfQ5YF.mount: Deactivated successfully.
Dec 02 08:40:42 np0005541914.localdomain podman[88834]: 2025-12-02 08:40:42.119516672 +0000 UTC m=+0.122099553 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 02 08:40:42 np0005541914.localdomain podman[88834]: 2025-12-02 08:40:42.308060029 +0000 UTC m=+0.310642950 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:40:42 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:40:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:40:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:40:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:40:48 np0005541914.localdomain podman[88866]: 2025-12-02 08:40:48.080270108 +0000 UTC m=+0.087580329 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, name=rhosp17/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Dec 02 08:40:48 np0005541914.localdomain podman[88866]: 2025-12-02 08:40:48.088720879 +0000 UTC m=+0.096031110 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, name=rhosp17/openstack-cron, io.openshift.expose-services=)
Dec 02 08:40:48 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:40:48 np0005541914.localdomain podman[88868]: 2025-12-02 08:40:48.05474774 +0000 UTC m=+0.056998646 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi)
Dec 02 08:40:48 np0005541914.localdomain podman[88868]: 2025-12-02 08:40:48.132402764 +0000 UTC m=+0.134653690 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team)
Dec 02 08:40:48 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:40:48 np0005541914.localdomain podman[88867]: 2025-12-02 08:40:48.178252053 +0000 UTC m=+0.179883082 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044)
Dec 02 08:40:48 np0005541914.localdomain podman[88867]: 2025-12-02 08:40:48.23077603 +0000 UTC m=+0.232407059 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:40:48 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:40:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:40:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:40:51 np0005541914.localdomain systemd[1]: tmp-crun.KD7nZO.mount: Deactivated successfully.
Dec 02 08:40:51 np0005541914.localdomain podman[88939]: 2025-12-02 08:40:51.081696121 +0000 UTC m=+0.087851346 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=nova_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:40:51 np0005541914.localdomain systemd[1]: tmp-crun.Bl4cBn.mount: Deactivated successfully.
Dec 02 08:40:51 np0005541914.localdomain podman[88940]: 2025-12-02 08:40:51.129163905 +0000 UTC m=+0.133465938 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute)
Dec 02 08:40:51 np0005541914.localdomain podman[88939]: 2025-12-02 08:40:51.159183151 +0000 UTC m=+0.165338406 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Dec 02 08:40:51 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:40:51 np0005541914.localdomain podman[88940]: 2025-12-02 08:40:51.530394356 +0000 UTC m=+0.534696439 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Dec 02 08:40:51 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:40:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:40:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:40:54 np0005541914.localdomain podman[88985]: 2025-12-02 08:40:54.084134064 +0000 UTC m=+0.083598795 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, container_name=ovn_metadata_agent, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:40:54 np0005541914.localdomain podman[88985]: 2025-12-02 08:40:54.125358569 +0000 UTC m=+0.124823270 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.openshift.expose-services=, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:40:54 np0005541914.localdomain systemd[1]: tmp-crun.qnBagF.mount: Deactivated successfully.
Dec 02 08:40:54 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:40:54 np0005541914.localdomain podman[88986]: 2025-12-02 08:40:54.144201186 +0000 UTC m=+0.140066685 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller)
Dec 02 08:40:54 np0005541914.localdomain podman[88986]: 2025-12-02 08:40:54.167774419 +0000 UTC m=+0.163639948 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, config_id=tripleo_step4)
Dec 02 08:40:54 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:40:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:40:57 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:40:58 np0005541914.localdomain recover_tripleo_nova_virtqemud[89036]: 61907
Dec 02 08:40:58 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:40:58 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:40:58 np0005541914.localdomain podman[89034]: 2025-12-02 08:40:58.087608692 +0000 UTC m=+0.091580583 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, tcib_managed=true, architecture=x86_64, distribution-scope=public, release=1761123044)
Dec 02 08:40:58 np0005541914.localdomain podman[89034]: 2025-12-02 08:40:58.100775028 +0000 UTC m=+0.104746939 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-collectd)
Dec 02 08:40:58 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:41:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:41:01 np0005541914.localdomain podman[89056]: 2025-12-02 08:41:01.077099614 +0000 UTC m=+0.081947038 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4)
Dec 02 08:41:01 np0005541914.localdomain podman[89056]: 2025-12-02 08:41:01.113964825 +0000 UTC m=+0.118812229 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, name=rhosp17/openstack-iscsid, container_name=iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vendor=Red Hat, Inc.)
Dec 02 08:41:01 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:41:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:41:13 np0005541914.localdomain podman[89099]: 2025-12-02 08:41:13.057866302 +0000 UTC m=+0.064977873 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, version=17.1.12, batch=17.1_20251118.1, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Dec 02 08:41:13 np0005541914.localdomain podman[89099]: 2025-12-02 08:41:13.237843205 +0000 UTC m=+0.244954686 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, config_id=tripleo_step1, version=17.1.12, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd)
Dec 02 08:41:13 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:41:15 np0005541914.localdomain sudo[89128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:41:15 np0005541914.localdomain sudo[89128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:41:15 np0005541914.localdomain sudo[89128]: pam_unix(sudo:session): session closed for user root
Dec 02 08:41:15 np0005541914.localdomain sudo[89143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:41:15 np0005541914.localdomain sudo[89143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:41:16 np0005541914.localdomain sudo[89143]: pam_unix(sudo:session): session closed for user root
Dec 02 08:41:17 np0005541914.localdomain sudo[89190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:41:17 np0005541914.localdomain sudo[89190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:41:17 np0005541914.localdomain sudo[89190]: pam_unix(sudo:session): session closed for user root
Dec 02 08:41:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:41:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:41:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:41:19 np0005541914.localdomain podman[89205]: 2025-12-02 08:41:19.095341906 +0000 UTC m=+0.093682774 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:41:19 np0005541914.localdomain podman[89205]: 2025-12-02 08:41:19.109997813 +0000 UTC m=+0.108338761 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:41:19 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:41:19 np0005541914.localdomain systemd[1]: tmp-crun.ybRkqf.mount: Deactivated successfully.
Dec 02 08:41:19 np0005541914.localdomain podman[89206]: 2025-12-02 08:41:19.208632896 +0000 UTC m=+0.205288485 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2025-11-19T00:11:48Z, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1)
Dec 02 08:41:19 np0005541914.localdomain podman[89207]: 2025-12-02 08:41:19.297574473 +0000 UTC m=+0.289730733 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:41:19 np0005541914.localdomain podman[89206]: 2025-12-02 08:41:19.317310185 +0000 UTC m=+0.313965824 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:41:19 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:41:19 np0005541914.localdomain podman[89207]: 2025-12-02 08:41:19.331205472 +0000 UTC m=+0.323361702 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64)
Dec 02 08:41:19 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:41:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:41:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:41:22 np0005541914.localdomain podman[89277]: 2025-12-02 08:41:22.058974691 +0000 UTC m=+0.063780270 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, container_name=nova_compute, io.openshift.expose-services=, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Dec 02 08:41:22 np0005541914.localdomain podman[89277]: 2025-12-02 08:41:22.075666617 +0000 UTC m=+0.080472206 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, release=1761123044, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc.)
Dec 02 08:41:22 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:41:22 np0005541914.localdomain podman[89278]: 2025-12-02 08:41:22.159052935 +0000 UTC m=+0.160866099 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Dec 02 08:41:22 np0005541914.localdomain podman[89278]: 2025-12-02 08:41:22.500935145 +0000 UTC m=+0.502748309 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12)
Dec 02 08:41:22 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:41:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:41:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:41:25 np0005541914.localdomain systemd[1]: tmp-crun.iLWRWA.mount: Deactivated successfully.
Dec 02 08:41:25 np0005541914.localdomain podman[89327]: 2025-12-02 08:41:25.06870465 +0000 UTC m=+0.070797160 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., version=17.1.12, container_name=ovn_controller, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git)
Dec 02 08:41:25 np0005541914.localdomain podman[89327]: 2025-12-02 08:41:25.120099596 +0000 UTC m=+0.122192146 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:41:25 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:41:25 np0005541914.localdomain podman[89326]: 2025-12-02 08:41:25.121093694 +0000 UTC m=+0.127176747 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z)
Dec 02 08:41:25 np0005541914.localdomain podman[89326]: 2025-12-02 08:41:25.204952126 +0000 UTC m=+0.211035139 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:41:25 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:41:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:41:29 np0005541914.localdomain podman[89373]: 2025-12-02 08:41:29.075962683 +0000 UTC m=+0.081223026 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-collectd-container, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:41:29 np0005541914.localdomain podman[89373]: 2025-12-02 08:41:29.087857717 +0000 UTC m=+0.093118070 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:41:29 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:41:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:41:32 np0005541914.localdomain systemd[1]: tmp-crun.25FuUf.mount: Deactivated successfully.
Dec 02 08:41:32 np0005541914.localdomain podman[89394]: 2025-12-02 08:41:32.072792848 +0000 UTC m=+0.075869723 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, tcib_managed=true, distribution-scope=public, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container)
Dec 02 08:41:32 np0005541914.localdomain podman[89394]: 2025-12-02 08:41:32.080508594 +0000 UTC m=+0.083585409 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 02 08:41:32 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:41:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:41:44 np0005541914.localdomain systemd[1]: tmp-crun.Ba9CUP.mount: Deactivated successfully.
Dec 02 08:41:44 np0005541914.localdomain podman[89415]: 2025-12-02 08:41:44.08014474 +0000 UTC m=+0.086287981 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, config_id=tripleo_step1, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:41:44 np0005541914.localdomain podman[89415]: 2025-12-02 08:41:44.266836912 +0000 UTC m=+0.272980153 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:41:44 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:41:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:41:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:41:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:41:50 np0005541914.localdomain podman[89445]: 2025-12-02 08:41:50.075989687 +0000 UTC m=+0.078067910 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4)
Dec 02 08:41:50 np0005541914.localdomain podman[89445]: 2025-12-02 08:41:50.093321497 +0000 UTC m=+0.095399700 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044)
Dec 02 08:41:50 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:41:50 np0005541914.localdomain podman[89446]: 2025-12-02 08:41:50.132791344 +0000 UTC m=+0.129308497 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible)
Dec 02 08:41:50 np0005541914.localdomain podman[89444]: 2025-12-02 08:41:50.182674161 +0000 UTC m=+0.184752054 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=logrotate_crond, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:41:50 np0005541914.localdomain podman[89446]: 2025-12-02 08:41:50.189877382 +0000 UTC m=+0.186394465 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:41:50 np0005541914.localdomain podman[89444]: 2025-12-02 08:41:50.19671645 +0000 UTC m=+0.198794373 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible)
Dec 02 08:41:50 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:41:50 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:41:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:41:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:41:53 np0005541914.localdomain podman[89517]: 2025-12-02 08:41:53.084724805 +0000 UTC m=+0.082535737 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4)
Dec 02 08:41:53 np0005541914.localdomain podman[89516]: 2025-12-02 08:41:53.14533863 +0000 UTC m=+0.142982785 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:41:53 np0005541914.localdomain podman[89516]: 2025-12-02 08:41:53.175774811 +0000 UTC m=+0.173418906 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step5, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, distribution-scope=public)
Dec 02 08:41:53 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:41:53 np0005541914.localdomain podman[89517]: 2025-12-02 08:41:53.408970227 +0000 UTC m=+0.406781129 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, distribution-scope=public, container_name=nova_migration_target, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1761123044, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Dec 02 08:41:53 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:41:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:41:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:41:56 np0005541914.localdomain systemd[1]: tmp-crun.yEqPCH.mount: Deactivated successfully.
Dec 02 08:41:56 np0005541914.localdomain podman[89564]: 2025-12-02 08:41:56.068794499 +0000 UTC m=+0.074823580 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 02 08:41:56 np0005541914.localdomain podman[89565]: 2025-12-02 08:41:56.079162967 +0000 UTC m=+0.085280941 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=17.1.12, vcs-type=git, container_name=ovn_controller, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible)
Dec 02 08:41:56 np0005541914.localdomain podman[89564]: 2025-12-02 08:41:56.107821004 +0000 UTC m=+0.113850075 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, build-date=2025-11-19T00:14:25Z, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 02 08:41:56 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:41:56 np0005541914.localdomain podman[89565]: 2025-12-02 08:41:56.129893099 +0000 UTC m=+0.136011073 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, container_name=ovn_controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, release=1761123044, config_id=tripleo_step4, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git)
Dec 02 08:41:56 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:41:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:42:00 np0005541914.localdomain podman[89612]: 2025-12-02 08:42:00.075338008 +0000 UTC m=+0.078652537 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, container_name=collectd)
Dec 02 08:42:00 np0005541914.localdomain podman[89612]: 2025-12-02 08:42:00.084919382 +0000 UTC m=+0.088233951 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, architecture=x86_64, tcib_managed=true, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:42:00 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:42:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:42:03 np0005541914.localdomain podman[89651]: 2025-12-02 08:42:03.066284163 +0000 UTC m=+0.071117197 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, com.redhat.component=openstack-iscsid-container, container_name=iscsid, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, build-date=2025-11-18T23:44:13Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:42:03 np0005541914.localdomain podman[89651]: 2025-12-02 08:42:03.079132126 +0000 UTC m=+0.083965170 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1761123044, container_name=iscsid)
Dec 02 08:42:03 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:42:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:42:14 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:42:15 np0005541914.localdomain recover_tripleo_nova_virtqemud[89675]: 61907
Dec 02 08:42:15 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:42:15 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:42:15 np0005541914.localdomain systemd[1]: tmp-crun.gdh1li.mount: Deactivated successfully.
Dec 02 08:42:15 np0005541914.localdomain podman[89673]: 2025-12-02 08:42:15.088104666 +0000 UTC m=+0.093984796 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr)
Dec 02 08:42:15 np0005541914.localdomain podman[89673]: 2025-12-02 08:42:15.278892573 +0000 UTC m=+0.284772703 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:42:15 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:42:17 np0005541914.localdomain sudo[89704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:42:17 np0005541914.localdomain sudo[89704]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:42:17 np0005541914.localdomain sudo[89704]: pam_unix(sudo:session): session closed for user root
Dec 02 08:42:17 np0005541914.localdomain sudo[89719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:42:17 np0005541914.localdomain sudo[89719]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:42:17 np0005541914.localdomain sudo[89719]: pam_unix(sudo:session): session closed for user root
Dec 02 08:42:18 np0005541914.localdomain sudo[89766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:42:18 np0005541914.localdomain sudo[89766]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:42:18 np0005541914.localdomain sudo[89766]: pam_unix(sudo:session): session closed for user root
Dec 02 08:42:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:42:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:42:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:42:21 np0005541914.localdomain podman[89782]: 2025-12-02 08:42:21.090944067 +0000 UTC m=+0.085321301 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 02 08:42:21 np0005541914.localdomain systemd[1]: tmp-crun.WcPrud.mount: Deactivated successfully.
Dec 02 08:42:21 np0005541914.localdomain systemd[1]: tmp-crun.ah1BpU.mount: Deactivated successfully.
Dec 02 08:42:21 np0005541914.localdomain podman[89781]: 2025-12-02 08:42:21.140583736 +0000 UTC m=+0.134810986 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-cron-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, release=1761123044, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z)
Dec 02 08:42:21 np0005541914.localdomain podman[89781]: 2025-12-02 08:42:21.151702266 +0000 UTC m=+0.145929536 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, architecture=x86_64)
Dec 02 08:42:21 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:42:21 np0005541914.localdomain podman[89783]: 2025-12-02 08:42:21.200679934 +0000 UTC m=+0.190413877 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:42:21 np0005541914.localdomain podman[89782]: 2025-12-02 08:42:21.221961376 +0000 UTC m=+0.216338610 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, vcs-type=git, build-date=2025-11-19T00:11:48Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, version=17.1.12)
Dec 02 08:42:21 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:42:21 np0005541914.localdomain podman[89783]: 2025-12-02 08:42:21.254877063 +0000 UTC m=+0.244611006 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, distribution-scope=public, io.openshift.expose-services=)
Dec 02 08:42:21 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:42:22 np0005541914.localdomain systemd[1]: tmp-crun.KO6jR3.mount: Deactivated successfully.
Dec 02 08:42:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:42:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:42:24 np0005541914.localdomain podman[89852]: 2025-12-02 08:42:24.068591545 +0000 UTC m=+0.070974442 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, release=1761123044, config_id=tripleo_step5, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:42:24 np0005541914.localdomain podman[89853]: 2025-12-02 08:42:24.129686735 +0000 UTC m=+0.129959877 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 02 08:42:24 np0005541914.localdomain podman[89852]: 2025-12-02 08:42:24.150266354 +0000 UTC m=+0.152649271 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute)
Dec 02 08:42:24 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:42:24 np0005541914.localdomain podman[89853]: 2025-12-02 08:42:24.541715362 +0000 UTC m=+0.541988444 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vcs-type=git, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z)
Dec 02 08:42:24 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:42:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:42:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:42:27 np0005541914.localdomain podman[89902]: 2025-12-02 08:42:27.128675336 +0000 UTC m=+0.137834158 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git)
Dec 02 08:42:27 np0005541914.localdomain systemd[1]: tmp-crun.JLvdCv.mount: Deactivated successfully.
Dec 02 08:42:27 np0005541914.localdomain podman[89903]: 2025-12-02 08:42:27.17849291 +0000 UTC m=+0.174687785 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, distribution-scope=public, name=rhosp17/openstack-ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4)
Dec 02 08:42:27 np0005541914.localdomain podman[89902]: 2025-12-02 08:42:27.189795446 +0000 UTC m=+0.198954248 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044)
Dec 02 08:42:27 np0005541914.localdomain podman[89903]: 2025-12-02 08:42:27.197740569 +0000 UTC m=+0.193935444 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 02 08:42:27 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:42:27 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:42:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:42:31 np0005541914.localdomain systemd[1]: tmp-crun.BD5WiU.mount: Deactivated successfully.
Dec 02 08:42:31 np0005541914.localdomain podman[89950]: 2025-12-02 08:42:31.087715792 +0000 UTC m=+0.087133867 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, container_name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vcs-type=git, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:42:31 np0005541914.localdomain podman[89950]: 2025-12-02 08:42:31.101871215 +0000 UTC m=+0.101289280 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Dec 02 08:42:31 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:42:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:42:34 np0005541914.localdomain systemd[1]: tmp-crun.xmbQHP.mount: Deactivated successfully.
Dec 02 08:42:34 np0005541914.localdomain podman[89970]: 2025-12-02 08:42:34.087212517 +0000 UTC m=+0.087400655 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, container_name=iscsid, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:42:34 np0005541914.localdomain podman[89970]: 2025-12-02 08:42:34.123681993 +0000 UTC m=+0.123870141 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true)
Dec 02 08:42:34 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:42:38 np0005541914.localdomain sshd[89990]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:42:38 np0005541914.localdomain sshd[89990]: Invalid user solv from 45.148.10.240 port 55320
Dec 02 08:42:38 np0005541914.localdomain sshd[89990]: Connection closed by invalid user solv 45.148.10.240 port 55320 [preauth]
Dec 02 08:42:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:42:46 np0005541914.localdomain systemd[1]: tmp-crun.959v3f.mount: Deactivated successfully.
Dec 02 08:42:46 np0005541914.localdomain podman[89992]: 2025-12-02 08:42:46.107173577 +0000 UTC m=+0.101475585 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, vcs-type=git, managed_by=tripleo_ansible)
Dec 02 08:42:46 np0005541914.localdomain podman[89992]: 2025-12-02 08:42:46.291834818 +0000 UTC m=+0.286136806 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:42:46 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:42:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:42:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:42:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:42:52 np0005541914.localdomain systemd[1]: tmp-crun.AD6H7d.mount: Deactivated successfully.
Dec 02 08:42:52 np0005541914.localdomain podman[90023]: 2025-12-02 08:42:52.084781846 +0000 UTC m=+0.081972359 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, version=17.1.12)
Dec 02 08:42:52 np0005541914.localdomain podman[90021]: 2025-12-02 08:42:52.13885602 +0000 UTC m=+0.141263093 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 02 08:42:52 np0005541914.localdomain podman[90023]: 2025-12-02 08:42:52.144343878 +0000 UTC m=+0.141534351 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible)
Dec 02 08:42:52 np0005541914.localdomain podman[90021]: 2025-12-02 08:42:52.144811742 +0000 UTC m=+0.147218785 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, release=1761123044, vcs-type=git, version=17.1.12, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container)
Dec 02 08:42:52 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:42:52 np0005541914.localdomain podman[90022]: 2025-12-02 08:42:52.190268314 +0000 UTC m=+0.187186929 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, batch=17.1_20251118.1)
Dec 02 08:42:52 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:42:52 np0005541914.localdomain podman[90022]: 2025-12-02 08:42:52.220772326 +0000 UTC m=+0.217690941 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044)
Dec 02 08:42:52 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:42:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:42:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:42:55 np0005541914.localdomain podman[90094]: 2025-12-02 08:42:55.092919107 +0000 UTC m=+0.091192911 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:42:55 np0005541914.localdomain podman[90093]: 2025-12-02 08:42:55.151742747 +0000 UTC m=+0.154239650 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, release=1761123044, container_name=nova_compute)
Dec 02 08:42:55 np0005541914.localdomain podman[90093]: 2025-12-02 08:42:55.171990236 +0000 UTC m=+0.174487159 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, batch=17.1_20251118.1, tcib_managed=true)
Dec 02 08:42:55 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:42:55 np0005541914.localdomain podman[90094]: 2025-12-02 08:42:55.528432732 +0000 UTC m=+0.526706566 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Dec 02 08:42:55 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:42:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:42:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:42:58 np0005541914.localdomain systemd[1]: tmp-crun.wxT6Bq.mount: Deactivated successfully.
Dec 02 08:42:58 np0005541914.localdomain podman[90145]: 2025-12-02 08:42:58.085591895 +0000 UTC m=+0.089040156 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, distribution-scope=public, name=rhosp17/openstack-ovn-controller, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:42:58 np0005541914.localdomain podman[90144]: 2025-12-02 08:42:58.056824075 +0000 UTC m=+0.066942479 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Dec 02 08:42:58 np0005541914.localdomain podman[90145]: 2025-12-02 08:42:58.132921343 +0000 UTC m=+0.136369624 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, architecture=x86_64, build-date=2025-11-18T23:34:05Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com)
Dec 02 08:42:58 np0005541914.localdomain podman[90144]: 2025-12-02 08:42:58.140805965 +0000 UTC m=+0.150924339 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, release=1761123044, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:42:58 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:42:58 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:43:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:43:02 np0005541914.localdomain systemd[1]: tmp-crun.AzLlhR.mount: Deactivated successfully.
Dec 02 08:43:02 np0005541914.localdomain podman[90192]: 2025-12-02 08:43:02.089164143 +0000 UTC m=+0.091321214 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, com.redhat.component=openstack-collectd-container)
Dec 02 08:43:02 np0005541914.localdomain podman[90192]: 2025-12-02 08:43:02.10211396 +0000 UTC m=+0.104271081 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:43:02 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:43:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:43:05 np0005541914.localdomain podman[90235]: 2025-12-02 08:43:05.092382763 +0000 UTC m=+0.092362897 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=iscsid, name=rhosp17/openstack-iscsid)
Dec 02 08:43:05 np0005541914.localdomain podman[90235]: 2025-12-02 08:43:05.133212953 +0000 UTC m=+0.133193097 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, container_name=iscsid, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, config_id=tripleo_step3)
Dec 02 08:43:05 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:43:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:43:17 np0005541914.localdomain systemd[1]: tmp-crun.HZbyQn.mount: Deactivated successfully.
Dec 02 08:43:17 np0005541914.localdomain podman[90254]: 2025-12-02 08:43:17.089524153 +0000 UTC m=+0.096078622 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-qdrouterd, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:43:17 np0005541914.localdomain podman[90254]: 2025-12-02 08:43:17.29728958 +0000 UTC m=+0.303844039 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team)
Dec 02 08:43:17 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:43:18 np0005541914.localdomain sudo[90285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:43:18 np0005541914.localdomain sudo[90285]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:43:18 np0005541914.localdomain sudo[90285]: pam_unix(sudo:session): session closed for user root
Dec 02 08:43:18 np0005541914.localdomain sudo[90300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:43:18 np0005541914.localdomain sudo[90300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:43:19 np0005541914.localdomain sudo[90300]: pam_unix(sudo:session): session closed for user root
Dec 02 08:43:20 np0005541914.localdomain sudo[90346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:43:20 np0005541914.localdomain sudo[90346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:43:20 np0005541914.localdomain sudo[90346]: pam_unix(sudo:session): session closed for user root
Dec 02 08:43:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:43:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:43:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:43:23 np0005541914.localdomain systemd[1]: tmp-crun.u2XG8I.mount: Deactivated successfully.
Dec 02 08:43:23 np0005541914.localdomain podman[90361]: 2025-12-02 08:43:23.094950391 +0000 UTC m=+0.095769451 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044)
Dec 02 08:43:23 np0005541914.localdomain podman[90362]: 2025-12-02 08:43:23.147425167 +0000 UTC m=+0.148564336 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:43:23 np0005541914.localdomain podman[90361]: 2025-12-02 08:43:23.15603123 +0000 UTC m=+0.156850260 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond)
Dec 02 08:43:23 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:43:23 np0005541914.localdomain podman[90362]: 2025-12-02 08:43:23.199895913 +0000 UTC m=+0.201035042 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, release=1761123044, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team)
Dec 02 08:43:23 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:43:23 np0005541914.localdomain podman[90363]: 2025-12-02 08:43:23.246658044 +0000 UTC m=+0.242626095 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:43:23 np0005541914.localdomain podman[90363]: 2025-12-02 08:43:23.296762746 +0000 UTC m=+0.292730777 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git)
Dec 02 08:43:23 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:43:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:43:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:43:26 np0005541914.localdomain systemd[1]: tmp-crun.yJOU4t.mount: Deactivated successfully.
Dec 02 08:43:26 np0005541914.localdomain podman[90436]: 2025-12-02 08:43:26.096190491 +0000 UTC m=+0.094989008 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, container_name=nova_compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:43:26 np0005541914.localdomain podman[90436]: 2025-12-02 08:43:26.129702416 +0000 UTC m=+0.128500873 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, config_id=tripleo_step5, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, distribution-scope=public, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute)
Dec 02 08:43:26 np0005541914.localdomain systemd[1]: tmp-crun.66OldF.mount: Deactivated successfully.
Dec 02 08:43:26 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:43:26 np0005541914.localdomain podman[90437]: 2025-12-02 08:43:26.143668554 +0000 UTC m=+0.139159180 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:43:26 np0005541914.localdomain podman[90437]: 2025-12-02 08:43:26.517744989 +0000 UTC m=+0.513235595 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container)
Dec 02 08:43:26 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:43:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:43:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:43:29 np0005541914.localdomain systemd[1]: tmp-crun.zLhFX7.mount: Deactivated successfully.
Dec 02 08:43:29 np0005541914.localdomain podman[90489]: 2025-12-02 08:43:29.092946304 +0000 UTC m=+0.093271666 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 02 08:43:29 np0005541914.localdomain podman[90488]: 2025-12-02 08:43:29.072825008 +0000 UTC m=+0.080039740 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com)
Dec 02 08:43:29 np0005541914.localdomain podman[90489]: 2025-12-02 08:43:29.146592565 +0000 UTC m=+0.146917917 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.buildah.version=1.41.4, container_name=ovn_controller, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 02 08:43:29 np0005541914.localdomain podman[90488]: 2025-12-02 08:43:29.15589551 +0000 UTC m=+0.163110202 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=ovn_metadata_agent, architecture=x86_64)
Dec 02 08:43:29 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:43:29 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:43:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:43:32 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:43:33 np0005541914.localdomain recover_tripleo_nova_virtqemud[90542]: 61907
Dec 02 08:43:33 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:43:33 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:43:33 np0005541914.localdomain podman[90535]: 2025-12-02 08:43:33.0784987 +0000 UTC m=+0.081803364 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:43:33 np0005541914.localdomain podman[90535]: 2025-12-02 08:43:33.088128805 +0000 UTC m=+0.091433519 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, container_name=collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:43:33 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:43:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:43:36 np0005541914.localdomain systemd[1]: tmp-crun.k256vW.mount: Deactivated successfully.
Dec 02 08:43:36 np0005541914.localdomain podman[90557]: 2025-12-02 08:43:36.080585375 +0000 UTC m=+0.081073511 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, release=1761123044, name=rhosp17/openstack-iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, version=17.1.12, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:43:36 np0005541914.localdomain podman[90557]: 2025-12-02 08:43:36.094878783 +0000 UTC m=+0.095366959 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, url=https://www.redhat.com)
Dec 02 08:43:36 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:43:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:43:48 np0005541914.localdomain systemd[1]: tmp-crun.2dhiCB.mount: Deactivated successfully.
Dec 02 08:43:48 np0005541914.localdomain podman[90575]: 2025-12-02 08:43:48.081515202 +0000 UTC m=+0.085000133 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, version=17.1.12, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:43:48 np0005541914.localdomain podman[90575]: 2025-12-02 08:43:48.284317986 +0000 UTC m=+0.287802917 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, release=1761123044, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 02 08:43:48 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:43:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:43:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:43:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:43:54 np0005541914.localdomain podman[90604]: 2025-12-02 08:43:54.069630261 +0000 UTC m=+0.079249075 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:43:54 np0005541914.localdomain podman[90604]: 2025-12-02 08:43:54.079823894 +0000 UTC m=+0.089442708 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:43:54 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:43:54 np0005541914.localdomain podman[90605]: 2025-12-02 08:43:54.128294717 +0000 UTC m=+0.132189466 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com)
Dec 02 08:43:54 np0005541914.localdomain podman[90605]: 2025-12-02 08:43:54.186702864 +0000 UTC m=+0.190597633 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, vcs-type=git)
Dec 02 08:43:54 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:43:54 np0005541914.localdomain podman[90606]: 2025-12-02 08:43:54.18819145 +0000 UTC m=+0.189798809 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20251118.1, release=1761123044, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:43:54 np0005541914.localdomain podman[90606]: 2025-12-02 08:43:54.273977364 +0000 UTC m=+0.275584703 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.12, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible)
Dec 02 08:43:54 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:43:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:43:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:43:57 np0005541914.localdomain podman[90673]: 2025-12-02 08:43:57.049767906 +0000 UTC m=+0.062437271 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp17/openstack-nova-compute, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:43:57 np0005541914.localdomain podman[90673]: 2025-12-02 08:43:57.095699982 +0000 UTC m=+0.108369317 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64)
Dec 02 08:43:57 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:43:57 np0005541914.localdomain podman[90674]: 2025-12-02 08:43:57.173696348 +0000 UTC m=+0.179336259 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, distribution-scope=public, version=17.1.12)
Dec 02 08:43:57 np0005541914.localdomain podman[90674]: 2025-12-02 08:43:57.50582358 +0000 UTC m=+0.511463451 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, release=1761123044, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:43:57 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:43:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:43:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:44:00 np0005541914.localdomain podman[90723]: 2025-12-02 08:44:00.087890945 +0000 UTC m=+0.086955991 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.12, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4)
Dec 02 08:44:00 np0005541914.localdomain podman[90722]: 2025-12-02 08:44:00.066107809 +0000 UTC m=+0.070333403 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, version=17.1.12, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Dec 02 08:44:00 np0005541914.localdomain podman[90723]: 2025-12-02 08:44:00.133931714 +0000 UTC m=+0.132996780 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, architecture=x86_64, tcib_managed=true)
Dec 02 08:44:00 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:44:00 np0005541914.localdomain podman[90722]: 2025-12-02 08:44:00.149754338 +0000 UTC m=+0.153979912 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vcs-type=git, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Dec 02 08:44:00 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:44:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:44:04 np0005541914.localdomain podman[90770]: 2025-12-02 08:44:04.065762027 +0000 UTC m=+0.072657144 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-collectd, version=17.1.12)
Dec 02 08:44:04 np0005541914.localdomain podman[90770]: 2025-12-02 08:44:04.09789539 +0000 UTC m=+0.104790457 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, vcs-type=git, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z)
Dec 02 08:44:04 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:44:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:44:07 np0005541914.localdomain podman[90790]: 2025-12-02 08:44:07.074536727 +0000 UTC m=+0.079631408 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 02 08:44:07 np0005541914.localdomain podman[90790]: 2025-12-02 08:44:07.111918921 +0000 UTC m=+0.117013622 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12)
Dec 02 08:44:07 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:44:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:44:19 np0005541914.localdomain systemd[1]: tmp-crun.Bud94f.mount: Deactivated successfully.
Dec 02 08:44:19 np0005541914.localdomain podman[90810]: 2025-12-02 08:44:19.082707652 +0000 UTC m=+0.084682902 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, version=17.1.12, release=1761123044, build-date=2025-11-18T22:49:46Z, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:44:19 np0005541914.localdomain podman[90810]: 2025-12-02 08:44:19.308006136 +0000 UTC m=+0.309981386 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=metrics_qdr, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1)
Dec 02 08:44:19 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:44:20 np0005541914.localdomain sudo[90840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:44:20 np0005541914.localdomain sudo[90840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:44:20 np0005541914.localdomain sudo[90840]: pam_unix(sudo:session): session closed for user root
Dec 02 08:44:20 np0005541914.localdomain sudo[90855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:44:20 np0005541914.localdomain sudo[90855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:44:21 np0005541914.localdomain sudo[90855]: pam_unix(sudo:session): session closed for user root
Dec 02 08:44:21 np0005541914.localdomain sudo[90902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:44:21 np0005541914.localdomain sudo[90902]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:44:21 np0005541914.localdomain sudo[90902]: pam_unix(sudo:session): session closed for user root
Dec 02 08:44:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:44:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:44:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:44:25 np0005541914.localdomain systemd[1]: tmp-crun.Cbnaly.mount: Deactivated successfully.
Dec 02 08:44:25 np0005541914.localdomain podman[90918]: 2025-12-02 08:44:25.102747579 +0000 UTC m=+0.096928476 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, url=https://www.redhat.com, version=17.1.12, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:44:25 np0005541914.localdomain systemd[1]: tmp-crun.ZDHwnQ.mount: Deactivated successfully.
Dec 02 08:44:25 np0005541914.localdomain podman[90919]: 2025-12-02 08:44:25.149803229 +0000 UTC m=+0.141853541 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 02 08:44:25 np0005541914.localdomain podman[90918]: 2025-12-02 08:44:25.165167749 +0000 UTC m=+0.159348616 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64)
Dec 02 08:44:25 np0005541914.localdomain podman[90917]: 2025-12-02 08:44:25.175754293 +0000 UTC m=+0.175187200 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:44:25 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:44:25 np0005541914.localdomain podman[90919]: 2025-12-02 08:44:25.182722366 +0000 UTC m=+0.174772678 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:44:25 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:44:25 np0005541914.localdomain podman[90917]: 2025-12-02 08:44:25.214867559 +0000 UTC m=+0.214300526 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, container_name=logrotate_crond, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git)
Dec 02 08:44:25 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:44:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:44:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:44:28 np0005541914.localdomain systemd[1]: tmp-crun.l2efAu.mount: Deactivated successfully.
Dec 02 08:44:28 np0005541914.localdomain podman[90987]: 2025-12-02 08:44:28.075356452 +0000 UTC m=+0.081804744 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute)
Dec 02 08:44:28 np0005541914.localdomain podman[90987]: 2025-12-02 08:44:28.12888864 +0000 UTC m=+0.135336982 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, container_name=nova_compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Dec 02 08:44:28 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:44:28 np0005541914.localdomain podman[90988]: 2025-12-02 08:44:28.130877101 +0000 UTC m=+0.132250088 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, version=17.1.12, architecture=x86_64, name=rhosp17/openstack-nova-compute, release=1761123044)
Dec 02 08:44:28 np0005541914.localdomain podman[90988]: 2025-12-02 08:44:28.538924186 +0000 UTC m=+0.540297183 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, tcib_managed=true, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Dec 02 08:44:28 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:44:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:44:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:44:31 np0005541914.localdomain podman[91034]: 2025-12-02 08:44:31.044275883 +0000 UTC m=+0.052046174 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:44:31 np0005541914.localdomain podman[91034]: 2025-12-02 08:44:31.067856144 +0000 UTC m=+0.075626445 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step4, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.12, release=1761123044, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:44:31 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:44:31 np0005541914.localdomain systemd[1]: tmp-crun.pC9fKe.mount: Deactivated successfully.
Dec 02 08:44:31 np0005541914.localdomain podman[91035]: 2025-12-02 08:44:31.110461638 +0000 UTC m=+0.115659990 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:44:31 np0005541914.localdomain podman[91035]: 2025-12-02 08:44:31.160823009 +0000 UTC m=+0.166021371 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container)
Dec 02 08:44:31 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:44:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:44:35 np0005541914.localdomain podman[91082]: 2025-12-02 08:44:35.064179581 +0000 UTC m=+0.070788087 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, release=1761123044, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com)
Dec 02 08:44:35 np0005541914.localdomain podman[91082]: 2025-12-02 08:44:35.078898141 +0000 UTC m=+0.085506637 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, distribution-scope=public, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd)
Dec 02 08:44:35 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:44:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:44:38 np0005541914.localdomain podman[91102]: 2025-12-02 08:44:38.053653931 +0000 UTC m=+0.061470722 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container)
Dec 02 08:44:38 np0005541914.localdomain podman[91102]: 2025-12-02 08:44:38.060740398 +0000 UTC m=+0.068557179 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, build-date=2025-11-18T23:44:13Z)
Dec 02 08:44:38 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:44:47 np0005541914.localdomain sshd[91121]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:44:47 np0005541914.localdomain sshd[91121]: Invalid user validator from 45.148.10.240 port 45460
Dec 02 08:44:47 np0005541914.localdomain sshd[91121]: Connection closed by invalid user validator 45.148.10.240 port 45460 [preauth]
Dec 02 08:44:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:44:50 np0005541914.localdomain podman[91123]: 2025-12-02 08:44:50.079300333 +0000 UTC m=+0.084186977 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, release=1761123044, version=17.1.12, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:44:50 np0005541914.localdomain podman[91123]: 2025-12-02 08:44:50.270944247 +0000 UTC m=+0.275830891 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, version=17.1.12, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public)
Dec 02 08:44:50 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:44:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:44:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:44:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:44:56 np0005541914.localdomain systemd[1]: tmp-crun.QCdG2E.mount: Deactivated successfully.
Dec 02 08:44:56 np0005541914.localdomain podman[91151]: 2025-12-02 08:44:56.087813818 +0000 UTC m=+0.088945463 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., version=17.1.12, vcs-type=git, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, container_name=logrotate_crond, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z)
Dec 02 08:44:56 np0005541914.localdomain podman[91151]: 2025-12-02 08:44:56.121490398 +0000 UTC m=+0.122622043 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.)
Dec 02 08:44:56 np0005541914.localdomain systemd[1]: tmp-crun.yPTtOg.mount: Deactivated successfully.
Dec 02 08:44:56 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:44:56 np0005541914.localdomain podman[91153]: 2025-12-02 08:44:56.142002956 +0000 UTC m=+0.136202139 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 02 08:44:56 np0005541914.localdomain podman[91153]: 2025-12-02 08:44:56.171946762 +0000 UTC m=+0.166145915 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:44:56 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:44:56 np0005541914.localdomain podman[91152]: 2025-12-02 08:44:56.178033738 +0000 UTC m=+0.175463820 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Dec 02 08:44:56 np0005541914.localdomain podman[91152]: 2025-12-02 08:44:56.257169289 +0000 UTC m=+0.254599361 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=ceilometer_agent_compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64)
Dec 02 08:44:56 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:44:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:44:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:44:58 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:44:59 np0005541914.localdomain recover_tripleo_nova_virtqemud[91230]: 61907
Dec 02 08:44:59 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:44:59 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:44:59 np0005541914.localdomain systemd[1]: tmp-crun.c7tfKd.mount: Deactivated successfully.
Dec 02 08:44:59 np0005541914.localdomain podman[91222]: 2025-12-02 08:44:59.095396033 +0000 UTC m=+0.095634346 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team)
Dec 02 08:44:59 np0005541914.localdomain systemd[1]: tmp-crun.ySagJi.mount: Deactivated successfully.
Dec 02 08:44:59 np0005541914.localdomain podman[91223]: 2025-12-02 08:44:59.152869781 +0000 UTC m=+0.149547925 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, io.openshift.expose-services=, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:44:59 np0005541914.localdomain podman[91222]: 2025-12-02 08:44:59.175287906 +0000 UTC m=+0.175526169 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, config_id=tripleo_step5, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:44:59 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:44:59 np0005541914.localdomain podman[91223]: 2025-12-02 08:44:59.543909429 +0000 UTC m=+0.540587603 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:44:59 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:45:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:45:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:45:02 np0005541914.localdomain systemd[1]: tmp-crun.Fgr0BD.mount: Deactivated successfully.
Dec 02 08:45:02 np0005541914.localdomain podman[91276]: 2025-12-02 08:45:02.066130032 +0000 UTC m=+0.067618698 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:45:02 np0005541914.localdomain podman[91276]: 2025-12-02 08:45:02.109814508 +0000 UTC m=+0.111303194 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, version=17.1.12, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, com.redhat.component=openstack-ovn-controller-container, vcs-type=git)
Dec 02 08:45:02 np0005541914.localdomain systemd[1]: tmp-crun.C0Gina.mount: Deactivated successfully.
Dec 02 08:45:02 np0005541914.localdomain podman[91275]: 2025-12-02 08:45:02.119661039 +0000 UTC m=+0.125836919 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=ovn_metadata_agent)
Dec 02 08:45:02 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:45:02 np0005541914.localdomain podman[91275]: 2025-12-02 08:45:02.171764592 +0000 UTC m=+0.177940452 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:45:02 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:45:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:45:06 np0005541914.localdomain podman[91322]: 2025-12-02 08:45:06.058356141 +0000 UTC m=+0.064732000 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true)
Dec 02 08:45:06 np0005541914.localdomain podman[91322]: 2025-12-02 08:45:06.065698055 +0000 UTC m=+0.072073904 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Dec 02 08:45:06 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:45:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:45:09 np0005541914.localdomain podman[91342]: 2025-12-02 08:45:09.087545358 +0000 UTC m=+0.089279791 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, url=https://www.redhat.com, distribution-scope=public)
Dec 02 08:45:09 np0005541914.localdomain podman[91342]: 2025-12-02 08:45:09.128002796 +0000 UTC m=+0.129737229 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:45:09 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:45:14 np0005541914.localdomain sshd[91361]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:45:16 np0005541914.localdomain sshd[91361]: Invalid user ubuntu from 43.225.159.111 port 50362
Dec 02 08:45:16 np0005541914.localdomain sshd[91361]: Received disconnect from 43.225.159.111 port 50362:11:  [preauth]
Dec 02 08:45:16 np0005541914.localdomain sshd[91361]: Disconnected from invalid user ubuntu 43.225.159.111 port 50362 [preauth]
Dec 02 08:45:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:45:21 np0005541914.localdomain podman[91363]: 2025-12-02 08:45:21.09026337 +0000 UTC m=+0.086574859 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc.)
Dec 02 08:45:21 np0005541914.localdomain podman[91363]: 2025-12-02 08:45:21.32637669 +0000 UTC m=+0.322688129 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044)
Dec 02 08:45:21 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:45:22 np0005541914.localdomain sudo[91393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:45:22 np0005541914.localdomain sudo[91393]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:45:22 np0005541914.localdomain sudo[91393]: pam_unix(sudo:session): session closed for user root
Dec 02 08:45:22 np0005541914.localdomain sudo[91408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:45:22 np0005541914.localdomain sudo[91408]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:45:22 np0005541914.localdomain sudo[91408]: pam_unix(sudo:session): session closed for user root
Dec 02 08:45:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:45:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:45:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:45:27 np0005541914.localdomain systemd[1]: tmp-crun.YmR5fk.mount: Deactivated successfully.
Dec 02 08:45:27 np0005541914.localdomain podman[91455]: 2025-12-02 08:45:27.091665761 +0000 UTC m=+0.092344625 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, release=1761123044, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:45:27 np0005541914.localdomain systemd[1]: tmp-crun.k5iqQM.mount: Deactivated successfully.
Dec 02 08:45:27 np0005541914.localdomain podman[91454]: 2025-12-02 08:45:27.141862466 +0000 UTC m=+0.142689104 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, vcs-type=git)
Dec 02 08:45:27 np0005541914.localdomain podman[91456]: 2025-12-02 08:45:27.181601181 +0000 UTC m=+0.176718095 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git)
Dec 02 08:45:27 np0005541914.localdomain podman[91455]: 2025-12-02 08:45:27.212976431 +0000 UTC m=+0.213655375 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true)
Dec 02 08:45:27 np0005541914.localdomain sudo[91508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:45:27 np0005541914.localdomain sudo[91508]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:45:27 np0005541914.localdomain sudo[91508]: pam_unix(sudo:session): session closed for user root
Dec 02 08:45:27 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:45:27 np0005541914.localdomain podman[91454]: 2025-12-02 08:45:27.233335754 +0000 UTC m=+0.234162392 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, architecture=x86_64, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:45:27 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:45:27 np0005541914.localdomain podman[91456]: 2025-12-02 08:45:27.285037055 +0000 UTC m=+0.280153979 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi)
Dec 02 08:45:27 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:45:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:45:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:45:30 np0005541914.localdomain podman[91540]: 2025-12-02 08:45:30.079240355 +0000 UTC m=+0.083764953 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.buildah.version=1.41.4, url=https://www.redhat.com, managed_by=tripleo_ansible)
Dec 02 08:45:30 np0005541914.localdomain podman[91540]: 2025-12-02 08:45:30.136726364 +0000 UTC m=+0.141250962 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, release=1761123044, version=17.1.12, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:45:30 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:45:30 np0005541914.localdomain podman[91541]: 2025-12-02 08:45:30.143900212 +0000 UTC m=+0.143371865 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=nova_migration_target)
Dec 02 08:45:30 np0005541914.localdomain podman[91541]: 2025-12-02 08:45:30.510499734 +0000 UTC m=+0.509971437 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 02 08:45:30 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:45:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:45:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:45:33 np0005541914.localdomain systemd[1]: tmp-crun.mOyPVs.mount: Deactivated successfully.
Dec 02 08:45:33 np0005541914.localdomain podman[91590]: 2025-12-02 08:45:33.074718682 +0000 UTC m=+0.082106162 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Dec 02 08:45:33 np0005541914.localdomain podman[91591]: 2025-12-02 08:45:33.054724311 +0000 UTC m=+0.063123572 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 02 08:45:33 np0005541914.localdomain podman[91590]: 2025-12-02 08:45:33.114794218 +0000 UTC m=+0.122181668 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:45:33 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:45:33 np0005541914.localdomain podman[91591]: 2025-12-02 08:45:33.138876305 +0000 UTC m=+0.147275606 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 02 08:45:33 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:45:33 np0005541914.localdomain sshd[91638]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:45:33 np0005541914.localdomain sshd[91638]: error: kex_exchange_identification: Connection closed by remote host
Dec 02 08:45:33 np0005541914.localdomain sshd[91638]: Connection closed by 217.170.199.90 port 37892
Dec 02 08:45:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:45:37 np0005541914.localdomain systemd[1]: tmp-crun.cLtTZK.mount: Deactivated successfully.
Dec 02 08:45:37 np0005541914.localdomain podman[91639]: 2025-12-02 08:45:37.0705291 +0000 UTC m=+0.079343107 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1)
Dec 02 08:45:37 np0005541914.localdomain podman[91639]: 2025-12-02 08:45:37.084322402 +0000 UTC m=+0.093136399 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, version=17.1.12, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team)
Dec 02 08:45:37 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:45:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:45:40 np0005541914.localdomain podman[91659]: 2025-12-02 08:45:40.058300871 +0000 UTC m=+0.068865777 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public)
Dec 02 08:45:40 np0005541914.localdomain podman[91659]: 2025-12-02 08:45:40.091496566 +0000 UTC m=+0.102061462 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4)
Dec 02 08:45:40 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:45:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:45:52 np0005541914.localdomain systemd[1]: tmp-crun.cBIbNM.mount: Deactivated successfully.
Dec 02 08:45:52 np0005541914.localdomain podman[91682]: 2025-12-02 08:45:52.090766542 +0000 UTC m=+0.090045314 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:45:52 np0005541914.localdomain podman[91682]: 2025-12-02 08:45:52.30227334 +0000 UTC m=+0.301552092 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, architecture=x86_64, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 02 08:45:52 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:45:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:45:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:45:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:45:57 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:45:58 np0005541914.localdomain recover_tripleo_nova_virtqemud[91731]: 61907
Dec 02 08:45:58 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:45:58 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:45:58 np0005541914.localdomain systemd[1]: tmp-crun.SoeqUj.mount: Deactivated successfully.
Dec 02 08:45:58 np0005541914.localdomain podman[91714]: 2025-12-02 08:45:58.057676569 +0000 UTC m=+0.057073207 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 02 08:45:58 np0005541914.localdomain podman[91712]: 2025-12-02 08:45:58.083309633 +0000 UTC m=+0.084766743 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4)
Dec 02 08:45:58 np0005541914.localdomain podman[91712]: 2025-12-02 08:45:58.094857406 +0000 UTC m=+0.096314556 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.12, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4)
Dec 02 08:45:58 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:45:58 np0005541914.localdomain podman[91714]: 2025-12-02 08:45:58.140598425 +0000 UTC m=+0.139995043 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044)
Dec 02 08:45:58 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:45:58 np0005541914.localdomain podman[91713]: 2025-12-02 08:45:58.194581236 +0000 UTC m=+0.192686464 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ceilometer_agent_compute, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.)
Dec 02 08:45:58 np0005541914.localdomain podman[91713]: 2025-12-02 08:45:58.221755867 +0000 UTC m=+0.219861135 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:45:58 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:46:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:46:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:46:01 np0005541914.localdomain podman[91786]: 2025-12-02 08:46:01.064565414 +0000 UTC m=+0.070672233 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 02 08:46:01 np0005541914.localdomain podman[91785]: 2025-12-02 08:46:01.123100734 +0000 UTC m=+0.128450050 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, version=17.1.12, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044)
Dec 02 08:46:01 np0005541914.localdomain podman[91785]: 2025-12-02 08:46:01.175106315 +0000 UTC m=+0.180455601 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., container_name=nova_compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Dec 02 08:46:01 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:46:01 np0005541914.localdomain podman[91786]: 2025-12-02 08:46:01.445486703 +0000 UTC m=+0.451593462 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Dec 02 08:46:01 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:46:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:46:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:46:04 np0005541914.localdomain podman[91832]: 2025-12-02 08:46:04.08554683 +0000 UTC m=+0.086357711 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ovn_metadata_agent)
Dec 02 08:46:04 np0005541914.localdomain podman[91833]: 2025-12-02 08:46:04.121077417 +0000 UTC m=+0.120962720 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.12, url=https://www.redhat.com, container_name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:46:04 np0005541914.localdomain podman[91832]: 2025-12-02 08:46:04.144921596 +0000 UTC m=+0.145732467 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, architecture=x86_64, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z)
Dec 02 08:46:04 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:46:04 np0005541914.localdomain podman[91833]: 2025-12-02 08:46:04.171962613 +0000 UTC m=+0.171847876 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, version=17.1.12, build-date=2025-11-18T23:34:05Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 02 08:46:04 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:46:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:46:08 np0005541914.localdomain podman[91882]: 2025-12-02 08:46:08.083128141 +0000 UTC m=+0.086286709 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, container_name=collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, release=1761123044)
Dec 02 08:46:08 np0005541914.localdomain podman[91882]: 2025-12-02 08:46:08.117174693 +0000 UTC m=+0.120333281 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:46:08 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:46:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:46:11 np0005541914.localdomain systemd[1]: tmp-crun.0ylcIi.mount: Deactivated successfully.
Dec 02 08:46:11 np0005541914.localdomain podman[91902]: 2025-12-02 08:46:11.087351175 +0000 UTC m=+0.089908251 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp17/openstack-iscsid)
Dec 02 08:46:11 np0005541914.localdomain podman[91902]: 2025-12-02 08:46:11.125195722 +0000 UTC m=+0.127752788 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 02 08:46:11 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:46:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:46:23 np0005541914.localdomain systemd[1]: tmp-crun.0FDrUl.mount: Deactivated successfully.
Dec 02 08:46:23 np0005541914.localdomain podman[91921]: 2025-12-02 08:46:23.091624544 +0000 UTC m=+0.096649177 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, release=1761123044, vendor=Red Hat, Inc., container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:46:23 np0005541914.localdomain podman[91921]: 2025-12-02 08:46:23.302287796 +0000 UTC m=+0.307312349 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:46:23 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:46:27 np0005541914.localdomain sudo[91950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:46:27 np0005541914.localdomain sudo[91950]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:46:27 np0005541914.localdomain sudo[91950]: pam_unix(sudo:session): session closed for user root
Dec 02 08:46:27 np0005541914.localdomain sudo[91965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:46:27 np0005541914.localdomain sudo[91965]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:46:27 np0005541914.localdomain sudo[91965]: pam_unix(sudo:session): session closed for user root
Dec 02 08:46:28 np0005541914.localdomain sudo[92012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:46:28 np0005541914.localdomain sudo[92012]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:46:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:46:28 np0005541914.localdomain sudo[92012]: pam_unix(sudo:session): session closed for user root
Dec 02 08:46:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:46:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:46:28 np0005541914.localdomain sudo[92032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 02 08:46:28 np0005541914.localdomain sudo[92032]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:46:28 np0005541914.localdomain podman[92028]: 2025-12-02 08:46:28.494861863 +0000 UTC m=+0.095972706 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4)
Dec 02 08:46:28 np0005541914.localdomain podman[92028]: 2025-12-02 08:46:28.522630863 +0000 UTC m=+0.123741716 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:46:28 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:46:28 np0005541914.localdomain podman[92027]: 2025-12-02 08:46:28.544255373 +0000 UTC m=+0.147175721 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:46:28 np0005541914.localdomain podman[92027]: 2025-12-02 08:46:28.552677551 +0000 UTC m=+0.155597889 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:46:28 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:46:28 np0005541914.localdomain podman[92029]: 2025-12-02 08:46:28.598256085 +0000 UTC m=+0.195000354 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com)
Dec 02 08:46:28 np0005541914.localdomain podman[92029]: 2025-12-02 08:46:28.623851768 +0000 UTC m=+0.220596067 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team)
Dec 02 08:46:28 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:46:28 np0005541914.localdomain sudo[92032]: pam_unix(sudo:session): session closed for user root
Dec 02 08:46:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:46:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:46:32 np0005541914.localdomain systemd[1]: tmp-crun.Vbyhlc.mount: Deactivated successfully.
Dec 02 08:46:32 np0005541914.localdomain podman[92133]: 2025-12-02 08:46:32.102558603 +0000 UTC m=+0.099650009 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12, config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20251118.1)
Dec 02 08:46:32 np0005541914.localdomain podman[92132]: 2025-12-02 08:46:32.136516511 +0000 UTC m=+0.135460424 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, release=1761123044)
Dec 02 08:46:32 np0005541914.localdomain podman[92132]: 2025-12-02 08:46:32.164822466 +0000 UTC m=+0.163766399 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12)
Dec 02 08:46:32 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:46:32 np0005541914.localdomain podman[92133]: 2025-12-02 08:46:32.551700688 +0000 UTC m=+0.548792034 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, version=17.1.12, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:46:32 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:46:32 np0005541914.localdomain sudo[92178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:46:32 np0005541914.localdomain sudo[92178]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:46:32 np0005541914.localdomain sudo[92178]: pam_unix(sudo:session): session closed for user root
Dec 02 08:46:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:46:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:46:35 np0005541914.localdomain podman[92193]: 2025-12-02 08:46:35.094034515 +0000 UTC m=+0.094764038 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, build-date=2025-11-19T00:14:25Z, container_name=ovn_metadata_agent, architecture=x86_64, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 02 08:46:35 np0005541914.localdomain podman[92193]: 2025-12-02 08:46:35.137923397 +0000 UTC m=+0.138652930 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1)
Dec 02 08:46:35 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:46:35 np0005541914.localdomain podman[92194]: 2025-12-02 08:46:35.162558421 +0000 UTC m=+0.158648833 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:46:35 np0005541914.localdomain podman[92194]: 2025-12-02 08:46:35.211035963 +0000 UTC m=+0.207126445 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:46:35 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:46:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:46:39 np0005541914.localdomain podman[92242]: 2025-12-02 08:46:39.072965317 +0000 UTC m=+0.076947735 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, release=1761123044, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.12, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:46:39 np0005541914.localdomain podman[92242]: 2025-12-02 08:46:39.111884527 +0000 UTC m=+0.115866955 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true)
Dec 02 08:46:39 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:46:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:46:42 np0005541914.localdomain podman[92263]: 2025-12-02 08:46:42.064506012 +0000 UTC m=+0.068302150 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, container_name=iscsid, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.buildah.version=1.41.4, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 02 08:46:42 np0005541914.localdomain podman[92263]: 2025-12-02 08:46:42.073775085 +0000 UTC m=+0.077571263 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:46:42 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:46:50 np0005541914.localdomain sshd[92282]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:46:51 np0005541914.localdomain sshd[92282]: Invalid user solana from 45.148.10.240 port 56186
Dec 02 08:46:51 np0005541914.localdomain sshd[92282]: Connection closed by invalid user solana 45.148.10.240 port 56186 [preauth]
Dec 02 08:46:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:46:54 np0005541914.localdomain systemd[1]: tmp-crun.5flj7s.mount: Deactivated successfully.
Dec 02 08:46:54 np0005541914.localdomain podman[92284]: 2025-12-02 08:46:54.086199825 +0000 UTC m=+0.086094994 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, config_id=tripleo_step1, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, architecture=x86_64, container_name=metrics_qdr, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:46:54 np0005541914.localdomain podman[92284]: 2025-12-02 08:46:54.301241382 +0000 UTC m=+0.301136621 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, release=1761123044, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible)
Dec 02 08:46:54 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:46:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:46:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:46:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:46:59 np0005541914.localdomain podman[92314]: 2025-12-02 08:46:59.048504441 +0000 UTC m=+0.056640553 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, build-date=2025-11-19T00:12:45Z, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi)
Dec 02 08:46:59 np0005541914.localdomain podman[92314]: 2025-12-02 08:46:59.092760344 +0000 UTC m=+0.100896806 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:46:59 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:46:59 np0005541914.localdomain podman[92312]: 2025-12-02 08:46:59.107147174 +0000 UTC m=+0.117179595 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:46:59 np0005541914.localdomain podman[92312]: 2025-12-02 08:46:59.11585297 +0000 UTC m=+0.125885391 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:46:59 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:46:59 np0005541914.localdomain podman[92313]: 2025-12-02 08:46:59.129202459 +0000 UTC m=+0.135682500 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, url=https://www.redhat.com)
Dec 02 08:46:59 np0005541914.localdomain podman[92313]: 2025-12-02 08:46:59.174837934 +0000 UTC m=+0.181317975 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=ceilometer_agent_compute, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:46:59 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:47:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:47:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:47:03 np0005541914.localdomain podman[92384]: 2025-12-02 08:47:03.066305061 +0000 UTC m=+0.069527257 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Dec 02 08:47:03 np0005541914.localdomain systemd[1]: tmp-crun.BAwf0D.mount: Deactivated successfully.
Dec 02 08:47:03 np0005541914.localdomain podman[92383]: 2025-12-02 08:47:03.139670495 +0000 UTC m=+0.144722577 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=)
Dec 02 08:47:03 np0005541914.localdomain podman[92383]: 2025-12-02 08:47:03.164012009 +0000 UTC m=+0.169064041 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step5, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:47:03 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:47:03 np0005541914.localdomain podman[92384]: 2025-12-02 08:47:03.435274104 +0000 UTC m=+0.438496310 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, container_name=nova_migration_target)
Dec 02 08:47:03 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:47:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:47:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:47:06 np0005541914.localdomain podman[92433]: 2025-12-02 08:47:06.083732738 +0000 UTC m=+0.084803955 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1761123044, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, version=17.1.12, distribution-scope=public)
Dec 02 08:47:06 np0005541914.localdomain podman[92433]: 2025-12-02 08:47:06.106755192 +0000 UTC m=+0.107826389 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64)
Dec 02 08:47:06 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Deactivated successfully.
Dec 02 08:47:06 np0005541914.localdomain podman[92432]: 2025-12-02 08:47:06.183326723 +0000 UTC m=+0.186665429 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:47:06 np0005541914.localdomain podman[92432]: 2025-12-02 08:47:06.232972182 +0000 UTC m=+0.236310888 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true)
Dec 02 08:47:06 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:47:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:47:10 np0005541914.localdomain podman[92479]: 2025-12-02 08:47:10.06304039 +0000 UTC m=+0.073475328 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, release=1761123044, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 02 08:47:10 np0005541914.localdomain podman[92479]: 2025-12-02 08:47:10.077859072 +0000 UTC m=+0.088293940 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, config_id=tripleo_step3, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:47:10 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 08:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3600.1 total, 600.0 interval
                                                          Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 08:47:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:47:13 np0005541914.localdomain podman[92502]: 2025-12-02 08:47:13.075027281 +0000 UTC m=+0.081746661 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3)
Dec 02 08:47:13 np0005541914.localdomain podman[92502]: 2025-12-02 08:47:13.084962305 +0000 UTC m=+0.091681665 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.buildah.version=1.41.4)
Dec 02 08:47:13 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 08:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3600.2 total, 600.0 interval
                                                          Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 08:47:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:47:24 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:47:25 np0005541914.localdomain recover_tripleo_nova_virtqemud[92528]: 61907
Dec 02 08:47:25 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:47:25 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:47:25 np0005541914.localdomain systemd[1]: tmp-crun.SgRY0A.mount: Deactivated successfully.
Dec 02 08:47:25 np0005541914.localdomain podman[92521]: 2025-12-02 08:47:25.087478099 +0000 UTC m=+0.089906890 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-type=git, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container)
Dec 02 08:47:25 np0005541914.localdomain podman[92521]: 2025-12-02 08:47:25.306856689 +0000 UTC m=+0.309285470 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, release=1761123044, managed_by=tripleo_ansible, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:47:25 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:47:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:47:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:47:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:47:30 np0005541914.localdomain podman[92552]: 2025-12-02 08:47:30.081628189 +0000 UTC m=+0.086356693 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, container_name=logrotate_crond, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:47:30 np0005541914.localdomain systemd[1]: tmp-crun.ST722N.mount: Deactivated successfully.
Dec 02 08:47:30 np0005541914.localdomain podman[92553]: 2025-12-02 08:47:30.138673193 +0000 UTC m=+0.141187269 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:47:30 np0005541914.localdomain podman[92552]: 2025-12-02 08:47:30.166244566 +0000 UTC m=+0.170973090 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:47:30 np0005541914.localdomain systemd[1]: tmp-crun.yY6ws1.mount: Deactivated successfully.
Dec 02 08:47:30 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:47:30 np0005541914.localdomain podman[92554]: 2025-12-02 08:47:30.188718273 +0000 UTC m=+0.186907096 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, tcib_managed=true, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc.)
Dec 02 08:47:30 np0005541914.localdomain podman[92553]: 2025-12-02 08:47:30.220086512 +0000 UTC m=+0.222600568 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, version=17.1.12, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:47:30 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:47:30 np0005541914.localdomain podman[92554]: 2025-12-02 08:47:30.242034414 +0000 UTC m=+0.240223057 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:47:30 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:47:32 np0005541914.localdomain sudo[92624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:47:32 np0005541914.localdomain sudo[92624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:47:32 np0005541914.localdomain sudo[92624]: pam_unix(sudo:session): session closed for user root
Dec 02 08:47:32 np0005541914.localdomain sudo[92639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:47:32 np0005541914.localdomain sudo[92639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:47:33 np0005541914.localdomain sudo[92639]: pam_unix(sudo:session): session closed for user root
Dec 02 08:47:33 np0005541914.localdomain sudo[92686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:47:33 np0005541914.localdomain sudo[92686]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:47:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:47:33 np0005541914.localdomain sudo[92686]: pam_unix(sudo:session): session closed for user root
Dec 02 08:47:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:47:33 np0005541914.localdomain sudo[92703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 -- inventory --format=json-pretty --filter-for-batch
Dec 02 08:47:33 np0005541914.localdomain sudo[92703]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:47:33 np0005541914.localdomain systemd[1]: tmp-crun.7Kg4MV.mount: Deactivated successfully.
Dec 02 08:47:33 np0005541914.localdomain podman[92701]: 2025-12-02 08:47:33.668319044 +0000 UTC m=+0.075284573 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:47:33 np0005541914.localdomain podman[92702]: 2025-12-02 08:47:33.727267627 +0000 UTC m=+0.133344998 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com)
Dec 02 08:47:33 np0005541914.localdomain podman[92701]: 2025-12-02 08:47:33.750630682 +0000 UTC m=+0.157596271 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, container_name=nova_compute)
Dec 02 08:47:33 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:47:34 np0005541914.localdomain podman[92702]: 2025-12-02 08:47:34.128760315 +0000 UTC m=+0.534837646 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:47:34 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:47:34 np0005541914.localdomain podman[92803]: 
Dec 02 08:47:34 np0005541914.localdomain podman[92803]: 2025-12-02 08:47:34.226893867 +0000 UTC m=+0.072666453 container create 65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sinoussi, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, ceph=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, architecture=x86_64)
Dec 02 08:47:34 np0005541914.localdomain systemd[1]: Started libpod-conmon-65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29.scope.
Dec 02 08:47:34 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:47:34 np0005541914.localdomain podman[92803]: 2025-12-02 08:47:34.296278339 +0000 UTC m=+0.142050895 container init 65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sinoussi, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, com.redhat.component=rhceph-container, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=)
Dec 02 08:47:34 np0005541914.localdomain podman[92803]: 2025-12-02 08:47:34.197875269 +0000 UTC m=+0.043647835 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 08:47:34 np0005541914.localdomain podman[92803]: 2025-12-02 08:47:34.306272164 +0000 UTC m=+0.152044720 container start 65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sinoussi, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.4, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.openshift.expose-services=)
Dec 02 08:47:34 np0005541914.localdomain podman[92803]: 2025-12-02 08:47:34.306395278 +0000 UTC m=+0.152167834 container attach 65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sinoussi, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1763362218, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public, version=7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph)
Dec 02 08:47:34 np0005541914.localdomain practical_sinoussi[92819]: 167 167
Dec 02 08:47:34 np0005541914.localdomain systemd[1]: libpod-65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29.scope: Deactivated successfully.
Dec 02 08:47:34 np0005541914.localdomain podman[92803]: 2025-12-02 08:47:34.312572587 +0000 UTC m=+0.158345163 container died 65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sinoussi, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, build-date=2025-11-26T19:44:28Z, RELEASE=main, version=7, architecture=x86_64, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=)
Dec 02 08:47:34 np0005541914.localdomain podman[92824]: 2025-12-02 08:47:34.382884487 +0000 UTC m=+0.062339137 container remove 65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_sinoussi, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, release=1763362218, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, distribution-scope=public, name=rhceph, RELEASE=main, version=7, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 08:47:34 np0005541914.localdomain systemd[1]: libpod-conmon-65075710a3a76baec8d241a3692900b096538f3dafcc1e392ceea190ce7c6f29.scope: Deactivated successfully.
Dec 02 08:47:34 np0005541914.localdomain podman[92845]: 
Dec 02 08:47:34 np0005541914.localdomain podman[92845]: 2025-12-02 08:47:34.565872133 +0000 UTC m=+0.052191817 container create 51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_mccarthy, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, distribution-scope=public, RELEASE=main, CEPH_POINT_RELEASE=, version=7, name=rhceph)
Dec 02 08:47:34 np0005541914.localdomain systemd[1]: Started libpod-conmon-51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb.scope.
Dec 02 08:47:34 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 08:47:34 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a71f6a8c6ec1d1ab1b3686d5b40fa2aa704eaf1d0854ac4a3921bab55c8903/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 08:47:34 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a71f6a8c6ec1d1ab1b3686d5b40fa2aa704eaf1d0854ac4a3921bab55c8903/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 08:47:34 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a71f6a8c6ec1d1ab1b3686d5b40fa2aa704eaf1d0854ac4a3921bab55c8903/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 08:47:34 np0005541914.localdomain podman[92845]: 2025-12-02 08:47:34.623925698 +0000 UTC m=+0.110245342 container init 51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_mccarthy, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7)
Dec 02 08:47:34 np0005541914.localdomain podman[92845]: 2025-12-02 08:47:34.630288903 +0000 UTC m=+0.116608547 container start 51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_mccarthy, distribution-scope=public, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., name=rhceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7)
Dec 02 08:47:34 np0005541914.localdomain podman[92845]: 2025-12-02 08:47:34.630466829 +0000 UTC m=+0.116786493 container attach 51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_mccarthy, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1763362218)
Dec 02 08:47:34 np0005541914.localdomain podman[92845]: 2025-12-02 08:47:34.545676716 +0000 UTC m=+0.031996390 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 08:47:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3703270082c346e915c0c2c64e6bbc9751038af99dd0c78722d640cc272922a2-merged.mount: Deactivated successfully.
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]: [
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:     {
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:         "available": false,
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:         "ceph_device": false,
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:         "lsm_data": {},
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:         "lvs": [],
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:         "path": "/dev/sr0",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:         "rejected_reasons": [
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "Insufficient space (<5GB)",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "Has a FileSystem"
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:         ],
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:         "sys_api": {
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "actuators": null,
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "device_nodes": "sr0",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "human_readable_size": "482.00 KB",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "id_bus": "ata",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "model": "QEMU DVD-ROM",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "nr_requests": "2",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "partitions": {},
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "path": "/dev/sr0",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "removable": "1",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "rev": "2.5+",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "ro": "0",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "rotational": "1",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "sas_address": "",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "sas_device_handle": "",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "scheduler_mode": "mq-deadline",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "sectors": 0,
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "sectorsize": "2048",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "size": 493568.0,
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "support_discard": "0",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "type": "disk",
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:             "vendor": "QEMU"
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:         }
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]:     }
Dec 02 08:47:35 np0005541914.localdomain pensive_mccarthy[92860]: ]
Dec 02 08:47:35 np0005541914.localdomain systemd[1]: libpod-51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb.scope: Deactivated successfully.
Dec 02 08:47:35 np0005541914.localdomain podman[92845]: 2025-12-02 08:47:35.552339921 +0000 UTC m=+1.038659615 container died 51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_mccarthy, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z)
Dec 02 08:47:35 np0005541914.localdomain systemd[1]: tmp-crun.PYKnMo.mount: Deactivated successfully.
Dec 02 08:47:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d5a71f6a8c6ec1d1ab1b3686d5b40fa2aa704eaf1d0854ac4a3921bab55c8903-merged.mount: Deactivated successfully.
Dec 02 08:47:35 np0005541914.localdomain podman[94894]: 2025-12-02 08:47:35.65924854 +0000 UTC m=+0.092942023 container remove 51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_mccarthy, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 08:47:35 np0005541914.localdomain systemd[1]: libpod-conmon-51b658b7dc7a640b3ab3b8826e2097d7b061446628d185ebf21cac5f17ccfdcb.scope: Deactivated successfully.
Dec 02 08:47:35 np0005541914.localdomain sudo[92703]: pam_unix(sudo:session): session closed for user root
Dec 02 08:47:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:47:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:47:37 np0005541914.localdomain podman[94908]: 2025-12-02 08:47:37.073324575 +0000 UTC m=+0.075627734 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, vcs-type=git, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:47:37 np0005541914.localdomain podman[94908]: 2025-12-02 08:47:37.116907497 +0000 UTC m=+0.119210616 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-19T00:14:25Z, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:47:37 np0005541914.localdomain systemd[1]: tmp-crun.c9TKR0.mount: Deactivated successfully.
Dec 02 08:47:37 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:47:37 np0005541914.localdomain podman[94909]: 2025-12-02 08:47:37.137440396 +0000 UTC m=+0.138720024 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, build-date=2025-11-18T23:34:05Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4)
Dec 02 08:47:37 np0005541914.localdomain podman[94909]: 2025-12-02 08:47:37.187795085 +0000 UTC m=+0.189074704 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, release=1761123044)
Dec 02 08:47:37 np0005541914.localdomain podman[94909]: unhealthy
Dec 02 08:47:37 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:47:37 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:47:38 np0005541914.localdomain sudo[94959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:47:38 np0005541914.localdomain sudo[94959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:47:38 np0005541914.localdomain sudo[94959]: pam_unix(sudo:session): session closed for user root
Dec 02 08:47:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:47:41 np0005541914.localdomain podman[94974]: 2025-12-02 08:47:41.083307256 +0000 UTC m=+0.084763783 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, name=rhosp17/openstack-collectd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, batch=17.1_20251118.1, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 02 08:47:41 np0005541914.localdomain podman[94974]: 2025-12-02 08:47:41.114301934 +0000 UTC m=+0.115758471 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.expose-services=, container_name=collectd, architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 02 08:47:41 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:47:41 np0005541914.localdomain sshd[94994]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:47:42 np0005541914.localdomain sshd[94994]: error: kex_exchange_identification: Connection closed by remote host
Dec 02 08:47:42 np0005541914.localdomain sshd[94994]: Connection closed by 80.94.92.182 port 48994
Dec 02 08:47:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:47:44 np0005541914.localdomain podman[94995]: 2025-12-02 08:47:44.072021035 +0000 UTC m=+0.073083656 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, tcib_managed=true)
Dec 02 08:47:44 np0005541914.localdomain podman[94995]: 2025-12-02 08:47:44.108064407 +0000 UTC m=+0.109126988 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, vendor=Red Hat, Inc.)
Dec 02 08:47:44 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:47:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:47:56 np0005541914.localdomain systemd[1]: tmp-crun.6nf2HH.mount: Deactivated successfully.
Dec 02 08:47:56 np0005541914.localdomain podman[95015]: 2025-12-02 08:47:56.086957431 +0000 UTC m=+0.092848990 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=metrics_qdr, config_id=tripleo_step1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044)
Dec 02 08:47:56 np0005541914.localdomain podman[95015]: 2025-12-02 08:47:56.292951731 +0000 UTC m=+0.298843300 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vendor=Red Hat, Inc., container_name=metrics_qdr, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, managed_by=tripleo_ansible)
Dec 02 08:47:56 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:48:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:48:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:48:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:48:01 np0005541914.localdomain podman[95044]: 2025-12-02 08:48:01.081014408 +0000 UTC m=+0.080393860 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, tcib_managed=true, distribution-scope=public)
Dec 02 08:48:01 np0005541914.localdomain podman[95044]: 2025-12-02 08:48:01.107789316 +0000 UTC m=+0.107168758 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:48:01 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:48:01 np0005541914.localdomain podman[95045]: 2025-12-02 08:48:01.184944406 +0000 UTC m=+0.181471271 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true)
Dec 02 08:48:01 np0005541914.localdomain podman[95045]: 2025-12-02 08:48:01.215519962 +0000 UTC m=+0.212046817 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 02 08:48:01 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:48:01 np0005541914.localdomain podman[95043]: 2025-12-02 08:48:01.233681997 +0000 UTC m=+0.236601607 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 02 08:48:01 np0005541914.localdomain podman[95043]: 2025-12-02 08:48:01.263977373 +0000 UTC m=+0.266897013 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, vcs-type=git, container_name=logrotate_crond, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com)
Dec 02 08:48:01 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:48:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:48:04 np0005541914.localdomain podman[95115]: 2025-12-02 08:48:04.071240434 +0000 UTC m=+0.079967936 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, version=17.1.12, url=https://www.redhat.com, io.openshift.expose-services=, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:48:04 np0005541914.localdomain podman[95115]: 2025-12-02 08:48:04.101932502 +0000 UTC m=+0.110660034 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute)
Dec 02 08:48:04 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:48:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:48:05 np0005541914.localdomain podman[95141]: 2025-12-02 08:48:05.074711701 +0000 UTC m=+0.079704288 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute)
Dec 02 08:48:05 np0005541914.localdomain podman[95141]: 2025-12-02 08:48:05.447025957 +0000 UTC m=+0.452019004 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:48:05 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:48:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:48:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:48:08 np0005541914.localdomain systemd[1]: tmp-crun.YEQ0u3.mount: Deactivated successfully.
Dec 02 08:48:08 np0005541914.localdomain podman[95166]: 2025-12-02 08:48:08.049329649 +0000 UTC m=+0.060830822 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:48:08 np0005541914.localdomain podman[95166]: 2025-12-02 08:48:08.100851525 +0000 UTC m=+0.112352748 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, url=https://www.redhat.com, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1)
Dec 02 08:48:08 np0005541914.localdomain podman[95166]: unhealthy
Dec 02 08:48:08 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:48:08 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:48:08 np0005541914.localdomain podman[95165]: 2025-12-02 08:48:08.103996552 +0000 UTC m=+0.113958387 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2025-11-19T00:14:25Z, release=1761123044, batch=17.1_20251118.1, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:48:08 np0005541914.localdomain podman[95165]: 2025-12-02 08:48:08.192013123 +0000 UTC m=+0.201974948 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, container_name=ovn_metadata_agent, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:48:08 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Deactivated successfully.
Dec 02 08:48:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:48:12 np0005541914.localdomain podman[95215]: 2025-12-02 08:48:12.075262479 +0000 UTC m=+0.082239056 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-type=git, name=rhosp17/openstack-collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 02 08:48:12 np0005541914.localdomain podman[95215]: 2025-12-02 08:48:12.084192172 +0000 UTC m=+0.091168739 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4)
Dec 02 08:48:12 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:48:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:48:15 np0005541914.localdomain podman[95237]: 2025-12-02 08:48:15.05192063 +0000 UTC m=+0.057858191 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, container_name=iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.expose-services=)
Dec 02 08:48:15 np0005541914.localdomain podman[95237]: 2025-12-02 08:48:15.058248783 +0000 UTC m=+0.064186364 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-iscsid, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.4, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 02 08:48:15 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:48:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:48:27 np0005541914.localdomain podman[95256]: 2025-12-02 08:48:27.328150506 +0000 UTC m=+0.081829253 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=)
Dec 02 08:48:27 np0005541914.localdomain podman[95256]: 2025-12-02 08:48:27.524949624 +0000 UTC m=+0.278628411 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:48:27 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:48:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:48:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:48:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:48:32 np0005541914.localdomain podman[95286]: 2025-12-02 08:48:32.066603626 +0000 UTC m=+0.069179697 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:48:32 np0005541914.localdomain podman[95285]: 2025-12-02 08:48:32.091826507 +0000 UTC m=+0.095274884 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, architecture=x86_64, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team)
Dec 02 08:48:32 np0005541914.localdomain podman[95285]: 2025-12-02 08:48:32.099251424 +0000 UTC m=+0.102699831 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-cron-container, release=1761123044, config_id=tripleo_step4, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:48:32 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:48:32 np0005541914.localdomain podman[95286]: 2025-12-02 08:48:32.134864573 +0000 UTC m=+0.137440644 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true)
Dec 02 08:48:32 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:48:32 np0005541914.localdomain podman[95287]: 2025-12-02 08:48:32.183220762 +0000 UTC m=+0.179532812 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi)
Dec 02 08:48:32 np0005541914.localdomain podman[95287]: 2025-12-02 08:48:32.21160772 +0000 UTC m=+0.207919800 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:48:32 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:48:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:48:35 np0005541914.localdomain podman[95354]: 2025-12-02 08:48:35.078116952 +0000 UTC m=+0.081438193 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute)
Dec 02 08:48:35 np0005541914.localdomain podman[95354]: 2025-12-02 08:48:35.136851717 +0000 UTC m=+0.140172888 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=nova_compute, version=17.1.12, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step5)
Dec 02 08:48:35 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:48:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:48:36 np0005541914.localdomain systemd[1]: tmp-crun.bbrLaw.mount: Deactivated successfully.
Dec 02 08:48:36 np0005541914.localdomain podman[95379]: 2025-12-02 08:48:36.09470397 +0000 UTC m=+0.090091426 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=nova_migration_target, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team)
Dec 02 08:48:36 np0005541914.localdomain podman[95379]: 2025-12-02 08:48:36.494093284 +0000 UTC m=+0.489480740 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:48:36 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:48:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:48:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:48:39 np0005541914.localdomain sudo[95402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:48:39 np0005541914.localdomain systemd[1]: tmp-crun.tuNFF1.mount: Deactivated successfully.
Dec 02 08:48:39 np0005541914.localdomain sudo[95402]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:48:39 np0005541914.localdomain podman[95401]: 2025-12-02 08:48:39.084418831 +0000 UTC m=+0.085852757 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, container_name=ovn_controller, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller)
Dec 02 08:48:39 np0005541914.localdomain sudo[95402]: pam_unix(sudo:session): session closed for user root
Dec 02 08:48:39 np0005541914.localdomain podman[95401]: 2025-12-02 08:48:39.132509392 +0000 UTC m=+0.133943288 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, container_name=ovn_controller, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, tcib_managed=true, io.buildah.version=1.41.4)
Dec 02 08:48:39 np0005541914.localdomain podman[95401]: unhealthy
Dec 02 08:48:39 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:48:39 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:48:39 np0005541914.localdomain sudo[95446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 08:48:39 np0005541914.localdomain sudo[95446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:48:39 np0005541914.localdomain podman[95400]: 2025-12-02 08:48:39.137443383 +0000 UTC m=+0.138224009 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:48:39 np0005541914.localdomain podman[95400]: 2025-12-02 08:48:39.236519672 +0000 UTC m=+0.237300298 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:48:39 np0005541914.localdomain podman[95400]: unhealthy
Dec 02 08:48:39 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:48:39 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:48:39 np0005541914.localdomain sudo[95446]: pam_unix(sudo:session): session closed for user root
Dec 02 08:48:39 np0005541914.localdomain sudo[95492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:48:39 np0005541914.localdomain sudo[95492]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:48:39 np0005541914.localdomain sudo[95492]: pam_unix(sudo:session): session closed for user root
Dec 02 08:48:39 np0005541914.localdomain sudo[95507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:48:39 np0005541914.localdomain sudo[95507]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:48:40 np0005541914.localdomain sudo[95507]: pam_unix(sudo:session): session closed for user root
Dec 02 08:48:41 np0005541914.localdomain sudo[95553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:48:41 np0005541914.localdomain sudo[95553]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:48:41 np0005541914.localdomain sudo[95553]: pam_unix(sudo:session): session closed for user root
Dec 02 08:48:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:48:43 np0005541914.localdomain podman[95568]: 2025-12-02 08:48:43.049082677 +0000 UTC m=+0.055499069 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:48:43 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:48:43 np0005541914.localdomain podman[95568]: 2025-12-02 08:48:43.091016809 +0000 UTC m=+0.097433241 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-collectd, tcib_managed=true, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, release=1761123044, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12)
Dec 02 08:48:43 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:48:43 np0005541914.localdomain recover_tripleo_nova_virtqemud[95590]: 61907
Dec 02 08:48:43 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:48:43 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:48:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:48:46 np0005541914.localdomain podman[95591]: 2025-12-02 08:48:46.065799463 +0000 UTC m=+0.067861516 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, release=1761123044, distribution-scope=public, version=17.1.12, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4)
Dec 02 08:48:46 np0005541914.localdomain podman[95591]: 2025-12-02 08:48:46.105916599 +0000 UTC m=+0.107978702 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., build-date=2025-11-18T23:44:13Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=)
Dec 02 08:48:46 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:48:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:48:58 np0005541914.localdomain systemd[1]: tmp-crun.4oZ36J.mount: Deactivated successfully.
Dec 02 08:48:58 np0005541914.localdomain podman[95609]: 2025-12-02 08:48:58.075571919 +0000 UTC m=+0.082885366 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Dec 02 08:48:58 np0005541914.localdomain podman[95609]: 2025-12-02 08:48:58.257960907 +0000 UTC m=+0.265274324 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, distribution-scope=public)
Dec 02 08:48:58 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:49:00 np0005541914.localdomain sshd[95638]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:49:00 np0005541914.localdomain sshd[95638]: Invalid user solana from 45.148.10.240 port 52184
Dec 02 08:49:00 np0005541914.localdomain sshd[95638]: Connection closed by invalid user solana 45.148.10.240 port 52184 [preauth]
Dec 02 08:49:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:49:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:49:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:49:03 np0005541914.localdomain podman[95642]: 2025-12-02 08:49:03.069011626 +0000 UTC m=+0.067240308 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, release=1761123044, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:49:03 np0005541914.localdomain systemd[1]: tmp-crun.MlcdYJ.mount: Deactivated successfully.
Dec 02 08:49:03 np0005541914.localdomain podman[95642]: 2025-12-02 08:49:03.114710014 +0000 UTC m=+0.112938686 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.buildah.version=1.41.4, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:49:03 np0005541914.localdomain podman[95641]: 2025-12-02 08:49:03.117031464 +0000 UTC m=+0.116696979 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:49:03 np0005541914.localdomain podman[95641]: 2025-12-02 08:49:03.137231533 +0000 UTC m=+0.136897038 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, maintainer=OpenStack TripleO Team)
Dec 02 08:49:03 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:49:03 np0005541914.localdomain podman[95640]: 2025-12-02 08:49:03.172336796 +0000 UTC m=+0.172144955 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step4, release=1761123044, container_name=logrotate_crond, architecture=x86_64, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:49:03 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:49:03 np0005541914.localdomain podman[95640]: 2025-12-02 08:49:03.20682627 +0000 UTC m=+0.206634419 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, version=17.1.12, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_id=tripleo_step4, release=1761123044, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64)
Dec 02 08:49:03 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:49:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:49:06 np0005541914.localdomain podman[95712]: 2025-12-02 08:49:06.075640293 +0000 UTC m=+0.079809192 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, batch=17.1_20251118.1, container_name=nova_compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64)
Dec 02 08:49:06 np0005541914.localdomain podman[95712]: 2025-12-02 08:49:06.124015892 +0000 UTC m=+0.128184811 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=nova_compute, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 02 08:49:06 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:49:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:49:07 np0005541914.localdomain podman[95739]: 2025-12-02 08:49:07.083797343 +0000 UTC m=+0.088393653 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, release=1761123044, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:49:07 np0005541914.localdomain podman[95739]: 2025-12-02 08:49:07.476791812 +0000 UTC m=+0.481388052 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, distribution-scope=public, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, vcs-type=git, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:49:07 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:49:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:49:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:49:10 np0005541914.localdomain podman[95762]: 2025-12-02 08:49:10.070341957 +0000 UTC m=+0.075665015 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, container_name=ovn_metadata_agent, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc.)
Dec 02 08:49:10 np0005541914.localdomain podman[95762]: 2025-12-02 08:49:10.110768653 +0000 UTC m=+0.116091721 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:49:10 np0005541914.localdomain podman[95762]: unhealthy
Dec 02 08:49:10 np0005541914.localdomain systemd[1]: tmp-crun.J2llWv.mount: Deactivated successfully.
Dec 02 08:49:10 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:49:10 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:49:10 np0005541914.localdomain podman[95763]: 2025-12-02 08:49:10.132211949 +0000 UTC m=+0.133442112 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 02 08:49:10 np0005541914.localdomain podman[95763]: 2025-12-02 08:49:10.146269079 +0000 UTC m=+0.147499222 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true)
Dec 02 08:49:10 np0005541914.localdomain podman[95763]: unhealthy
Dec 02 08:49:10 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:49:10 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:49:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:49:14 np0005541914.localdomain podman[95801]: 2025-12-02 08:49:14.081631707 +0000 UTC m=+0.082624347 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, tcib_managed=true, name=rhosp17/openstack-collectd, release=1761123044, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com)
Dec 02 08:49:14 np0005541914.localdomain podman[95801]: 2025-12-02 08:49:14.116839804 +0000 UTC m=+0.117832444 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, version=17.1.12, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:49:14 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:49:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:49:17 np0005541914.localdomain podman[95821]: 2025-12-02 08:49:17.054886524 +0000 UTC m=+0.066626979 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-type=git, release=1761123044, batch=17.1_20251118.1, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, version=17.1.12, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:49:17 np0005541914.localdomain podman[95821]: 2025-12-02 08:49:17.093530026 +0000 UTC m=+0.105270491 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, container_name=iscsid, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public)
Dec 02 08:49:17 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:49:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:49:29 np0005541914.localdomain systemd[1]: tmp-crun.pYuLdU.mount: Deactivated successfully.
Dec 02 08:49:29 np0005541914.localdomain podman[95840]: 2025-12-02 08:49:29.068745363 +0000 UTC m=+0.059562782 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1)
Dec 02 08:49:29 np0005541914.localdomain podman[95840]: 2025-12-02 08:49:29.252492123 +0000 UTC m=+0.243309502 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.buildah.version=1.41.4, batch=17.1_20251118.1, release=1761123044, io.openshift.expose-services=, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:49:29 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:49:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:49:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:49:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:49:34 np0005541914.localdomain systemd[1]: tmp-crun.VAtNaH.mount: Deactivated successfully.
Dec 02 08:49:34 np0005541914.localdomain podman[95871]: 2025-12-02 08:49:34.120319749 +0000 UTC m=+0.080540834 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, version=17.1.12, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:49:34 np0005541914.localdomain podman[95870]: 2025-12-02 08:49:34.089677812 +0000 UTC m=+0.055489508 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, io.buildah.version=1.41.4, tcib_managed=true, name=rhosp17/openstack-cron, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:49:34 np0005541914.localdomain podman[95872]: 2025-12-02 08:49:34.153650937 +0000 UTC m=+0.112961505 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4)
Dec 02 08:49:34 np0005541914.localdomain podman[95871]: 2025-12-02 08:49:34.165897913 +0000 UTC m=+0.126119068 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, release=1761123044, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:49:34 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:49:34 np0005541914.localdomain podman[95872]: 2025-12-02 08:49:34.181644314 +0000 UTC m=+0.140954822 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Dec 02 08:49:34 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:49:34 np0005541914.localdomain podman[95870]: 2025-12-02 08:49:34.220024117 +0000 UTC m=+0.185835853 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, distribution-scope=public, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., architecture=x86_64, container_name=logrotate_crond)
Dec 02 08:49:34 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:49:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:49:37 np0005541914.localdomain podman[95942]: 2025-12-02 08:49:37.088385206 +0000 UTC m=+0.087799676 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, container_name=nova_compute, maintainer=OpenStack TripleO Team, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step5, vcs-type=git, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 02 08:49:37 np0005541914.localdomain podman[95942]: 2025-12-02 08:49:37.138945402 +0000 UTC m=+0.138359882 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2025-11-19T00:36:58Z, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:49:37 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:49:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:49:38 np0005541914.localdomain systemd[1]: tmp-crun.M9V0l2.mount: Deactivated successfully.
Dec 02 08:49:38 np0005541914.localdomain podman[95969]: 2025-12-02 08:49:38.083548289 +0000 UTC m=+0.087967361 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:49:38 np0005541914.localdomain podman[95969]: 2025-12-02 08:49:38.474886187 +0000 UTC m=+0.479305309 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, version=17.1.12, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:49:38 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
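The entries above trace one complete healthcheck cycle per container: systemd starts a transient "/usr/bin/podman healthcheck run <id>" unit, the podman process records a health_status event and then an exec_died event for the same container ID, and the unit deactivates. A minimal sketch, assuming this journal excerpt has been saved to a plain-text file whose path is passed as the first argument (the file name and regexes are illustrative, not part of the log), pairs the two podman events per (pid, container) and reports the gap between them:

import re
import sys
from collections import defaultdict

# Match podman container events of the form:
#   podman[<pid>]: <timestamp> m=+<offset> container health_status|exec_died <64-hex id> (...)
EVENT = re.compile(
    r"podman\[(?P<pid>\d+)\]: .* m=\+(?P<mono>[0-9.]+) "
    r"container (?P<event>health_status|exec_died) (?P<cid>[0-9a-f]{64})"
)
# First name= key inside the parentheses is the container_name (e.g. name=nova_compute).
NAME = re.compile(r"\bname=(?P<name>[^,)]+)")

def main(path):
    runs = defaultdict(dict)   # (pid, container id) -> {event: monotonic offset}
    names = {}                 # container id -> container name
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = EVENT.search(line)
            if not m:
                continue
            runs[(m.group("pid"), m.group("cid"))][m.group("event")] = float(m.group("mono"))
            n = NAME.search(line)
            if n:
                names.setdefault(m.group("cid"), n.group("name"))
    for (pid, cid), ev in sorted(runs.items()):
        if "health_status" in ev and "exec_died" in ev:
            gap = ev["exec_died"] - ev["health_status"]
            print(f"{names.get(cid, cid[:12])}: pid {pid}, "
                  f"health_status->exec_died gap {gap:.3f}s")

if __name__ == "__main__":
    main(sys.argv[1])

Run against this excerpt it would report, for example, roughly a 0.05 s gap for nova_compute (pid 95942) and about 0.4 s for nova_migration_target (pid 95969), based on the m=+ offsets recorded above.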
Dec 02 08:49:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:49:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:49:41 np0005541914.localdomain systemd[1]: tmp-crun.XMhF7w.mount: Deactivated successfully.
Dec 02 08:49:41 np0005541914.localdomain podman[95991]: 2025-12-02 08:49:41.079637945 +0000 UTC m=+0.087611341 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, architecture=x86_64)
Dec 02 08:49:41 np0005541914.localdomain podman[95992]: 2025-12-02 08:49:41.124542318 +0000 UTC m=+0.131776051 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z)
Dec 02 08:49:41 np0005541914.localdomain podman[95991]: 2025-12-02 08:49:41.143161457 +0000 UTC m=+0.151134913 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, vcs-type=git, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, distribution-scope=public, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:49:41 np0005541914.localdomain podman[95991]: unhealthy
Dec 02 08:49:41 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:49:41 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:49:41 np0005541914.localdomain podman[95992]: 2025-12-02 08:49:41.168088649 +0000 UTC m=+0.175322372 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:49:41 np0005541914.localdomain podman[95992]: unhealthy
Dec 02 08:49:41 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:49:41 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
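Here both OVN healthchecks exit non-zero: podman prints "unhealthy" and systemd marks the transient units for 6bfb33b1… (ovn_metadata_agent) and b34d6130… (ovn_controller) as failed, even though the ovn_metadata_agent event label above still reads health_status=healthy. A minimal triage sketch under the same assumption (the journal text saved to a file passed as the first argument; names and patterns illustrative) lists each failed <id>.service together with the container name and the last health_status label seen for that ID:

import re
import sys

# health_status events carry the container id, container name, and status label.
HEALTH = re.compile(
    r"container health_status (?P<cid>[0-9a-f]{64}) "
    r"\(image=[^,]+, name=(?P<name>[^,]+), health_status=(?P<status>[^,)]+)"
)
# systemd marks the transient healthcheck unit <id>.service as failed on a non-zero exit.
FAILED = re.compile(
    r"systemd\[1\]: (?P<cid>[0-9a-f]{64})\.service: Failed with result 'exit-code'"
)

def main(path):
    names = {}      # container id -> container name
    statuses = {}   # container id -> last health_status label seen
    failed = []     # container ids whose healthcheck unit failed
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = HEALTH.search(line)
            if m:
                names[m.group("cid")] = m.group("name")
                statuses[m.group("cid")] = m.group("status")
            m = FAILED.search(line)
            if m:
                failed.append(m.group("cid"))
    for cid in failed:
        print(f"healthcheck unit failed: {names.get(cid, 'unknown container')} "
              f"({cid[:12]}), last reported health_status={statuses.get(cid, 'n/a')}")

if __name__ == "__main__":
    main(sys.argv[1])

For this excerpt the output would flag both ovn_metadata_agent and ovn_controller, which matches the two "Failed with result 'exit-code'" unit messages above.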
Dec 02 08:49:41 np0005541914.localdomain sudo[96032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:49:41 np0005541914.localdomain sudo[96032]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:49:41 np0005541914.localdomain sudo[96032]: pam_unix(sudo:session): session closed for user root
Dec 02 08:49:41 np0005541914.localdomain sudo[96047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:49:41 np0005541914.localdomain sudo[96047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:49:41 np0005541914.localdomain sudo[96047]: pam_unix(sudo:session): session closed for user root
Dec 02 08:49:42 np0005541914.localdomain sudo[96093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:49:42 np0005541914.localdomain sudo[96093]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:49:42 np0005541914.localdomain sudo[96093]: pam_unix(sudo:session): session closed for user root
Dec 02 08:49:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:49:45 np0005541914.localdomain podman[96108]: 2025-12-02 08:49:45.082092016 +0000 UTC m=+0.089129836 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, io.buildah.version=1.41.4, container_name=collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:49:45 np0005541914.localdomain podman[96108]: 2025-12-02 08:49:45.123837532 +0000 UTC m=+0.130875332 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, config_id=tripleo_step3, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, version=17.1.12, vcs-type=git, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Dec 02 08:49:45 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:49:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:49:48 np0005541914.localdomain podman[96128]: 2025-12-02 08:49:48.056478717 +0000 UTC m=+0.063462831 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, architecture=x86_64, tcib_managed=true, container_name=iscsid, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:49:48 np0005541914.localdomain podman[96128]: 2025-12-02 08:49:48.069163995 +0000 UTC m=+0.076148119 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1761123044, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git)
Dec 02 08:49:48 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:49:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:50:00 np0005541914.localdomain podman[96147]: 2025-12-02 08:50:00.079484527 +0000 UTC m=+0.078818351 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, container_name=metrics_qdr, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, url=https://www.redhat.com, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:50:00 np0005541914.localdomain podman[96147]: 2025-12-02 08:50:00.274674147 +0000 UTC m=+0.274008061 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, distribution-scope=public, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true)
Dec 02 08:50:00 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:50:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:50:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:50:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:50:05 np0005541914.localdomain podman[96176]: 2025-12-02 08:50:05.077317588 +0000 UTC m=+0.082195234 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, vcs-type=git, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com)
Dec 02 08:50:05 np0005541914.localdomain podman[96176]: 2025-12-02 08:50:05.089342235 +0000 UTC m=+0.094219941 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, container_name=logrotate_crond, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, config_id=tripleo_step4, architecture=x86_64)
Dec 02 08:50:05 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:50:05 np0005541914.localdomain podman[96178]: 2025-12-02 08:50:05.137699355 +0000 UTC m=+0.139293111 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 02 08:50:05 np0005541914.localdomain podman[96177]: 2025-12-02 08:50:05.185131375 +0000 UTC m=+0.189112694 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, vcs-type=git, url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:50:05 np0005541914.localdomain podman[96177]: 2025-12-02 08:50:05.213215694 +0000 UTC m=+0.217197013 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:50:05 np0005541914.localdomain podman[96178]: 2025-12-02 08:50:05.214780242 +0000 UTC m=+0.216373948 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi)
Dec 02 08:50:05 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:50:05 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:50:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:50:08 np0005541914.localdomain podman[96248]: 2025-12-02 08:50:08.075859947 +0000 UTC m=+0.078714418 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:50:08 np0005541914.localdomain podman[96248]: 2025-12-02 08:50:08.103831792 +0000 UTC m=+0.106686253 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12)
Dec 02 08:50:08 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:50:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:50:09 np0005541914.localdomain systemd[1]: tmp-crun.770tkU.mount: Deactivated successfully.
Dec 02 08:50:09 np0005541914.localdomain podman[96274]: 2025-12-02 08:50:09.06770596 +0000 UTC m=+0.079686179 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, container_name=nova_migration_target, batch=17.1_20251118.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com)
Dec 02 08:50:09 np0005541914.localdomain podman[96274]: 2025-12-02 08:50:09.461911334 +0000 UTC m=+0.473891573 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.buildah.version=1.41.4, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=)
Dec 02 08:50:09 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:50:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:50:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:50:12 np0005541914.localdomain podman[96297]: 2025-12-02 08:50:12.072371516 +0000 UTC m=+0.080499483 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Dec 02 08:50:12 np0005541914.localdomain podman[96297]: 2025-12-02 08:50:12.08884597 +0000 UTC m=+0.096973927 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public)
Dec 02 08:50:12 np0005541914.localdomain podman[96297]: unhealthy
Dec 02 08:50:12 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:50:12 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:50:12 np0005541914.localdomain systemd[1]: tmp-crun.d5lPCW.mount: Deactivated successfully.
Dec 02 08:50:12 np0005541914.localdomain podman[96298]: 2025-12-02 08:50:12.176704627 +0000 UTC m=+0.179010056 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:50:12 np0005541914.localdomain podman[96298]: 2025-12-02 08:50:12.192332175 +0000 UTC m=+0.194637594 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, batch=17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:50:12 np0005541914.localdomain podman[96298]: unhealthy
Dec 02 08:50:12 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:50:12 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:50:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:50:16 np0005541914.localdomain systemd[1]: tmp-crun.vmNpRx.mount: Deactivated successfully.
Dec 02 08:50:16 np0005541914.localdomain podman[96338]: 2025-12-02 08:50:16.069415312 +0000 UTC m=+0.075760958 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, config_id=tripleo_step3, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, architecture=x86_64, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:50:16 np0005541914.localdomain podman[96338]: 2025-12-02 08:50:16.083040149 +0000 UTC m=+0.089385785 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Dec 02 08:50:16 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:50:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:50:19 np0005541914.localdomain podman[96359]: 2025-12-02 08:50:19.066633462 +0000 UTC m=+0.075511530 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=iscsid, distribution-scope=public, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:50:19 np0005541914.localdomain podman[96359]: 2025-12-02 08:50:19.075817693 +0000 UTC m=+0.084695771 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:50:19 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:50:24 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:50:25 np0005541914.localdomain recover_tripleo_nova_virtqemud[96379]: 61907
Dec 02 08:50:25 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:50:25 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:50:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:50:31 np0005541914.localdomain systemd[1]: tmp-crun.82FxN6.mount: Deactivated successfully.
Dec 02 08:50:31 np0005541914.localdomain podman[96380]: 2025-12-02 08:50:31.086995733 +0000 UTC m=+0.092834811 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, release=1761123044, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:50:31 np0005541914.localdomain podman[96380]: 2025-12-02 08:50:31.277372095 +0000 UTC m=+0.283211233 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 02 08:50:31 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:50:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:50:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:50:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:50:36 np0005541914.localdomain podman[96410]: 2025-12-02 08:50:36.089782815 +0000 UTC m=+0.091775247 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_id=tripleo_step4)
Dec 02 08:50:36 np0005541914.localdomain podman[96410]: 2025-12-02 08:50:36.0997304 +0000 UTC m=+0.101722842 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, tcib_managed=true, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:50:36 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:50:36 np0005541914.localdomain podman[96412]: 2025-12-02 08:50:36.140803876 +0000 UTC m=+0.137668261 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, architecture=x86_64, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible)
Dec 02 08:50:36 np0005541914.localdomain podman[96411]: 2025-12-02 08:50:36.062601715 +0000 UTC m=+0.067770194 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 02 08:50:36 np0005541914.localdomain podman[96412]: 2025-12-02 08:50:36.173963951 +0000 UTC m=+0.170828316 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1761123044, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, tcib_managed=true)
Dec 02 08:50:36 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:50:36 np0005541914.localdomain podman[96411]: 2025-12-02 08:50:36.197101768 +0000 UTC m=+0.202270287 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12)
Dec 02 08:50:36 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:50:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:50:39 np0005541914.localdomain podman[96482]: 2025-12-02 08:50:39.061538445 +0000 UTC m=+0.065216995 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.expose-services=, vcs-type=git, release=1761123044, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, config_id=tripleo_step5, io.buildah.version=1.41.4)
Dec 02 08:50:39 np0005541914.localdomain podman[96482]: 2025-12-02 08:50:39.118919901 +0000 UTC m=+0.122598411 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1)
Dec 02 08:50:39 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:50:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:50:40 np0005541914.localdomain podman[96508]: 2025-12-02 08:50:40.079690103 +0000 UTC m=+0.084891358 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, version=17.1.12, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, vcs-type=git, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1761123044, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Dec 02 08:50:40 np0005541914.localdomain podman[96508]: 2025-12-02 08:50:40.385806444 +0000 UTC m=+0.391007689 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., version=17.1.12, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.buildah.version=1.41.4, distribution-scope=public)
Dec 02 08:50:40 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:50:42 np0005541914.localdomain sudo[96531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:50:42 np0005541914.localdomain sudo[96531]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:50:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:50:42 np0005541914.localdomain sudo[96531]: pam_unix(sudo:session): session closed for user root
Dec 02 08:50:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:50:42 np0005541914.localdomain sudo[96563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 08:50:42 np0005541914.localdomain sudo[96563]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:50:42 np0005541914.localdomain podman[96547]: 2025-12-02 08:50:42.826069731 +0000 UTC m=+0.072136838 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:50:42 np0005541914.localdomain podman[96547]: 2025-12-02 08:50:42.837705146 +0000 UTC m=+0.083772183 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, distribution-scope=public, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible)
Dec 02 08:50:42 np0005541914.localdomain podman[96547]: unhealthy
Dec 02 08:50:42 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:50:42 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:50:42 np0005541914.localdomain podman[96545]: 2025-12-02 08:50:42.885495958 +0000 UTC m=+0.130375358 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 02 08:50:42 np0005541914.localdomain podman[96545]: 2025-12-02 08:50:42.897590408 +0000 UTC m=+0.142469788 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.12, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:50:42 np0005541914.localdomain podman[96545]: unhealthy
Dec 02 08:50:42 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:50:42 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:50:43 np0005541914.localdomain podman[96670]: 2025-12-02 08:50:43.581520244 +0000 UTC m=+0.059497040 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, vcs-type=git, architecture=x86_64, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main)
Dec 02 08:50:43 np0005541914.localdomain podman[96670]: 2025-12-02 08:50:43.672740133 +0000 UTC m=+0.150716939 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, name=rhceph, version=7, ceph=True, vendor=Red Hat, Inc.)
Dec 02 08:50:43 np0005541914.localdomain sudo[96563]: pam_unix(sudo:session): session closed for user root
Dec 02 08:50:43 np0005541914.localdomain sudo[96736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:50:43 np0005541914.localdomain sudo[96736]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:50:43 np0005541914.localdomain sudo[96736]: pam_unix(sudo:session): session closed for user root
Dec 02 08:50:44 np0005541914.localdomain sudo[96751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:50:44 np0005541914.localdomain sudo[96751]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:50:44 np0005541914.localdomain sudo[96751]: pam_unix(sudo:session): session closed for user root
Dec 02 08:50:45 np0005541914.localdomain sudo[96800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:50:45 np0005541914.localdomain sudo[96800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:50:45 np0005541914.localdomain sudo[96800]: pam_unix(sudo:session): session closed for user root
Dec 02 08:50:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:50:47 np0005541914.localdomain podman[96815]: 2025-12-02 08:50:47.057781923 +0000 UTC m=+0.065855806 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, vcs-type=git, container_name=collectd, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:50:47 np0005541914.localdomain podman[96815]: 2025-12-02 08:50:47.064497527 +0000 UTC m=+0.072571400 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:50:47 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:50:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:50:50 np0005541914.localdomain systemd[1]: tmp-crun.IlDVRv.mount: Deactivated successfully.
Dec 02 08:50:50 np0005541914.localdomain podman[96835]: 2025-12-02 08:50:50.084794292 +0000 UTC m=+0.087801795 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:50:50 np0005541914.localdomain podman[96835]: 2025-12-02 08:50:50.120089192 +0000 UTC m=+0.123096765 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 02 08:50:50 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:51:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:51:02 np0005541914.localdomain podman[96854]: 2025-12-02 08:51:02.087550476 +0000 UTC m=+0.091794618 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, vcs-type=git, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:51:02 np0005541914.localdomain podman[96854]: 2025-12-02 08:51:02.309982848 +0000 UTC m=+0.314226990 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.)
Dec 02 08:51:02 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:51:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:51:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:51:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:51:07 np0005541914.localdomain systemd[1]: tmp-crun.R9ao15.mount: Deactivated successfully.
Dec 02 08:51:07 np0005541914.localdomain podman[96884]: 2025-12-02 08:51:07.08158644 +0000 UTC m=+0.077807079 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, release=1761123044, build-date=2025-11-18T22:49:32Z, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git)
Dec 02 08:51:07 np0005541914.localdomain podman[96884]: 2025-12-02 08:51:07.12015489 +0000 UTC m=+0.116375539 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:49:32Z, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git)
Dec 02 08:51:07 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:51:07 np0005541914.localdomain podman[96885]: 2025-12-02 08:51:07.143951718 +0000 UTC m=+0.139237779 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:51:07 np0005541914.localdomain podman[96886]: 2025-12-02 08:51:07.154090998 +0000 UTC m=+0.145223023 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, maintainer=OpenStack TripleO Team, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 02 08:51:07 np0005541914.localdomain podman[96885]: 2025-12-02 08:51:07.181878218 +0000 UTC m=+0.177164309 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container)
Dec 02 08:51:07 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:51:07 np0005541914.localdomain podman[96886]: 2025-12-02 08:51:07.21398814 +0000 UTC m=+0.205120175 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12)
Dec 02 08:51:07 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:51:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:51:10 np0005541914.localdomain podman[96953]: 2025-12-02 08:51:10.088361581 +0000 UTC m=+0.085465485 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, batch=17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vendor=Red Hat, Inc.)
Dec 02 08:51:10 np0005541914.localdomain podman[96953]: 2025-12-02 08:51:10.14422922 +0000 UTC m=+0.141333094 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, batch=17.1_20251118.1)
Dec 02 08:51:10 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:51:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:51:11 np0005541914.localdomain podman[96979]: 2025-12-02 08:51:11.071652351 +0000 UTC m=+0.075732727 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:51:11 np0005541914.localdomain podman[96979]: 2025-12-02 08:51:11.459017267 +0000 UTC m=+0.463097673 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container)
Dec 02 08:51:11 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:51:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:51:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:51:13 np0005541914.localdomain systemd[1]: tmp-crun.eVyJKp.mount: Deactivated successfully.
Dec 02 08:51:13 np0005541914.localdomain podman[97002]: 2025-12-02 08:51:13.084503586 +0000 UTC m=+0.081135282 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, tcib_managed=true, architecture=x86_64, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:51:13 np0005541914.localdomain podman[97003]: 2025-12-02 08:51:13.14577087 +0000 UTC m=+0.139952281 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:51:13 np0005541914.localdomain podman[97002]: 2025-12-02 08:51:13.174262641 +0000 UTC m=+0.170894387 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:51:13 np0005541914.localdomain podman[97002]: unhealthy
Dec 02 08:51:13 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:51:13 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:51:13 np0005541914.localdomain podman[97003]: 2025-12-02 08:51:13.187718053 +0000 UTC m=+0.181899484 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 02 08:51:13 np0005541914.localdomain podman[97003]: unhealthy
Dec 02 08:51:13 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:51:13 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:51:13 np0005541914.localdomain sshd[97042]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:51:13 np0005541914.localdomain sshd[97042]: Invalid user pbanx from 45.148.10.240 port 46668
Dec 02 08:51:13 np0005541914.localdomain sshd[97042]: Connection closed by invalid user pbanx 45.148.10.240 port 46668 [preauth]
Dec 02 08:51:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:51:18 np0005541914.localdomain systemd[1]: tmp-crun.80E22n.mount: Deactivated successfully.
Dec 02 08:51:18 np0005541914.localdomain podman[97044]: 2025-12-02 08:51:18.091681743 +0000 UTC m=+0.091880381 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T22:51:28Z, container_name=collectd, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, tcib_managed=true, config_id=tripleo_step3, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc.)
Dec 02 08:51:18 np0005541914.localdomain podman[97044]: 2025-12-02 08:51:18.107091344 +0000 UTC m=+0.107289972 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:51:18 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:51:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:51:21 np0005541914.localdomain systemd[1]: tmp-crun.0qo9ek.mount: Deactivated successfully.
Dec 02 08:51:21 np0005541914.localdomain podman[97064]: 2025-12-02 08:51:21.077071861 +0000 UTC m=+0.082122962 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:51:21 np0005541914.localdomain podman[97064]: 2025-12-02 08:51:21.088966786 +0000 UTC m=+0.094017857 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, container_name=iscsid, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Dec 02 08:51:21 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:51:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:51:33 np0005541914.localdomain podman[97084]: 2025-12-02 08:51:33.069944131 +0000 UTC m=+0.076607723 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:51:33 np0005541914.localdomain podman[97084]: 2025-12-02 08:51:33.307488826 +0000 UTC m=+0.314152378 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step1, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:51:33 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:51:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:51:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:51:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:51:38 np0005541914.localdomain systemd[1]: tmp-crun.RIVuUS.mount: Deactivated successfully.
Dec 02 08:51:38 np0005541914.localdomain podman[97115]: 2025-12-02 08:51:38.086140975 +0000 UTC m=+0.090490528 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, distribution-scope=public, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=logrotate_crond, release=1761123044, batch=17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:51:38 np0005541914.localdomain podman[97115]: 2025-12-02 08:51:38.121837327 +0000 UTC m=+0.126186830 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp17/openstack-cron, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:51:38 np0005541914.localdomain systemd[1]: tmp-crun.jxRd6Y.mount: Deactivated successfully.
Dec 02 08:51:38 np0005541914.localdomain podman[97117]: 2025-12-02 08:51:38.133287137 +0000 UTC m=+0.132625258 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi)
Dec 02 08:51:38 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:51:38 np0005541914.localdomain podman[97116]: 2025-12-02 08:51:38.195250362 +0000 UTC m=+0.195017236 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:51:38 np0005541914.localdomain podman[97117]: 2025-12-02 08:51:38.218848663 +0000 UTC m=+0.218186764 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, release=1761123044, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 02 08:51:38 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:51:38 np0005541914.localdomain podman[97116]: 2025-12-02 08:51:38.255958379 +0000 UTC m=+0.255725203 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z)
Dec 02 08:51:38 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:51:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:51:41 np0005541914.localdomain systemd[1]: tmp-crun.HOAGpz.mount: Deactivated successfully.
Dec 02 08:51:41 np0005541914.localdomain podman[97188]: 2025-12-02 08:51:41.065156998 +0000 UTC m=+0.072870980 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, release=1761123044, container_name=nova_compute, url=https://www.redhat.com)
Dec 02 08:51:41 np0005541914.localdomain podman[97188]: 2025-12-02 08:51:41.093049861 +0000 UTC m=+0.100763883 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc.)
Dec 02 08:51:41 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:51:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:51:42 np0005541914.localdomain systemd[1]: tmp-crun.Fccq3R.mount: Deactivated successfully.
Dec 02 08:51:42 np0005541914.localdomain podman[97215]: 2025-12-02 08:51:42.089563466 +0000 UTC m=+0.094201892 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 02 08:51:42 np0005541914.localdomain podman[97215]: 2025-12-02 08:51:42.527116897 +0000 UTC m=+0.531755323 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-type=git, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Dec 02 08:51:42 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:51:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:51:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:51:44 np0005541914.localdomain podman[97240]: 2025-12-02 08:51:44.083956258 +0000 UTC m=+0.084046451 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, container_name=ovn_controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, tcib_managed=true, vcs-type=git, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, io.buildah.version=1.41.4)
Dec 02 08:51:44 np0005541914.localdomain podman[97240]: 2025-12-02 08:51:44.123919 +0000 UTC m=+0.124009193 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, container_name=ovn_controller, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4)
Dec 02 08:51:44 np0005541914.localdomain podman[97240]: unhealthy
Dec 02 08:51:44 np0005541914.localdomain systemd[1]: tmp-crun.PyrZ7M.mount: Deactivated successfully.
Dec 02 08:51:44 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:51:44 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:51:44 np0005541914.localdomain podman[97239]: 2025-12-02 08:51:44.143746186 +0000 UTC m=+0.148378469 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, build-date=2025-11-19T00:14:25Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc.)
Dec 02 08:51:44 np0005541914.localdomain podman[97239]: 2025-12-02 08:51:44.185939196 +0000 UTC m=+0.190571429 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, distribution-scope=public, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4)
Dec 02 08:51:44 np0005541914.localdomain podman[97239]: unhealthy
Dec 02 08:51:44 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:51:44 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:51:45 np0005541914.localdomain sudo[97280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:51:45 np0005541914.localdomain sudo[97280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:51:45 np0005541914.localdomain sudo[97280]: pam_unix(sudo:session): session closed for user root
Dec 02 08:51:45 np0005541914.localdomain sudo[97295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:51:45 np0005541914.localdomain sudo[97295]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:51:46 np0005541914.localdomain sudo[97295]: pam_unix(sudo:session): session closed for user root
Dec 02 08:51:47 np0005541914.localdomain sudo[97342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:51:47 np0005541914.localdomain sudo[97342]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:51:47 np0005541914.localdomain sudo[97342]: pam_unix(sudo:session): session closed for user root
Dec 02 08:51:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:51:49 np0005541914.localdomain podman[97357]: 2025-12-02 08:51:49.086666388 +0000 UTC m=+0.087786386 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, name=rhosp17/openstack-collectd, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container)
Dec 02 08:51:49 np0005541914.localdomain podman[97357]: 2025-12-02 08:51:49.098022326 +0000 UTC m=+0.099142374 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-type=git, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:51:49 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:51:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:51:52 np0005541914.localdomain systemd[1]: tmp-crun.zX6zu2.mount: Deactivated successfully.
Dec 02 08:51:52 np0005541914.localdomain podman[97377]: 2025-12-02 08:51:52.061832174 +0000 UTC m=+0.069017202 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, container_name=iscsid, version=17.1.12, config_id=tripleo_step3, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044)
Dec 02 08:51:52 np0005541914.localdomain podman[97377]: 2025-12-02 08:51:52.096621067 +0000 UTC m=+0.103806035 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Dec 02 08:51:52 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:52:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:52:03 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:52:04 np0005541914.localdomain recover_tripleo_nova_virtqemud[97398]: 61907
Dec 02 08:52:04 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:52:04 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:52:04 np0005541914.localdomain podman[97396]: 2025-12-02 08:52:04.095401247 +0000 UTC m=+0.092702675 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, name=rhosp17/openstack-qdrouterd, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Dec 02 08:52:04 np0005541914.localdomain podman[97396]: 2025-12-02 08:52:04.305803282 +0000 UTC m=+0.303104700 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:52:04 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:52:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:52:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:52:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:52:09 np0005541914.localdomain podman[97428]: 2025-12-02 08:52:09.094193278 +0000 UTC m=+0.093643415 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team)
Dec 02 08:52:09 np0005541914.localdomain podman[97430]: 2025-12-02 08:52:09.14949676 +0000 UTC m=+0.141334914 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git)
Dec 02 08:52:09 np0005541914.localdomain podman[97428]: 2025-12-02 08:52:09.162007742 +0000 UTC m=+0.161457919 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, container_name=logrotate_crond, url=https://www.redhat.com, release=1761123044, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z)
Dec 02 08:52:09 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:52:09 np0005541914.localdomain podman[97430]: 2025-12-02 08:52:09.182803397 +0000 UTC m=+0.174641501 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.buildah.version=1.41.4, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, release=1761123044, vendor=Red Hat, Inc., version=17.1.12, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=)
Dec 02 08:52:09 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:52:09 np0005541914.localdomain podman[97429]: 2025-12-02 08:52:09.239358118 +0000 UTC m=+0.236276158 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, tcib_managed=true, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:52:09 np0005541914.localdomain podman[97429]: 2025-12-02 08:52:09.275136721 +0000 UTC m=+0.272054761 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=)
Dec 02 08:52:09 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:52:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:52:12 np0005541914.localdomain podman[97500]: 2025-12-02 08:52:12.078805062 +0000 UTC m=+0.080098480 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, container_name=nova_compute, io.buildah.version=1.41.4)
Dec 02 08:52:12 np0005541914.localdomain podman[97500]: 2025-12-02 08:52:12.129911744 +0000 UTC m=+0.131205212 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, config_id=tripleo_step5, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, build-date=2025-11-19T00:36:58Z, vcs-type=git, name=rhosp17/openstack-nova-compute)
Dec 02 08:52:12 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:52:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:52:13 np0005541914.localdomain podman[97526]: 2025-12-02 08:52:13.081718822 +0000 UTC m=+0.083442833 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:52:13 np0005541914.localdomain podman[97526]: 2025-12-02 08:52:13.498837448 +0000 UTC m=+0.500561499 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute)
Dec 02 08:52:13 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:52:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:52:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:52:15 np0005541914.localdomain systemd[1]: tmp-crun.OxRZuW.mount: Deactivated successfully.
Dec 02 08:52:15 np0005541914.localdomain podman[97551]: 2025-12-02 08:52:15.087921575 +0000 UTC m=+0.089284691 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git)
Dec 02 08:52:15 np0005541914.localdomain podman[97551]: 2025-12-02 08:52:15.134032406 +0000 UTC m=+0.135395492 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 02 08:52:15 np0005541914.localdomain podman[97551]: unhealthy
Dec 02 08:52:15 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:52:15 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:52:15 np0005541914.localdomain podman[97550]: 2025-12-02 08:52:15.13482157 +0000 UTC m=+0.139094585 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044)
Dec 02 08:52:15 np0005541914.localdomain podman[97550]: 2025-12-02 08:52:15.219160579 +0000 UTC m=+0.223433634 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 02 08:52:15 np0005541914.localdomain podman[97550]: unhealthy
Dec 02 08:52:15 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:52:15 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:52:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:52:20 np0005541914.localdomain podman[97591]: 2025-12-02 08:52:20.069649255 +0000 UTC m=+0.077915804 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, container_name=collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git)
Dec 02 08:52:20 np0005541914.localdomain podman[97591]: 2025-12-02 08:52:20.08093142 +0000 UTC m=+0.089197989 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=collectd, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc.)
Dec 02 08:52:20 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:52:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:52:23 np0005541914.localdomain podman[97611]: 2025-12-02 08:52:23.086520995 +0000 UTC m=+0.090177469 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, config_id=tripleo_step3, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, batch=17.1_20251118.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:52:23 np0005541914.localdomain podman[97611]: 2025-12-02 08:52:23.126337003 +0000 UTC m=+0.129993347 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.41.4, config_id=tripleo_step3, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 02 08:52:23 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:52:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:52:35 np0005541914.localdomain systemd[1]: tmp-crun.0PKPGu.mount: Deactivated successfully.
Dec 02 08:52:35 np0005541914.localdomain podman[97631]: 2025-12-02 08:52:35.098225111 +0000 UTC m=+0.103922899 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, container_name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible)
Dec 02 08:52:35 np0005541914.localdomain podman[97631]: 2025-12-02 08:52:35.319684324 +0000 UTC m=+0.325382072 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.buildah.version=1.41.4, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com)
Dec 02 08:52:35 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:52:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:52:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:52:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:52:40 np0005541914.localdomain podman[97662]: 2025-12-02 08:52:40.094831556 +0000 UTC m=+0.089278551 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.4)
Dec 02 08:52:40 np0005541914.localdomain systemd[1]: tmp-crun.x8WbSb.mount: Deactivated successfully.
Dec 02 08:52:40 np0005541914.localdomain podman[97662]: 2025-12-02 08:52:40.177308798 +0000 UTC m=+0.171755833 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4)
Dec 02 08:52:40 np0005541914.localdomain podman[97660]: 2025-12-02 08:52:40.189044618 +0000 UTC m=+0.189112725 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, batch=17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-cron-container, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:52:40 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:52:40 np0005541914.localdomain podman[97660]: 2025-12-02 08:52:40.200780067 +0000 UTC m=+0.200848174 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:52:40 np0005541914.localdomain podman[97661]: 2025-12-02 08:52:40.159105352 +0000 UTC m=+0.154672191 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc.)
Dec 02 08:52:40 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:52:40 np0005541914.localdomain podman[97661]: 2025-12-02 08:52:40.241967845 +0000 UTC m=+0.237534654 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=)
Dec 02 08:52:40 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:52:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:52:43 np0005541914.localdomain podman[97731]: 2025-12-02 08:52:43.075800738 +0000 UTC m=+0.079062309 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-19T00:36:58Z)
Dec 02 08:52:43 np0005541914.localdomain podman[97731]: 2025-12-02 08:52:43.129813449 +0000 UTC m=+0.133074970 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:52:43 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:52:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:52:44 np0005541914.localdomain systemd[1]: tmp-crun.rZZbDq.mount: Deactivated successfully.
Dec 02 08:52:44 np0005541914.localdomain podman[97757]: 2025-12-02 08:52:44.086537518 +0000 UTC m=+0.094634836 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:52:44 np0005541914.localdomain podman[97757]: 2025-12-02 08:52:44.494162063 +0000 UTC m=+0.502259331 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute)
Dec 02 08:52:44 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:52:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:52:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:52:46 np0005541914.localdomain systemd[1]: tmp-crun.ConUxC.mount: Deactivated successfully.
Dec 02 08:52:46 np0005541914.localdomain podman[97779]: 2025-12-02 08:52:46.097650551 +0000 UTC m=+0.092712476 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, container_name=ovn_metadata_agent, distribution-scope=public, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:52:46 np0005541914.localdomain podman[97780]: 2025-12-02 08:52:46.144285836 +0000 UTC m=+0.137569658 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, tcib_managed=true, io.buildah.version=1.41.4, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1761123044, version=17.1.12, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:52:46 np0005541914.localdomain podman[97780]: 2025-12-02 08:52:46.162808223 +0000 UTC m=+0.156092095 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, version=17.1.12)
Dec 02 08:52:46 np0005541914.localdomain podman[97780]: unhealthy
Dec 02 08:52:46 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:52:46 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:52:46 np0005541914.localdomain podman[97779]: 2025-12-02 08:52:46.217823156 +0000 UTC m=+0.212885071 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4)
Dec 02 08:52:46 np0005541914.localdomain podman[97779]: unhealthy
Dec 02 08:52:46 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:52:46 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:52:47 np0005541914.localdomain sudo[97818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:52:47 np0005541914.localdomain sudo[97818]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:52:47 np0005541914.localdomain sudo[97818]: pam_unix(sudo:session): session closed for user root
Dec 02 08:52:47 np0005541914.localdomain sudo[97833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:52:47 np0005541914.localdomain sudo[97833]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:52:47 np0005541914.localdomain sudo[97833]: pam_unix(sudo:session): session closed for user root
Dec 02 08:52:48 np0005541914.localdomain sudo[97880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:52:48 np0005541914.localdomain sudo[97880]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:52:48 np0005541914.localdomain sudo[97880]: pam_unix(sudo:session): session closed for user root
Dec 02 08:52:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:52:51 np0005541914.localdomain podman[97895]: 2025-12-02 08:52:51.096489292 +0000 UTC m=+0.088795797 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-18T22:51:28Z, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, config_id=tripleo_step3, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:52:51 np0005541914.localdomain podman[97895]: 2025-12-02 08:52:51.141063345 +0000 UTC m=+0.133369850 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.12, build-date=2025-11-18T22:51:28Z, container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, batch=17.1_20251118.1)
Dec 02 08:52:51 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:52:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:52:54 np0005541914.localdomain podman[97915]: 2025-12-02 08:52:54.088948095 +0000 UTC m=+0.092814919 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 02 08:52:54 np0005541914.localdomain podman[97915]: 2025-12-02 08:52:54.104863712 +0000 UTC m=+0.108730536 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1)
Dec 02 08:52:54 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:53:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:53:06 np0005541914.localdomain systemd[1]: tmp-crun.IN5p09.mount: Deactivated successfully.
Dec 02 08:53:06 np0005541914.localdomain podman[97933]: 2025-12-02 08:53:06.091760419 +0000 UTC m=+0.097051439 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, release=1761123044, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true)
Dec 02 08:53:06 np0005541914.localdomain podman[97933]: 2025-12-02 08:53:06.269915798 +0000 UTC m=+0.275206858 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64)
Dec 02 08:53:06 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:53:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:53:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:53:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:53:11 np0005541914.localdomain systemd[1]: tmp-crun.odIgXI.mount: Deactivated successfully.
Dec 02 08:53:11 np0005541914.localdomain podman[97961]: 2025-12-02 08:53:11.077542613 +0000 UTC m=+0.086021762 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, container_name=logrotate_crond, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:32Z, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4, release=1761123044)
Dec 02 08:53:11 np0005541914.localdomain systemd[1]: tmp-crun.vtOEff.mount: Deactivated successfully.
Dec 02 08:53:11 np0005541914.localdomain podman[97962]: 2025-12-02 08:53:11.114189733 +0000 UTC m=+0.114980707 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com)
Dec 02 08:53:11 np0005541914.localdomain podman[97961]: 2025-12-02 08:53:11.118760242 +0000 UTC m=+0.127239431 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp17/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, release=1761123044)
Dec 02 08:53:11 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:53:11 np0005541914.localdomain podman[97968]: 2025-12-02 08:53:11.199796881 +0000 UTC m=+0.193734686 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:53:11 np0005541914.localdomain podman[97962]: 2025-12-02 08:53:11.223399993 +0000 UTC m=+0.224190917 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team)
Dec 02 08:53:11 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:53:11 np0005541914.localdomain podman[97968]: 2025-12-02 08:53:11.255160254 +0000 UTC m=+0.249098029 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:53:11 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:53:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:53:14 np0005541914.localdomain podman[98034]: 2025-12-02 08:53:14.081820198 +0000 UTC m=+0.083407342 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, release=1761123044, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_id=tripleo_step5, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 02 08:53:14 np0005541914.localdomain podman[98034]: 2025-12-02 08:53:14.109795124 +0000 UTC m=+0.111382278 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute)
Dec 02 08:53:14 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:53:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:53:15 np0005541914.localdomain podman[98062]: 2025-12-02 08:53:15.083172871 +0000 UTC m=+0.083144463 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container)
Dec 02 08:53:15 np0005541914.localdomain podman[98062]: 2025-12-02 08:53:15.458992474 +0000 UTC m=+0.458964076 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z)
Dec 02 08:53:15 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:53:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:53:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:53:17 np0005541914.localdomain podman[98084]: 2025-12-02 08:53:17.077867502 +0000 UTC m=+0.083806034 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ovn_metadata_agent)
Dec 02 08:53:17 np0005541914.localdomain podman[98084]: 2025-12-02 08:53:17.121092463 +0000 UTC m=+0.127030985 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, version=17.1.12, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack TripleO Team)
Dec 02 08:53:17 np0005541914.localdomain podman[98084]: unhealthy
Dec 02 08:53:17 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:53:17 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:53:17 np0005541914.localdomain podman[98085]: 2025-12-02 08:53:17.140008362 +0000 UTC m=+0.142916601 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, container_name=ovn_controller, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec 02 08:53:17 np0005541914.localdomain podman[98085]: 2025-12-02 08:53:17.15793147 +0000 UTC m=+0.160839749 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, build-date=2025-11-18T23:34:05Z, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=ovn_controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:53:17 np0005541914.localdomain podman[98085]: unhealthy
Dec 02 08:53:17 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:53:17 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:53:21 np0005541914.localdomain sshd[98126]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:53:21 np0005541914.localdomain sshd[98126]: Invalid user banxgg from 45.148.10.240 port 57910
Dec 02 08:53:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:53:21 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:53:22 np0005541914.localdomain recover_tripleo_nova_virtqemud[98135]: 61907
Dec 02 08:53:22 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:53:22 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:53:22 np0005541914.localdomain sshd[98126]: Connection closed by invalid user banxgg 45.148.10.240 port 57910 [preauth]
Dec 02 08:53:22 np0005541914.localdomain podman[98128]: 2025-12-02 08:53:22.062104427 +0000 UTC m=+0.089838999 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, version=17.1.12, batch=17.1_20251118.1, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vcs-type=git)
Dec 02 08:53:22 np0005541914.localdomain podman[98128]: 2025-12-02 08:53:22.102009507 +0000 UTC m=+0.129744099 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1761123044, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, distribution-scope=public, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:53:22 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:53:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:53:25 np0005541914.localdomain podman[98150]: 2025-12-02 08:53:25.079575116 +0000 UTC m=+0.080385170 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 02 08:53:25 np0005541914.localdomain podman[98150]: 2025-12-02 08:53:25.091063627 +0000 UTC m=+0.091873721 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, tcib_managed=true, vcs-type=git, container_name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:53:25 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:53:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:53:37 np0005541914.localdomain podman[98170]: 2025-12-02 08:53:37.086707166 +0000 UTC m=+0.085896768 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, container_name=metrics_qdr, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, vcs-type=git, url=https://www.redhat.com)
Dec 02 08:53:37 np0005541914.localdomain podman[98170]: 2025-12-02 08:53:37.314015868 +0000 UTC m=+0.313205450 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.)
Dec 02 08:53:37 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:53:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:53:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:53:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:53:42 np0005541914.localdomain podman[98199]: 2025-12-02 08:53:42.087104937 +0000 UTC m=+0.087861148 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, name=rhosp17/openstack-cron, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond)
Dec 02 08:53:42 np0005541914.localdomain systemd[1]: tmp-crun.qEPWfd.mount: Deactivated successfully.
Dec 02 08:53:42 np0005541914.localdomain podman[98200]: 2025-12-02 08:53:42.14999305 +0000 UTC m=+0.150292517 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1)
Dec 02 08:53:42 np0005541914.localdomain podman[98201]: 2025-12-02 08:53:42.162983138 +0000 UTC m=+0.155668952 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 02 08:53:42 np0005541914.localdomain podman[98199]: 2025-12-02 08:53:42.170290942 +0000 UTC m=+0.171047153 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1)
Dec 02 08:53:42 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:53:42 np0005541914.localdomain podman[98201]: 2025-12-02 08:53:42.19609358 +0000 UTC m=+0.188779384 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Dec 02 08:53:42 np0005541914.localdomain podman[98200]: 2025-12-02 08:53:42.212951226 +0000 UTC m=+0.213250683 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible)
Dec 02 08:53:42 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:53:42 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:53:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:53:45 np0005541914.localdomain systemd[1]: tmp-crun.x3VjrL.mount: Deactivated successfully.
Dec 02 08:53:45 np0005541914.localdomain podman[98270]: 2025-12-02 08:53:45.090538432 +0000 UTC m=+0.091403636 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, version=17.1.12, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:53:45 np0005541914.localdomain podman[98270]: 2025-12-02 08:53:45.122966895 +0000 UTC m=+0.123832109 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1761123044, container_name=nova_compute)
Dec 02 08:53:45 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:53:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:53:46 np0005541914.localdomain podman[98297]: 2025-12-02 08:53:46.071616567 +0000 UTC m=+0.076063737 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, release=1761123044, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, distribution-scope=public, container_name=nova_migration_target, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:53:46 np0005541914.localdomain podman[98297]: 2025-12-02 08:53:46.486482975 +0000 UTC m=+0.490930115 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-type=git, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:53:46 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:53:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:53:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:53:48 np0005541914.localdomain podman[98322]: 2025-12-02 08:53:48.074052931 +0000 UTC m=+0.078006577 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, vcs-type=git)
Dec 02 08:53:48 np0005541914.localdomain podman[98321]: 2025-12-02 08:53:48.136379437 +0000 UTC m=+0.139391754 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 02 08:53:48 np0005541914.localdomain podman[98322]: 2025-12-02 08:53:48.163697802 +0000 UTC m=+0.167651498 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:53:48 np0005541914.localdomain podman[98322]: unhealthy
Dec 02 08:53:48 np0005541914.localdomain podman[98321]: 2025-12-02 08:53:48.175679569 +0000 UTC m=+0.178691866 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, container_name=ovn_metadata_agent, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2025-11-19T00:14:25Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, config_id=tripleo_step4, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:53:48 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:53:48 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:53:48 np0005541914.localdomain podman[98321]: unhealthy
Dec 02 08:53:48 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:53:48 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:53:48 np0005541914.localdomain sudo[98361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:53:48 np0005541914.localdomain sudo[98361]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:53:48 np0005541914.localdomain sudo[98361]: pam_unix(sudo:session): session closed for user root
Dec 02 08:53:48 np0005541914.localdomain sudo[98376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:53:48 np0005541914.localdomain sudo[98376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:53:49 np0005541914.localdomain sudo[98376]: pam_unix(sudo:session): session closed for user root
Dec 02 08:53:50 np0005541914.localdomain sudo[98422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:53:50 np0005541914.localdomain sudo[98422]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:53:50 np0005541914.localdomain sudo[98422]: pam_unix(sudo:session): session closed for user root
Dec 02 08:53:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:53:53 np0005541914.localdomain systemd[1]: tmp-crun.7FJmnJ.mount: Deactivated successfully.
Dec 02 08:53:53 np0005541914.localdomain podman[98437]: 2025-12-02 08:53:53.096940779 +0000 UTC m=+0.100167405 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, maintainer=OpenStack TripleO Team)
Dec 02 08:53:53 np0005541914.localdomain podman[98437]: 2025-12-02 08:53:53.133433125 +0000 UTC m=+0.136659671 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:53:53 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:53:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:53:56 np0005541914.localdomain podman[98456]: 2025-12-02 08:53:56.074123572 +0000 UTC m=+0.081251186 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, release=1761123044, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 02 08:53:56 np0005541914.localdomain podman[98456]: 2025-12-02 08:53:56.087929095 +0000 UTC m=+0.095056679 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:53:56 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:54:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:54:08 np0005541914.localdomain systemd[1]: tmp-crun.aHF7KS.mount: Deactivated successfully.
Dec 02 08:54:08 np0005541914.localdomain podman[98475]: 2025-12-02 08:54:08.069395173 +0000 UTC m=+0.075074518 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, name=rhosp17/openstack-qdrouterd, architecture=x86_64, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:54:08 np0005541914.localdomain podman[98475]: 2025-12-02 08:54:08.243103845 +0000 UTC m=+0.248783180 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:54:08 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:54:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:54:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:54:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:54:13 np0005541914.localdomain systemd[1]: tmp-crun.XScZ6U.mount: Deactivated successfully.
Dec 02 08:54:13 np0005541914.localdomain podman[98504]: 2025-12-02 08:54:13.096544672 +0000 UTC m=+0.098377870 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, batch=17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044)
Dec 02 08:54:13 np0005541914.localdomain podman[98506]: 2025-12-02 08:54:13.15468953 +0000 UTC m=+0.149014658 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, architecture=x86_64, distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 02 08:54:13 np0005541914.localdomain podman[98504]: 2025-12-02 08:54:13.179733116 +0000 UTC m=+0.181566254 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, distribution-scope=public, vcs-type=git, architecture=x86_64, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:54:13 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:54:13 np0005541914.localdomain podman[98506]: 2025-12-02 08:54:13.207803135 +0000 UTC m=+0.202128263 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:54:13 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:54:13 np0005541914.localdomain podman[98505]: 2025-12-02 08:54:13.131203972 +0000 UTC m=+0.129827392 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, version=17.1.12, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vcs-type=git)
Dec 02 08:54:13 np0005541914.localdomain podman[98505]: 2025-12-02 08:54:13.266056917 +0000 UTC m=+0.264680307 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, release=1761123044, config_id=tripleo_step4, maintainer=OpenStack TripleO Team)
Dec 02 08:54:13 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:54:14 np0005541914.localdomain systemd[1]: tmp-crun.DBOL5B.mount: Deactivated successfully.
Dec 02 08:54:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:54:16 np0005541914.localdomain podman[98578]: 2025-12-02 08:54:16.077304895 +0000 UTC m=+0.083033470 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, container_name=nova_compute, version=17.1.12, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Dec 02 08:54:16 np0005541914.localdomain podman[98578]: 2025-12-02 08:54:16.136941599 +0000 UTC m=+0.142670194 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, architecture=x86_64, name=rhosp17/openstack-nova-compute, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5)
Dec 02 08:54:16 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:54:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:54:17 np0005541914.localdomain podman[98607]: 2025-12-02 08:54:17.081362083 +0000 UTC m=+0.084226147 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.buildah.version=1.41.4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, release=1761123044)
Dec 02 08:54:17 np0005541914.localdomain podman[98607]: 2025-12-02 08:54:17.480751037 +0000 UTC m=+0.483615071 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64)
Dec 02 08:54:17 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:54:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:54:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:54:19 np0005541914.localdomain podman[98631]: 2025-12-02 08:54:19.083515377 +0000 UTC m=+0.087318432 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, release=1761123044, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=)
Dec 02 08:54:19 np0005541914.localdomain podman[98631]: 2025-12-02 08:54:19.124653895 +0000 UTC m=+0.128456940 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 02 08:54:19 np0005541914.localdomain podman[98631]: unhealthy
Dec 02 08:54:19 np0005541914.localdomain systemd[1]: tmp-crun.FVX65C.mount: Deactivated successfully.
Dec 02 08:54:19 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:54:19 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:54:19 np0005541914.localdomain podman[98632]: 2025-12-02 08:54:19.154502928 +0000 UTC m=+0.156013032 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12)
Dec 02 08:54:19 np0005541914.localdomain podman[98632]: 2025-12-02 08:54:19.192933843 +0000 UTC m=+0.194443927 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, config_id=tripleo_step4)
Dec 02 08:54:19 np0005541914.localdomain podman[98632]: unhealthy
Dec 02 08:54:19 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:54:19 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:54:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:54:24 np0005541914.localdomain podman[98671]: 2025-12-02 08:54:24.050967721 +0000 UTC m=+0.057528151 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd)
Dec 02 08:54:24 np0005541914.localdomain podman[98671]: 2025-12-02 08:54:24.088913491 +0000 UTC m=+0.095473941 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, tcib_managed=true, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z, com.redhat.component=openstack-collectd-container, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com)
Dec 02 08:54:24 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:54:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:54:27 np0005541914.localdomain systemd[1]: tmp-crun.VMswzS.mount: Deactivated successfully.
Dec 02 08:54:27 np0005541914.localdomain podman[98692]: 2025-12-02 08:54:27.459487036 +0000 UTC m=+0.063213844 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, container_name=iscsid, url=https://www.redhat.com, release=1761123044, architecture=x86_64, name=rhosp17/openstack-iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 02 08:54:27 np0005541914.localdomain podman[98692]: 2025-12-02 08:54:27.471929087 +0000 UTC m=+0.075655945 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, release=1761123044, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, version=17.1.12)
Dec 02 08:54:27 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:54:34 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:54:35 np0005541914.localdomain recover_tripleo_nova_virtqemud[98712]: 61907
Dec 02 08:54:35 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:54:35 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:54:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:54:39 np0005541914.localdomain podman[98713]: 2025-12-02 08:54:39.084592676 +0000 UTC m=+0.090043914 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, tcib_managed=true, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vcs-type=git, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.12)
Dec 02 08:54:39 np0005541914.localdomain podman[98713]: 2025-12-02 08:54:39.288892275 +0000 UTC m=+0.294343503 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=metrics_qdr, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, name=rhosp17/openstack-qdrouterd, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:54:39 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:54:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:54:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:54:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:54:44 np0005541914.localdomain podman[98743]: 2025-12-02 08:54:44.076584052 +0000 UTC m=+0.080495313 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible)
Dec 02 08:54:44 np0005541914.localdomain podman[98745]: 2025-12-02 08:54:44.10170866 +0000 UTC m=+0.095905564 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, version=17.1.12, release=1761123044, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4)
Dec 02 08:54:44 np0005541914.localdomain podman[98745]: 2025-12-02 08:54:44.134872715 +0000 UTC m=+0.129069659 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, build-date=2025-11-19T00:12:45Z, version=17.1.12, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc.)
Dec 02 08:54:44 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:54:44 np0005541914.localdomain podman[98744]: 2025-12-02 08:54:44.201160132 +0000 UTC m=+0.198628256 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-19T00:11:48Z, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 02 08:54:44 np0005541914.localdomain podman[98743]: 2025-12-02 08:54:44.212152268 +0000 UTC m=+0.216063529 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1761123044, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=logrotate_crond, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:54:44 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:54:44 np0005541914.localdomain podman[98744]: 2025-12-02 08:54:44.262944432 +0000 UTC m=+0.260412556 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 02 08:54:44 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:54:45 np0005541914.localdomain systemd[1]: tmp-crun.IfYqNZ.mount: Deactivated successfully.
Dec 02 08:54:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:54:47 np0005541914.localdomain podman[98817]: 2025-12-02 08:54:47.079136392 +0000 UTC m=+0.085701073 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public)
Dec 02 08:54:47 np0005541914.localdomain podman[98817]: 2025-12-02 08:54:47.112840542 +0000 UTC m=+0.119405253 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, config_id=tripleo_step5, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:54:47 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:54:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:54:48 np0005541914.localdomain systemd[1]: tmp-crun.HTXVwM.mount: Deactivated successfully.
Dec 02 08:54:48 np0005541914.localdomain podman[98843]: 2025-12-02 08:54:48.08483749 +0000 UTC m=+0.087904860 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, release=1761123044, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 08:54:48 np0005541914.localdomain podman[98843]: 2025-12-02 08:54:48.46248505 +0000 UTC m=+0.465552470 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, release=1761123044, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute)
Dec 02 08:54:48 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:54:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:54:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:54:50 np0005541914.localdomain podman[98868]: 2025-12-02 08:54:50.064171845 +0000 UTC m=+0.069227918 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.openshift.expose-services=, vcs-type=git, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 02 08:54:50 np0005541914.localdomain podman[98868]: 2025-12-02 08:54:50.07708969 +0000 UTC m=+0.082145743 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12)
Dec 02 08:54:50 np0005541914.localdomain podman[98868]: unhealthy
Dec 02 08:54:50 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:54:50 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:54:50 np0005541914.localdomain systemd[1]: tmp-crun.KfgGO7.mount: Deactivated successfully.
Dec 02 08:54:50 np0005541914.localdomain podman[98867]: 2025-12-02 08:54:50.135652432 +0000 UTC m=+0.137818936 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, release=1761123044, vcs-type=git, container_name=ovn_metadata_agent, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:54:50 np0005541914.localdomain podman[98867]: 2025-12-02 08:54:50.170436235 +0000 UTC m=+0.172602769 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, container_name=ovn_metadata_agent)
Dec 02 08:54:50 np0005541914.localdomain podman[98867]: unhealthy
Dec 02 08:54:50 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:54:50 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:54:50 np0005541914.localdomain sudo[98906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:54:50 np0005541914.localdomain sudo[98906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:54:50 np0005541914.localdomain sudo[98906]: pam_unix(sudo:session): session closed for user root
Dec 02 08:54:50 np0005541914.localdomain sudo[98921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:54:50 np0005541914.localdomain sudo[98921]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:54:51 np0005541914.localdomain sudo[98921]: pam_unix(sudo:session): session closed for user root
Dec 02 08:54:51 np0005541914.localdomain sudo[98967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:54:51 np0005541914.localdomain sudo[98967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:54:51 np0005541914.localdomain sudo[98967]: pam_unix(sudo:session): session closed for user root
Dec 02 08:54:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:54:55 np0005541914.localdomain podman[98982]: 2025-12-02 08:54:55.09801819 +0000 UTC m=+0.097282426 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, vcs-type=git, name=rhosp17/openstack-collectd)
Dec 02 08:54:55 np0005541914.localdomain podman[98982]: 2025-12-02 08:54:55.110313257 +0000 UTC m=+0.109577543 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd)
Dec 02 08:54:55 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:54:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:54:58 np0005541914.localdomain podman[99003]: 2025-12-02 08:54:58.081640731 +0000 UTC m=+0.086851757 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, container_name=iscsid, config_id=tripleo_step3, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64)
Dec 02 08:54:58 np0005541914.localdomain podman[99003]: 2025-12-02 08:54:58.094819495 +0000 UTC m=+0.100030531 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, managed_by=tripleo_ansible, container_name=iscsid, release=1761123044, tcib_managed=true)
Dec 02 08:54:58 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:55:04 np0005541914.localdomain sshd[36030]: Received disconnect from 192.168.122.100 port 55594:11: disconnected by user
Dec 02 08:55:04 np0005541914.localdomain sshd[36030]: Disconnected from user tripleo-admin 192.168.122.100 port 55594
Dec 02 08:55:04 np0005541914.localdomain sshd[36010]: pam_unix(sshd:session): session closed for user tripleo-admin
Dec 02 08:55:04 np0005541914.localdomain systemd[1]: session-28.scope: Deactivated successfully.
Dec 02 08:55:04 np0005541914.localdomain systemd[1]: session-28.scope: Consumed 7min 641ms CPU time.
Dec 02 08:55:04 np0005541914.localdomain systemd-logind[760]: Session 28 logged out. Waiting for processes to exit.
Dec 02 08:55:04 np0005541914.localdomain systemd-logind[760]: Removed session 28.
Dec 02 08:55:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:55:09 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:55:10 np0005541914.localdomain recover_tripleo_nova_virtqemud[99025]: 61907
Dec 02 08:55:10 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:55:10 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:55:10 np0005541914.localdomain podman[99023]: 2025-12-02 08:55:10.097790292 +0000 UTC m=+0.097502643 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, tcib_managed=true, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-qdrouterd, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:55:10 np0005541914.localdomain podman[99023]: 2025-12-02 08:55:10.29783614 +0000 UTC m=+0.297548541 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4)
Dec 02 08:55:10 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:55:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: Stopping User Manager for UID 1003...
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Activating special unit Exit the Session...
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Removed slice User Background Tasks Slice.
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Stopped target Main User Target.
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Stopped target Basic System.
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Stopped target Paths.
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Stopped target Sockets.
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Stopped target Timers.
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Closed D-Bus User Message Bus Socket.
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Stopped Create User's Volatile Files and Directories.
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Removed slice User Application Slice.
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Reached target Shutdown.
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Finished Exit the Session.
Dec 02 08:55:15 np0005541914.localdomain systemd[36014]: Reached target Exit the Session.
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: user@1003.service: Deactivated successfully.
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: Stopped User Manager for UID 1003.
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: user@1003.service: Consumed 5.092s CPU time, read 0B from disk, written 7.0K to disk.
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: Removed slice User Slice of UID 1003.
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: user-1003.slice: Consumed 7min 5.768s CPU time.
Dec 02 08:55:15 np0005541914.localdomain podman[99057]: 2025-12-02 08:55:15.119842435 +0000 UTC m=+0.103940590 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, architecture=x86_64, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Dec 02 08:55:15 np0005541914.localdomain podman[99057]: 2025-12-02 08:55:15.159595881 +0000 UTC m=+0.143694056 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, batch=17.1_20251118.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:55:15 np0005541914.localdomain podman[99055]: 2025-12-02 08:55:15.1664234 +0000 UTC m=+0.150549836 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, distribution-scope=public, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Dec 02 08:55:15 np0005541914.localdomain podman[99056]: 2025-12-02 08:55:15.238918207 +0000 UTC m=+0.222601699 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1761123044, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:55:15 np0005541914.localdomain podman[99055]: 2025-12-02 08:55:15.246373025 +0000 UTC m=+0.230499461 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, container_name=logrotate_crond, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2025-11-18T22:49:32Z)
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:55:15 np0005541914.localdomain podman[99056]: 2025-12-02 08:55:15.295830597 +0000 UTC m=+0.279514079 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:55:15 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:55:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:55:18 np0005541914.localdomain podman[99128]: 2025-12-02 08:55:18.083752391 +0000 UTC m=+0.085117464 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, architecture=x86_64, container_name=nova_compute, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, config_id=tripleo_step5)
Dec 02 08:55:18 np0005541914.localdomain podman[99128]: 2025-12-02 08:55:18.119941988 +0000 UTC m=+0.121307061 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vcs-type=git, config_id=tripleo_step5, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, version=17.1.12, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute)
Dec 02 08:55:18 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:55:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:55:19 np0005541914.localdomain podman[99154]: 2025-12-02 08:55:19.105559302 +0000 UTC m=+0.112672587 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public)
Dec 02 08:55:19 np0005541914.localdomain podman[99154]: 2025-12-02 08:55:19.477829097 +0000 UTC m=+0.484942402 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.4)
Dec 02 08:55:19 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:55:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:55:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:55:21 np0005541914.localdomain podman[99177]: 2025-12-02 08:55:21.075592003 +0000 UTC m=+0.081001757 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, vcs-type=git)
Dec 02 08:55:21 np0005541914.localdomain podman[99177]: 2025-12-02 08:55:21.11568102 +0000 UTC m=+0.121090754 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, release=1761123044, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:55:21 np0005541914.localdomain podman[99177]: unhealthy
Dec 02 08:55:21 np0005541914.localdomain systemd[1]: tmp-crun.BJtz8l.mount: Deactivated successfully.
Dec 02 08:55:21 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:55:21 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:55:21 np0005541914.localdomain podman[99178]: 2025-12-02 08:55:21.135593409 +0000 UTC m=+0.138545619 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, release=1761123044, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 02 08:55:21 np0005541914.localdomain podman[99178]: 2025-12-02 08:55:21.178059377 +0000 UTC m=+0.181011587 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, vendor=Red Hat, Inc., vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, tcib_managed=true, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com)
Dec 02 08:55:21 np0005541914.localdomain podman[99178]: unhealthy
Dec 02 08:55:21 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:55:21 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:55:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:55:26 np0005541914.localdomain podman[99215]: 2025-12-02 08:55:26.079276735 +0000 UTC m=+0.082467033 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=collectd, architecture=x86_64, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step3, tcib_managed=true, build-date=2025-11-18T22:51:28Z, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12)
Dec 02 08:55:26 np0005541914.localdomain podman[99215]: 2025-12-02 08:55:26.119199286 +0000 UTC m=+0.122389584 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, tcib_managed=true, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, build-date=2025-11-18T22:51:28Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, version=17.1.12)
Dec 02 08:55:26 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:55:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:55:29 np0005541914.localdomain podman[99236]: 2025-12-02 08:55:29.080924956 +0000 UTC m=+0.083084702 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, build-date=2025-11-18T23:44:13Z, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, batch=17.1_20251118.1, container_name=iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:55:29 np0005541914.localdomain podman[99236]: 2025-12-02 08:55:29.094037837 +0000 UTC m=+0.096197593 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1761123044, name=rhosp17/openstack-iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, container_name=iscsid, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, vcs-type=git)
Dec 02 08:55:29 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:55:33 np0005541914.localdomain sshd[99255]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:55:33 np0005541914.localdomain sshd[99255]: Invalid user banx from 45.148.10.240 port 45282
Dec 02 08:55:33 np0005541914.localdomain sshd[99255]: Connection closed by invalid user banx 45.148.10.240 port 45282 [preauth]
Dec 02 08:55:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:55:41 np0005541914.localdomain podman[99257]: 2025-12-02 08:55:41.07231471 +0000 UTC m=+0.073349555 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step1, vendor=Red Hat, Inc., container_name=metrics_qdr, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Dec 02 08:55:41 np0005541914.localdomain podman[99257]: 2025-12-02 08:55:41.259823454 +0000 UTC m=+0.260858239 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:55:41 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:55:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:55:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:55:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:55:46 np0005541914.localdomain podman[99286]: 2025-12-02 08:55:46.072158253 +0000 UTC m=+0.075098097 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:55:46 np0005541914.localdomain podman[99286]: 2025-12-02 08:55:46.082114118 +0000 UTC m=+0.085054042 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, build-date=2025-11-18T22:49:32Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 02 08:55:46 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:55:46 np0005541914.localdomain podman[99287]: 2025-12-02 08:55:46.122758971 +0000 UTC m=+0.126467099 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.4, config_id=tripleo_step4, release=1761123044, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, io.openshift.expose-services=, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team)
Dec 02 08:55:46 np0005541914.localdomain podman[99288]: 2025-12-02 08:55:46.139880624 +0000 UTC m=+0.138592139 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team)
Dec 02 08:55:46 np0005541914.localdomain podman[99287]: 2025-12-02 08:55:46.149257292 +0000 UTC m=+0.152965440 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, container_name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:55:46 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:55:46 np0005541914.localdomain podman[99288]: 2025-12-02 08:55:46.199865489 +0000 UTC m=+0.198576994 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.12, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 02 08:55:46 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:55:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:55:49 np0005541914.localdomain podman[99358]: 2025-12-02 08:55:49.082384107 +0000 UTC m=+0.088166698 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, batch=17.1_20251118.1, container_name=nova_compute, release=1761123044, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com)
Dec 02 08:55:49 np0005541914.localdomain podman[99358]: 2025-12-02 08:55:49.117804741 +0000 UTC m=+0.123587322 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=nova_compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:55:49 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:55:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:55:50 np0005541914.localdomain podman[99384]: 2025-12-02 08:55:50.085133695 +0000 UTC m=+0.084324470 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., container_name=nova_migration_target, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, name=rhosp17/openstack-nova-compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Dec 02 08:55:50 np0005541914.localdomain podman[99384]: 2025-12-02 08:55:50.447040233 +0000 UTC m=+0.446231018 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, container_name=nova_migration_target, vcs-type=git, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container)
Dec 02 08:55:50 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:55:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:55:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:55:52 np0005541914.localdomain sudo[99408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:55:52 np0005541914.localdomain sudo[99408]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:55:52 np0005541914.localdomain sudo[99408]: pam_unix(sudo:session): session closed for user root
Dec 02 08:55:52 np0005541914.localdomain systemd[1]: tmp-crun.NPBv2q.mount: Deactivated successfully.
Dec 02 08:55:52 np0005541914.localdomain podman[99421]: 2025-12-02 08:55:52.098183762 +0000 UTC m=+0.103066293 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, url=https://www.redhat.com, version=17.1.12, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, container_name=ovn_metadata_agent, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 02 08:55:52 np0005541914.localdomain podman[99421]: 2025-12-02 08:55:52.111652254 +0000 UTC m=+0.116534745 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git)
Dec 02 08:55:52 np0005541914.localdomain podman[99421]: unhealthy
Dec 02 08:55:52 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:55:52 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:55:52 np0005541914.localdomain sudo[99447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:55:52 np0005541914.localdomain sudo[99447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:55:52 np0005541914.localdomain systemd[1]: tmp-crun.nYNY5u.mount: Deactivated successfully.
Dec 02 08:55:52 np0005541914.localdomain podman[99422]: 2025-12-02 08:55:52.209700792 +0000 UTC m=+0.207344842 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, version=17.1.12, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ovn_controller, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container)
Dec 02 08:55:52 np0005541914.localdomain podman[99422]: 2025-12-02 08:55:52.229858559 +0000 UTC m=+0.227502609 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1761123044, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, architecture=x86_64, config_id=tripleo_step4)
Dec 02 08:55:52 np0005541914.localdomain podman[99422]: unhealthy
Dec 02 08:55:52 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:55:52 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:55:52 np0005541914.localdomain sudo[99447]: pam_unix(sudo:session): session closed for user root
Dec 02 08:55:53 np0005541914.localdomain sudo[99510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:55:53 np0005541914.localdomain sudo[99510]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:55:53 np0005541914.localdomain sudo[99510]: pam_unix(sudo:session): session closed for user root
Dec 02 08:55:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:55:57 np0005541914.localdomain podman[99525]: 2025-12-02 08:55:57.087155903 +0000 UTC m=+0.092206112 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, release=1761123044, container_name=collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, managed_by=tripleo_ansible, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com)
Dec 02 08:55:57 np0005541914.localdomain podman[99525]: 2025-12-02 08:55:57.125389742 +0000 UTC m=+0.130439921 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, name=rhosp17/openstack-collectd)
Dec 02 08:55:57 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:55:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:56:00 np0005541914.localdomain systemd[1]: tmp-crun.tmfG0F.mount: Deactivated successfully.
Dec 02 08:56:00 np0005541914.localdomain podman[99545]: 2025-12-02 08:56:00.08569216 +0000 UTC m=+0.086747344 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, vcs-type=git, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, name=rhosp17/openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, version=17.1.12)
Dec 02 08:56:00 np0005541914.localdomain podman[99545]: 2025-12-02 08:56:00.09749222 +0000 UTC m=+0.098547354 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1)
Dec 02 08:56:00 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:56:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:56:12 np0005541914.localdomain podman[99566]: 2025-12-02 08:56:12.094348901 +0000 UTC m=+0.069246338 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20251118.1, tcib_managed=true, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:56:12 np0005541914.localdomain podman[99566]: 2025-12-02 08:56:12.286607222 +0000 UTC m=+0.261504659 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, config_id=tripleo_step1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:56:12 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:56:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:56:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:56:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:56:17 np0005541914.localdomain systemd[1]: tmp-crun.YosS0P.mount: Deactivated successfully.
Dec 02 08:56:17 np0005541914.localdomain podman[99595]: 2025-12-02 08:56:17.087946825 +0000 UTC m=+0.083158485 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4)
Dec 02 08:56:17 np0005541914.localdomain systemd[1]: tmp-crun.qQBOh8.mount: Deactivated successfully.
Dec 02 08:56:17 np0005541914.localdomain podman[99594]: 2025-12-02 08:56:17.131833417 +0000 UTC m=+0.129458541 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., container_name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, tcib_managed=true, release=1761123044, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:32Z, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1)
Dec 02 08:56:17 np0005541914.localdomain podman[99595]: 2025-12-02 08:56:17.142569625 +0000 UTC m=+0.137781325 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64)
Dec 02 08:56:17 np0005541914.localdomain podman[99594]: 2025-12-02 08:56:17.14958473 +0000 UTC m=+0.147209874 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:56:17 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:56:17 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:56:17 np0005541914.localdomain podman[99596]: 2025-12-02 08:56:17.197183266 +0000 UTC m=+0.186919938 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, version=17.1.12, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=)
Dec 02 08:56:17 np0005541914.localdomain podman[99596]: 2025-12-02 08:56:17.221913492 +0000 UTC m=+0.211650234 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:56:17 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:56:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:56:19 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:56:20 np0005541914.localdomain recover_tripleo_nova_virtqemud[99667]: 61907
Dec 02 08:56:20 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:56:20 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:56:20 np0005541914.localdomain systemd[1]: tmp-crun.LSn8al.mount: Deactivated successfully.
Dec 02 08:56:20 np0005541914.localdomain podman[99665]: 2025-12-02 08:56:20.093517897 +0000 UTC m=+0.094808031 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step5, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, architecture=x86_64, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:56:20 np0005541914.localdomain podman[99665]: 2025-12-02 08:56:20.120534622 +0000 UTC m=+0.121824786 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., container_name=nova_compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.12, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container)
Dec 02 08:56:20 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:56:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:56:21 np0005541914.localdomain podman[99693]: 2025-12-02 08:56:21.08042856 +0000 UTC m=+0.083275618 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vcs-type=git, tcib_managed=true, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:56:21 np0005541914.localdomain podman[99693]: 2025-12-02 08:56:21.461863016 +0000 UTC m=+0.464710054 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, container_name=nova_migration_target, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:56:21 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:56:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:56:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:56:23 np0005541914.localdomain podman[99717]: 2025-12-02 08:56:23.093936351 +0000 UTC m=+0.091826230 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, version=17.1.12, container_name=ovn_controller, batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4)
Dec 02 08:56:23 np0005541914.localdomain podman[99716]: 2025-12-02 08:56:23.143835447 +0000 UTC m=+0.143224092 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, container_name=ovn_metadata_agent, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, batch=17.1_20251118.1)
Dec 02 08:56:23 np0005541914.localdomain podman[99716]: 2025-12-02 08:56:23.161352953 +0000 UTC m=+0.160741628 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z)
Dec 02 08:56:23 np0005541914.localdomain podman[99717]: 2025-12-02 08:56:23.163577361 +0000 UTC m=+0.161467220 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, release=1761123044, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4)
Dec 02 08:56:23 np0005541914.localdomain podman[99717]: unhealthy
Dec 02 08:56:23 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:56:23 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:56:23 np0005541914.localdomain podman[99716]: unhealthy
Dec 02 08:56:23 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:56:23 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:56:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:56:28 np0005541914.localdomain podman[99756]: 2025-12-02 08:56:28.083231171 +0000 UTC m=+0.084887537 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, release=1761123044)
Dec 02 08:56:28 np0005541914.localdomain podman[99756]: 2025-12-02 08:56:28.095856587 +0000 UTC m=+0.097512923 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., version=17.1.12, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:56:28 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:56:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:56:31 np0005541914.localdomain podman[99776]: 2025-12-02 08:56:31.073511465 +0000 UTC m=+0.077315485 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, version=17.1.12, architecture=x86_64, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:56:31 np0005541914.localdomain podman[99776]: 2025-12-02 08:56:31.086911025 +0000 UTC m=+0.090715085 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, distribution-scope=public, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 02 08:56:31 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:56:35 np0005541914.localdomain sshd[99795]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:56:35 np0005541914.localdomain sshd[99795]: Invalid user sol from 80.94.92.182 port 52538
Dec 02 08:56:36 np0005541914.localdomain sshd[99795]: Connection closed by invalid user sol 80.94.92.182 port 52538 [preauth]
Dec 02 08:56:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:56:43 np0005541914.localdomain podman[99797]: 2025-12-02 08:56:43.085262509 +0000 UTC m=+0.084931998 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, config_id=tripleo_step1, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20251118.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:56:43 np0005541914.localdomain podman[99797]: 2025-12-02 08:56:43.27951164 +0000 UTC m=+0.279181099 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, config_id=tripleo_step1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:56:43 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:56:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:56:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:56:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:56:48 np0005541914.localdomain systemd[1]: tmp-crun.yYJw1x.mount: Deactivated successfully.
Dec 02 08:56:48 np0005541914.localdomain podman[99825]: 2025-12-02 08:56:48.102278309 +0000 UTC m=+0.104878459 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z)
Dec 02 08:56:48 np0005541914.localdomain podman[99825]: 2025-12-02 08:56:48.143105667 +0000 UTC m=+0.145705787 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.12, release=1761123044, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., container_name=logrotate_crond, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 02 08:56:48 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:56:48 np0005541914.localdomain podman[99827]: 2025-12-02 08:56:48.149818123 +0000 UTC m=+0.144526532 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 08:56:48 np0005541914.localdomain podman[99826]: 2025-12-02 08:56:48.255543317 +0000 UTC m=+0.255289470 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, architecture=x86_64, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:56:48 np0005541914.localdomain podman[99827]: 2025-12-02 08:56:48.284922955 +0000 UTC m=+0.279631334 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-ipmi)
Dec 02 08:56:48 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:56:48 np0005541914.localdomain podman[99826]: 2025-12-02 08:56:48.315372876 +0000 UTC m=+0.315119039 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1761123044, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Dec 02 08:56:48 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:56:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:56:51 np0005541914.localdomain podman[99895]: 2025-12-02 08:56:51.101858558 +0000 UTC m=+0.102435294 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step5, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.4)
Dec 02 08:56:51 np0005541914.localdomain podman[99895]: 2025-12-02 08:56:51.151011071 +0000 UTC m=+0.151587737 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, container_name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:56:51 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:56:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:56:52 np0005541914.localdomain podman[99922]: 2025-12-02 08:56:52.078555109 +0000 UTC m=+0.083409442 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container)
Dec 02 08:56:52 np0005541914.localdomain podman[99922]: 2025-12-02 08:56:52.449994259 +0000 UTC m=+0.454848522 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, name=rhosp17/openstack-nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4)
Dec 02 08:56:52 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:56:53 np0005541914.localdomain sudo[99943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:56:53 np0005541914.localdomain sudo[99943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:56:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:56:53 np0005541914.localdomain sudo[99943]: pam_unix(sudo:session): session closed for user root
Dec 02 08:56:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:56:53 np0005541914.localdomain sudo[99970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:56:53 np0005541914.localdomain sudo[99970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:56:53 np0005541914.localdomain podman[99959]: 2025-12-02 08:56:53.760234941 +0000 UTC m=+0.094912924 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller)
Dec 02 08:56:53 np0005541914.localdomain podman[99958]: 2025-12-02 08:56:53.810635443 +0000 UTC m=+0.147496792 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, architecture=x86_64)
Dec 02 08:56:53 np0005541914.localdomain podman[99958]: 2025-12-02 08:56:53.8269206 +0000 UTC m=+0.163781939 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.openshift.expose-services=, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1761123044, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true)
Dec 02 08:56:53 np0005541914.localdomain podman[99959]: 2025-12-02 08:56:53.83309807 +0000 UTC m=+0.167776063 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:56:53 np0005541914.localdomain podman[99959]: unhealthy
Dec 02 08:56:53 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:56:53 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:56:53 np0005541914.localdomain podman[99958]: unhealthy
Dec 02 08:56:53 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:56:53 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:56:54 np0005541914.localdomain sudo[99970]: pam_unix(sudo:session): session closed for user root
Dec 02 08:56:55 np0005541914.localdomain sudo[100043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:56:55 np0005541914.localdomain sudo[100043]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:56:55 np0005541914.localdomain sudo[100043]: pam_unix(sudo:session): session closed for user root
Dec 02 08:56:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:56:59 np0005541914.localdomain podman[100058]: 2025-12-02 08:56:59.066994213 +0000 UTC m=+0.075378226 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:56:59 np0005541914.localdomain podman[100058]: 2025-12-02 08:56:59.077177445 +0000 UTC m=+0.085561488 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp17/openstack-collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4)
Dec 02 08:56:59 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:57:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:57:02 np0005541914.localdomain systemd[1]: tmp-crun.uiOAwQ.mount: Deactivated successfully.
Dec 02 08:57:02 np0005541914.localdomain podman[100077]: 2025-12-02 08:57:02.08124545 +0000 UTC m=+0.087032433 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-iscsid-container, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., config_id=tripleo_step3, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:57:02 np0005541914.localdomain podman[100077]: 2025-12-02 08:57:02.112119574 +0000 UTC m=+0.117906517 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 08:57:02 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:57:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 08:57:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4200.1 total, 600.0 interval
                                                          Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 08:57:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:57:14 np0005541914.localdomain podman[100094]: 2025-12-02 08:57:14.089855609 +0000 UTC m=+0.095113030 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, version=17.1.12, config_id=tripleo_step1, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd)
Dec 02 08:57:14 np0005541914.localdomain podman[100094]: 2025-12-02 08:57:14.288890536 +0000 UTC m=+0.294147947 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, container_name=metrics_qdr, version=17.1.12, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1)
Dec 02 08:57:14 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:57:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 08:57:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4200.2 total, 600.0 interval
                                                          Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 08:57:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:57:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:57:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:57:19 np0005541914.localdomain podman[100125]: 2025-12-02 08:57:19.072046433 +0000 UTC m=+0.068238697 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, io.openshift.expose-services=, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Dec 02 08:57:19 np0005541914.localdomain podman[100123]: 2025-12-02 08:57:19.140191887 +0000 UTC m=+0.137036432 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, architecture=x86_64, io.openshift.expose-services=, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20251118.1, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:57:19 np0005541914.localdomain podman[100123]: 2025-12-02 08:57:19.1530151 +0000 UTC m=+0.149859655 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, batch=17.1_20251118.1, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, architecture=x86_64, name=rhosp17/openstack-cron, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Dec 02 08:57:19 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:57:19 np0005541914.localdomain podman[100124]: 2025-12-02 08:57:19.24425446 +0000 UTC m=+0.240444914 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, build-date=2025-11-19T00:11:48Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:57:19 np0005541914.localdomain podman[100125]: 2025-12-02 08:57:19.257805124 +0000 UTC m=+0.253997408 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, container_name=ceilometer_agent_ipmi, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, build-date=2025-11-19T00:12:45Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:57:19 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:57:19 np0005541914.localdomain podman[100124]: 2025-12-02 08:57:19.302955935 +0000 UTC m=+0.299146419 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, distribution-scope=public, container_name=ceilometer_agent_compute, url=https://www.redhat.com, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:57:19 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:57:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:57:22 np0005541914.localdomain systemd[1]: tmp-crun.uDols8.mount: Deactivated successfully.
Dec 02 08:57:22 np0005541914.localdomain podman[100197]: 2025-12-02 08:57:22.068869267 +0000 UTC m=+0.075335135 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step5, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:57:22 np0005541914.localdomain podman[100197]: 2025-12-02 08:57:22.126089767 +0000 UTC m=+0.132555635 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:57:22 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:57:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:57:23 np0005541914.localdomain podman[100223]: 2025-12-02 08:57:23.088539752 +0000 UTC m=+0.092694995 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible)
Dec 02 08:57:23 np0005541914.localdomain podman[100223]: 2025-12-02 08:57:23.450844223 +0000 UTC m=+0.454999556 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 02 08:57:23 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:57:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:57:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:57:24 np0005541914.localdomain podman[100246]: 2025-12-02 08:57:24.093079345 +0000 UTC m=+0.091172079 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, release=1761123044, io.buildah.version=1.41.4, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 02 08:57:24 np0005541914.localdomain podman[100246]: 2025-12-02 08:57:24.128906671 +0000 UTC m=+0.126999195 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.12, container_name=ovn_metadata_agent, io.openshift.expose-services=, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 08:57:24 np0005541914.localdomain podman[100246]: unhealthy
Dec 02 08:57:24 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:57:24 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:57:24 np0005541914.localdomain podman[100247]: 2025-12-02 08:57:24.152530884 +0000 UTC m=+0.148639807 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044)
Dec 02 08:57:24 np0005541914.localdomain podman[100247]: 2025-12-02 08:57:24.189439442 +0000 UTC m=+0.185548355 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-ovn-controller)
Dec 02 08:57:24 np0005541914.localdomain podman[100247]: unhealthy
Dec 02 08:57:24 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:57:24 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:57:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:57:30 np0005541914.localdomain podman[100285]: 2025-12-02 08:57:30.081989788 +0000 UTC m=+0.083892267 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible)
Dec 02 08:57:30 np0005541914.localdomain podman[100285]: 2025-12-02 08:57:30.119957409 +0000 UTC m=+0.121859888 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, container_name=collectd, io.buildah.version=1.41.4, config_id=tripleo_step3, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public)
Dec 02 08:57:30 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:57:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:57:33 np0005541914.localdomain podman[100305]: 2025-12-02 08:57:33.077665108 +0000 UTC m=+0.079648057 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, tcib_managed=true, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=)
Dec 02 08:57:33 np0005541914.localdomain podman[100305]: 2025-12-02 08:57:33.112137882 +0000 UTC m=+0.114120851 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp17/openstack-iscsid, tcib_managed=true, architecture=x86_64, distribution-scope=public, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3)
Dec 02 08:57:33 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:57:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:57:45 np0005541914.localdomain podman[100323]: 2025-12-02 08:57:45.079573472 +0000 UTC m=+0.082506035 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_id=tripleo_step1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc.)
Dec 02 08:57:45 np0005541914.localdomain podman[100323]: 2025-12-02 08:57:45.286954424 +0000 UTC m=+0.289886997 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.buildah.version=1.41.4, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, version=17.1.12, vcs-type=git, container_name=metrics_qdr, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., release=1761123044, batch=17.1_20251118.1, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:57:45 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:57:46 np0005541914.localdomain sshd[100353]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:57:47 np0005541914.localdomain sshd[100353]: Connection closed by authenticating user root 45.148.10.240 port 49718 [preauth]
Dec 02 08:57:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:57:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:57:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:57:50 np0005541914.localdomain podman[100355]: 2025-12-02 08:57:50.085171652 +0000 UTC m=+0.087885169 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, version=17.1.12, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:57:50 np0005541914.localdomain podman[100355]: 2025-12-02 08:57:50.120896144 +0000 UTC m=+0.123609641 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, build-date=2025-11-18T22:49:32Z, release=1761123044, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:57:50 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:57:50 np0005541914.localdomain podman[100357]: 2025-12-02 08:57:50.122313848 +0000 UTC m=+0.124281411 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Dec 02 08:57:50 np0005541914.localdomain systemd[1]: tmp-crun.g3EKdO.mount: Deactivated successfully.
Dec 02 08:57:50 np0005541914.localdomain podman[100356]: 2025-12-02 08:57:50.191711331 +0000 UTC m=+0.193118137 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.buildah.version=1.41.4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 02 08:57:50 np0005541914.localdomain podman[100357]: 2025-12-02 08:57:50.205903855 +0000 UTC m=+0.207871378 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:12:45Z, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:57:50 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:57:50 np0005541914.localdomain podman[100356]: 2025-12-02 08:57:50.241912616 +0000 UTC m=+0.243319402 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, container_name=ceilometer_agent_compute, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 02 08:57:50 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:57:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:57:53 np0005541914.localdomain systemd[1]: tmp-crun.MxR5KR.mount: Deactivated successfully.
Dec 02 08:57:53 np0005541914.localdomain podman[100429]: 2025-12-02 08:57:53.073224958 +0000 UTC m=+0.076930544 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.expose-services=, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64)
Dec 02 08:57:53 np0005541914.localdomain podman[100429]: 2025-12-02 08:57:53.125512627 +0000 UTC m=+0.129218223 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step5, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true, distribution-scope=public, release=1761123044)
Dec 02 08:57:53 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:57:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:57:54 np0005541914.localdomain systemd[1]: tmp-crun.4wSFsu.mount: Deactivated successfully.
Dec 02 08:57:54 np0005541914.localdomain podman[100455]: 2025-12-02 08:57:54.07611501 +0000 UTC m=+0.082745131 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, tcib_managed=true)
Dec 02 08:57:54 np0005541914.localdomain podman[100455]: 2025-12-02 08:57:54.471034278 +0000 UTC m=+0.477664369 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, version=17.1.12)
Dec 02 08:57:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:57:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:57:54 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:57:54 np0005541914.localdomain podman[100478]: 2025-12-02 08:57:54.580013901 +0000 UTC m=+0.084832665 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, io.buildah.version=1.41.4, release=1761123044, version=17.1.12, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Dec 02 08:57:54 np0005541914.localdomain podman[100479]: 2025-12-02 08:57:54.642553254 +0000 UTC m=+0.141943432 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, build-date=2025-11-18T23:34:05Z, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:57:54 np0005541914.localdomain podman[100478]: 2025-12-02 08:57:54.668425336 +0000 UTC m=+0.173244110 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:14:25Z)
Dec 02 08:57:54 np0005541914.localdomain podman[100478]: unhealthy
Dec 02 08:57:54 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:57:54 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:57:54 np0005541914.localdomain podman[100479]: 2025-12-02 08:57:54.684569779 +0000 UTC m=+0.183959947 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:57:54 np0005541914.localdomain podman[100479]: unhealthy
Dec 02 08:57:54 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:57:54 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:57:55 np0005541914.localdomain sudo[100518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:57:55 np0005541914.localdomain sudo[100518]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:57:55 np0005541914.localdomain sudo[100518]: pam_unix(sudo:session): session closed for user root
Dec 02 08:57:55 np0005541914.localdomain sudo[100533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:57:55 np0005541914.localdomain sudo[100533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:57:55 np0005541914.localdomain sudo[100533]: pam_unix(sudo:session): session closed for user root
Dec 02 08:57:59 np0005541914.localdomain sudo[100580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:57:59 np0005541914.localdomain sudo[100580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:57:59 np0005541914.localdomain sudo[100580]: pam_unix(sudo:session): session closed for user root
Dec 02 08:58:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:58:01 np0005541914.localdomain podman[100595]: 2025-12-02 08:58:01.097324166 +0000 UTC m=+0.088404255 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step3, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:58:01 np0005541914.localdomain podman[100595]: 2025-12-02 08:58:01.109299322 +0000 UTC m=+0.100379431 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, com.redhat.component=openstack-collectd-container, name=rhosp17/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, version=17.1.12, release=1761123044, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:58:01 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:58:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:58:04 np0005541914.localdomain podman[100613]: 2025-12-02 08:58:04.08271377 +0000 UTC m=+0.085913338 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, vcs-type=git, name=rhosp17/openstack-iscsid, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, config_id=tripleo_step3, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public)
Dec 02 08:58:04 np0005541914.localdomain podman[100613]: 2025-12-02 08:58:04.123213349 +0000 UTC m=+0.126412907 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, name=rhosp17/openstack-iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, tcib_managed=true, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z)
Dec 02 08:58:04 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:58:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:58:15 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:58:16 np0005541914.localdomain recover_tripleo_nova_virtqemud[100640]: 61907
Dec 02 08:58:16 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:58:16 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:58:16 np0005541914.localdomain podman[100633]: 2025-12-02 08:58:16.067779629 +0000 UTC m=+0.078662778 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.expose-services=, release=1761123044, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-type=git, io.buildah.version=1.41.4)
Dec 02 08:58:16 np0005541914.localdomain podman[100633]: 2025-12-02 08:58:16.290097268 +0000 UTC m=+0.300980327 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, container_name=metrics_qdr, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc.)
Dec 02 08:58:16 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:58:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:58:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:58:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:58:21 np0005541914.localdomain systemd[1]: tmp-crun.Em19E7.mount: Deactivated successfully.
Dec 02 08:58:21 np0005541914.localdomain podman[100665]: 2025-12-02 08:58:21.09832335 +0000 UTC m=+0.100064310 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, container_name=ceilometer_agent_compute, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1761123044, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:58:21 np0005541914.localdomain podman[100666]: 2025-12-02 08:58:21.143362068 +0000 UTC m=+0.140325063 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, managed_by=tripleo_ansible, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 02 08:58:21 np0005541914.localdomain podman[100665]: 2025-12-02 08:58:21.157809899 +0000 UTC m=+0.159550799 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:11:48Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 08:58:21 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:58:21 np0005541914.localdomain podman[100666]: 2025-12-02 08:58:21.182396631 +0000 UTC m=+0.179359656 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public)
Dec 02 08:58:21 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:58:21 np0005541914.localdomain podman[100664]: 2025-12-02 08:58:21.24804157 +0000 UTC m=+0.250752411 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:58:21 np0005541914.localdomain podman[100664]: 2025-12-02 08:58:21.261988986 +0000 UTC m=+0.264699797 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.4, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:58:21 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:58:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:58:24 np0005541914.localdomain podman[100737]: 2025-12-02 08:58:24.07798787 +0000 UTC m=+0.080730970 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, container_name=nova_compute, release=1761123044)
Dec 02 08:58:24 np0005541914.localdomain podman[100737]: 2025-12-02 08:58:24.10907603 +0000 UTC m=+0.111819120 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step5, vendor=Red Hat, Inc., container_name=nova_compute, vcs-type=git, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, distribution-scope=public, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:58:24 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:58:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:58:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:58:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:58:25 np0005541914.localdomain podman[100763]: 2025-12-02 08:58:25.083255264 +0000 UTC m=+0.088052864 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:58:25 np0005541914.localdomain systemd[1]: tmp-crun.X5en6K.mount: Deactivated successfully.
Dec 02 08:58:25 np0005541914.localdomain podman[100764]: 2025-12-02 08:58:25.184277764 +0000 UTC m=+0.185969699 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, url=https://www.redhat.com, container_name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 08:58:25 np0005541914.localdomain podman[100763]: 2025-12-02 08:58:25.199834339 +0000 UTC m=+0.204631969 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 02 08:58:25 np0005541914.localdomain podman[100763]: unhealthy
Dec 02 08:58:25 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:58:25 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:58:25 np0005541914.localdomain podman[100764]: 2025-12-02 08:58:25.223751631 +0000 UTC m=+0.225443536 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, release=1761123044, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:58:25 np0005541914.localdomain podman[100764]: unhealthy
Dec 02 08:58:25 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:58:25 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:58:25 np0005541914.localdomain podman[100765]: 2025-12-02 08:58:25.281655792 +0000 UTC m=+0.279545350 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-type=git, release=1761123044, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, version=17.1.12, vendor=Red Hat, Inc.)
Dec 02 08:58:25 np0005541914.localdomain podman[100765]: 2025-12-02 08:58:25.634218754 +0000 UTC m=+0.632108272 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, release=1761123044, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:58:25 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:58:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:58:32 np0005541914.localdomain podman[100823]: 2025-12-02 08:58:32.08451028 +0000 UTC m=+0.091161189 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, vcs-type=git, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, version=17.1.12, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.4, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:58:32 np0005541914.localdomain podman[100823]: 2025-12-02 08:58:32.101998475 +0000 UTC m=+0.108649374 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, config_id=tripleo_step3, release=1761123044, architecture=x86_64, name=rhosp17/openstack-collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container)
Dec 02 08:58:32 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:58:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:58:35 np0005541914.localdomain podman[100843]: 2025-12-02 08:58:35.081523811 +0000 UTC m=+0.084598599 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044)
Dec 02 08:58:35 np0005541914.localdomain podman[100843]: 2025-12-02 08:58:35.120936786 +0000 UTC m=+0.124011614 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20251118.1, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3)
Dec 02 08:58:35 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:58:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:58:47 np0005541914.localdomain podman[100863]: 2025-12-02 08:58:47.081096225 +0000 UTC m=+0.084294609 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team)
Dec 02 08:58:47 np0005541914.localdomain podman[100863]: 2025-12-02 08:58:47.300391811 +0000 UTC m=+0.303590185 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20251118.1, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd)
Dec 02 08:58:47 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:58:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:58:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:58:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:58:52 np0005541914.localdomain podman[100893]: 2025-12-02 08:58:52.06660512 +0000 UTC m=+0.067980410 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team)
Dec 02 08:58:52 np0005541914.localdomain podman[100893]: 2025-12-02 08:58:52.091614755 +0000 UTC m=+0.092990085 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Dec 02 08:58:52 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:58:52 np0005541914.localdomain podman[100894]: 2025-12-02 08:58:52.183328379 +0000 UTC m=+0.180068178 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2025-11-19T00:12:45Z, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:58:52 np0005541914.localdomain podman[100894]: 2025-12-02 08:58:52.235089203 +0000 UTC m=+0.231828952 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, url=https://www.redhat.com)
Dec 02 08:58:52 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:58:52 np0005541914.localdomain podman[100892]: 2025-12-02 08:58:52.239985233 +0000 UTC m=+0.241366213 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, architecture=x86_64, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, release=1761123044, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron)
Dec 02 08:58:52 np0005541914.localdomain podman[100892]: 2025-12-02 08:58:52.320946039 +0000 UTC m=+0.322327049 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, batch=17.1_20251118.1, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc.)
Dec 02 08:58:52 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:58:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:58:55 np0005541914.localdomain podman[100965]: 2025-12-02 08:58:55.060129243 +0000 UTC m=+0.065277397 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, container_name=nova_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, distribution-scope=public, release=1761123044, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Dec 02 08:58:55 np0005541914.localdomain podman[100965]: 2025-12-02 08:58:55.078254708 +0000 UTC m=+0.083402892 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, distribution-scope=public, build-date=2025-11-19T00:36:58Z, tcib_managed=true, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, batch=17.1_20251118.1, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:58:55 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:58:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:58:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:58:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:58:56 np0005541914.localdomain systemd[1]: tmp-crun.HE9pcL.mount: Deactivated successfully.
Dec 02 08:58:56 np0005541914.localdomain podman[100995]: 2025-12-02 08:58:56.079307613 +0000 UTC m=+0.072072205 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.buildah.version=1.41.4, batch=17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:58:56 np0005541914.localdomain podman[100993]: 2025-12-02 08:58:56.130655124 +0000 UTC m=+0.129143861 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, version=17.1.12, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Dec 02 08:58:56 np0005541914.localdomain podman[100993]: 2025-12-02 08:58:56.169182762 +0000 UTC m=+0.167671529 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, url=https://www.redhat.com, build-date=2025-11-19T00:14:25Z, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, tcib_managed=true, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible)
Dec 02 08:58:56 np0005541914.localdomain podman[100993]: unhealthy
Dec 02 08:58:56 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:58:56 np0005541914.localdomain podman[100994]: 2025-12-02 08:58:56.18873575 +0000 UTC m=+0.181622096 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, version=17.1.12, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vendor=Red Hat, Inc., release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 08:58:56 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:58:56 np0005541914.localdomain podman[100994]: 2025-12-02 08:58:56.23186736 +0000 UTC m=+0.224753746 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, release=1761123044, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com)
Dec 02 08:58:56 np0005541914.localdomain podman[100994]: unhealthy
Dec 02 08:58:56 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:58:56 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:58:56 np0005541914.localdomain podman[100995]: 2025-12-02 08:58:56.47484242 +0000 UTC m=+0.467607052 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.12, url=https://www.redhat.com, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc.)
Dec 02 08:58:56 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:58:57 np0005541914.localdomain systemd[1]: tmp-crun.zxQsi6.mount: Deactivated successfully.
Dec 02 08:58:59 np0005541914.localdomain sudo[101055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:58:59 np0005541914.localdomain sudo[101055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:58:59 np0005541914.localdomain sudo[101055]: pam_unix(sudo:session): session closed for user root
Dec 02 08:58:59 np0005541914.localdomain sudo[101070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 08:58:59 np0005541914.localdomain sudo[101070]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:58:59 np0005541914.localdomain sudo[101070]: pam_unix(sudo:session): session closed for user root
Dec 02 08:59:00 np0005541914.localdomain sudo[101105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 08:59:00 np0005541914.localdomain sudo[101105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:59:00 np0005541914.localdomain sudo[101105]: pam_unix(sudo:session): session closed for user root
Dec 02 08:59:00 np0005541914.localdomain sudo[101120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 08:59:00 np0005541914.localdomain sudo[101120]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:59:00 np0005541914.localdomain sudo[101120]: pam_unix(sudo:session): session closed for user root
Dec 02 08:59:01 np0005541914.localdomain sudo[101167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 08:59:01 np0005541914.localdomain sudo[101167]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 08:59:01 np0005541914.localdomain sudo[101167]: pam_unix(sudo:session): session closed for user root
Dec 02 08:59:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:59:03 np0005541914.localdomain podman[101182]: 2025-12-02 08:59:03.106181252 +0000 UTC m=+0.101299800 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step3, vendor=Red Hat, Inc., release=1761123044, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, container_name=collectd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 02 08:59:03 np0005541914.localdomain podman[101182]: 2025-12-02 08:59:03.120808209 +0000 UTC m=+0.115926717 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, name=rhosp17/openstack-collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:51:28Z, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 08:59:03 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:59:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:59:06 np0005541914.localdomain podman[101202]: 2025-12-02 08:59:06.077535607 +0000 UTC m=+0.082528935 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, version=17.1.12, vendor=Red Hat, Inc., release=1761123044, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:59:06 np0005541914.localdomain podman[101202]: 2025-12-02 08:59:06.086611794 +0000 UTC m=+0.091605122 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, container_name=iscsid, distribution-scope=public, build-date=2025-11-18T23:44:13Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 08:59:06 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:59:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:59:18 np0005541914.localdomain systemd[1]: tmp-crun.LFzGRP.mount: Deactivated successfully.
Dec 02 08:59:18 np0005541914.localdomain podman[101221]: 2025-12-02 08:59:18.074255454 +0000 UTC m=+0.077765730 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, container_name=metrics_qdr, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, build-date=2025-11-18T22:49:46Z, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, url=https://www.redhat.com, batch=17.1_20251118.1)
Dec 02 08:59:18 np0005541914.localdomain podman[101221]: 2025-12-02 08:59:18.271344631 +0000 UTC m=+0.274854897 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, distribution-scope=public, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, release=1761123044, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4)
Dec 02 08:59:18 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:59:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:59:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:59:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:59:23 np0005541914.localdomain podman[101251]: 2025-12-02 08:59:23.088054954 +0000 UTC m=+0.085007900 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:11:48Z, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=ceilometer_agent_compute, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:59:23 np0005541914.localdomain podman[101251]: 2025-12-02 08:59:23.120901179 +0000 UTC m=+0.117854105 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, version=17.1.12, container_name=ceilometer_agent_compute, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, tcib_managed=true, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, url=https://www.redhat.com)
Dec 02 08:59:23 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:59:23 np0005541914.localdomain podman[101252]: 2025-12-02 08:59:23.142525531 +0000 UTC m=+0.137688302 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, config_id=tripleo_step4, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, architecture=x86_64)
Dec 02 08:59:23 np0005541914.localdomain podman[101252]: 2025-12-02 08:59:23.176827999 +0000 UTC m=+0.171990770 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676)
Dec 02 08:59:23 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:59:23 np0005541914.localdomain podman[101250]: 2025-12-02 08:59:23.194554961 +0000 UTC m=+0.193247301 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, release=1761123044, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, version=17.1.12, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git)
Dec 02 08:59:23 np0005541914.localdomain podman[101250]: 2025-12-02 08:59:23.204719812 +0000 UTC m=+0.203412182 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public)
Dec 02 08:59:23 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 08:59:23 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:59:23 np0005541914.localdomain recover_tripleo_nova_virtqemud[101324]: 61907
Dec 02 08:59:23 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 08:59:23 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 08:59:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:59:26 np0005541914.localdomain podman[101325]: 2025-12-02 08:59:26.07835828 +0000 UTC m=+0.078243484 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, container_name=nova_compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-type=git)
Dec 02 08:59:26 np0005541914.localdomain podman[101325]: 2025-12-02 08:59:26.108857213 +0000 UTC m=+0.108742447 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, build-date=2025-11-19T00:36:58Z, version=17.1.12, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step5, io.openshift.expose-services=, container_name=nova_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:59:26 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:59:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:59:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:59:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:59:27 np0005541914.localdomain systemd[1]: tmp-crun.TG8QmJ.mount: Deactivated successfully.
Dec 02 08:59:27 np0005541914.localdomain podman[101351]: 2025-12-02 08:59:27.115334265 +0000 UTC m=+0.078606015 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, url=https://www.redhat.com, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, release=1761123044, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 02 08:59:27 np0005541914.localdomain systemd[1]: tmp-crun.9yEXs1.mount: Deactivated successfully.
Dec 02 08:59:27 np0005541914.localdomain podman[101352]: 2025-12-02 08:59:27.134585024 +0000 UTC m=+0.088456237 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public)
Dec 02 08:59:27 np0005541914.localdomain podman[101350]: 2025-12-02 08:59:27.176052323 +0000 UTC m=+0.136980921 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:59:27 np0005541914.localdomain podman[101351]: 2025-12-02 08:59:27.208199175 +0000 UTC m=+0.171470885 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=ovn_controller, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc.)
Dec 02 08:59:27 np0005541914.localdomain podman[101351]: unhealthy
Dec 02 08:59:27 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:59:27 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:59:27 np0005541914.localdomain podman[101350]: 2025-12-02 08:59:27.282044274 +0000 UTC m=+0.242972872 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, version=17.1.12, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:59:27 np0005541914.localdomain podman[101350]: unhealthy
Dec 02 08:59:27 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:59:27 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 08:59:27 np0005541914.localdomain podman[101352]: 2025-12-02 08:59:27.516252277 +0000 UTC m=+0.470123520 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 08:59:27 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 08:59:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 08:59:34 np0005541914.localdomain podman[101407]: 2025-12-02 08:59:34.079806526 +0000 UTC m=+0.086989892 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=tripleo_step3, name=rhosp17/openstack-collectd, io.buildah.version=1.41.4, version=17.1.12, build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20251118.1, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 08:59:34 np0005541914.localdomain podman[101407]: 2025-12-02 08:59:34.089864114 +0000 UTC m=+0.097047460 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.component=openstack-collectd-container, distribution-scope=public, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, vcs-type=git, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.4)
Dec 02 08:59:34 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 08:59:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 08:59:37 np0005541914.localdomain systemd[1]: tmp-crun.J4XKVE.mount: Deactivated successfully.
Dec 02 08:59:37 np0005541914.localdomain podman[101427]: 2025-12-02 08:59:37.058936709 +0000 UTC m=+0.067806865 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, container_name=iscsid, name=rhosp17/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.4, vcs-type=git, io.openshift.expose-services=, version=17.1.12, batch=17.1_20251118.1)
Dec 02 08:59:37 np0005541914.localdomain podman[101427]: 2025-12-02 08:59:37.093520347 +0000 UTC m=+0.102390513 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:59:37 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 08:59:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 08:59:49 np0005541914.localdomain podman[101447]: 2025-12-02 08:59:49.098140764 +0000 UTC m=+0.101132454 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, config_id=tripleo_step1, batch=17.1_20251118.1, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 08:59:49 np0005541914.localdomain podman[101447]: 2025-12-02 08:59:49.291121606 +0000 UTC m=+0.294113376 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, config_id=tripleo_step1, container_name=metrics_qdr, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 08:59:49 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 08:59:53 np0005541914.localdomain sshd[101476]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 08:59:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 08:59:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 08:59:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 08:59:53 np0005541914.localdomain systemd[1]: tmp-crun.PzCFTc.mount: Deactivated successfully.
Dec 02 08:59:53 np0005541914.localdomain podman[101480]: 2025-12-02 08:59:53.705277557 +0000 UTC m=+0.138023833 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, io.openshift.expose-services=, version=17.1.12, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:59:53 np0005541914.localdomain podman[101478]: 2025-12-02 08:59:53.669387149 +0000 UTC m=+0.106181418 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-cron, vcs-type=git, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:59:53 np0005541914.localdomain podman[101480]: 2025-12-02 08:59:53.734799659 +0000 UTC m=+0.167545885 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, release=1761123044, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, container_name=ceilometer_agent_ipmi)
Dec 02 08:59:53 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 08:59:53 np0005541914.localdomain podman[101478]: 2025-12-02 08:59:53.754522123 +0000 UTC m=+0.191316422 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20251118.1, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=logrotate_crond, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 02 08:59:53 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 08:59:53 np0005541914.localdomain podman[101479]: 2025-12-02 08:59:53.821103529 +0000 UTC m=+0.257896268 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 08:59:53 np0005541914.localdomain podman[101479]: 2025-12-02 08:59:53.853943744 +0000 UTC m=+0.290736473 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 08:59:53 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 08:59:53 np0005541914.localdomain sshd[101476]: Invalid user ethereum from 45.148.10.240 port 55402
Dec 02 08:59:53 np0005541914.localdomain sshd[101476]: Connection closed by invalid user ethereum 45.148.10.240 port 55402 [preauth]
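The two sshd lines above record a failed pre-auth probe: the username "ethereum" from 45.148.10.240 is rejected before authentication, and the earlier "ssh-rsa algorithm is disabled" message is consistent with the RHEL 9 default crypto policy refusing SHA-1 ssh-rsa signatures. A minimal sketch, assuming journald still retains the window of interest, for counting such probes per source address using only the journal:

  journalctl -t sshd --since today | grep 'Invalid user' | grep -oE 'from [0-9.]+' | sort | uniq -c | sort -rn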
Dec 02 08:59:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 08:59:57 np0005541914.localdomain systemd[1]: tmp-crun.xOwr5T.mount: Deactivated successfully.
Dec 02 08:59:57 np0005541914.localdomain podman[101555]: 2025-12-02 08:59:57.070203638 +0000 UTC m=+0.075649985 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, config_id=tripleo_step5, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 08:59:57 np0005541914.localdomain podman[101555]: 2025-12-02 08:59:57.118800284 +0000 UTC m=+0.124246601 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Dec 02 08:59:57 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 08:59:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 08:59:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 08:59:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 08:59:58 np0005541914.localdomain systemd[1]: tmp-crun.m2cSxp.mount: Deactivated successfully.
Dec 02 08:59:58 np0005541914.localdomain podman[101582]: 2025-12-02 08:59:58.07634734 +0000 UTC m=+0.079150052 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, container_name=nova_migration_target, url=https://www.redhat.com, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute)
Dec 02 08:59:58 np0005541914.localdomain podman[101581]: 2025-12-02 08:59:58.133575159 +0000 UTC m=+0.136664290 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=ovn_controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, vcs-type=git, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container)
Dec 02 08:59:58 np0005541914.localdomain podman[101581]: 2025-12-02 08:59:58.17282224 +0000 UTC m=+0.175911341 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, version=17.1.12, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller)
Dec 02 08:59:58 np0005541914.localdomain podman[101581]: unhealthy
Dec 02 08:59:58 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:59:58 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 08:59:58 np0005541914.localdomain podman[101580]: 2025-12-02 08:59:58.190352047 +0000 UTC m=+0.193673755 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step4, container_name=ovn_metadata_agent, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:59:58 np0005541914.localdomain podman[101580]: 2025-12-02 08:59:58.198801475 +0000 UTC m=+0.202123113 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.12, release=1761123044, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 08:59:58 np0005541914.localdomain podman[101580]: unhealthy
Dec 02 08:59:58 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 08:59:58 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
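Unlike the healthy checks earlier, ovn_controller and ovn_metadata_agent both report health_status=unhealthy here, so their transient healthcheck units exit 1 and systemd marks them failed; per the config_data above, the checks being run are /openstack/healthcheck 6642 and /openstack/healthcheck respectively (6642 is the OVN southbound DB port). A minimal sketch for reproducing the failing checks by hand, assuming a root shell on this node and the container names shown in the log:

  podman exec ovn_controller /openstack/healthcheck 6642; echo "ovn_controller rc=$?"
  podman exec ovn_metadata_agent /openstack/healthcheck; echo "ovn_metadata_agent rc=$?"
  podman healthcheck run ovn_controller || echo "ovn_controller still unhealthy"    # same path the transient unit takes
  podman exec ovn_controller ovs-vsctl get Open_vSwitch . external_ids:ovn-remote   # where ovn-controller expects the SB DB (assumes ovs-vsctl is present in the image)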
Dec 02 08:59:58 np0005541914.localdomain podman[101582]: 2025-12-02 08:59:58.479768878 +0000 UTC m=+0.482571600 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, container_name=nova_migration_target, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12)
Dec 02 08:59:58 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:00:01 np0005541914.localdomain CROND[101643]: (root) CMD (sleep `expr ${RANDOM} % 90`; /usr/sbin/logrotate -s /var/lib/logrotate/logrotate-crond.status /etc/logrotate-crond.conf 2>&1|logger -t logrotate-crond)
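The cron entry above sleeps a random 0-89 seconds (${RANDOM} % 90) to spread load, then runs logrotate with a dedicated state file and config and pipes the output through logger -t logrotate-crond, which is why its results land back in this journal. A minimal sketch for a dry run of the same rotation, assuming (as the logrotate_crond container entries above suggest) that the job runs inside that container:

  podman exec logrotate_crond /usr/sbin/logrotate -d -s /var/lib/logrotate/logrotate-crond.status /etc/logrotate-crond.conf   # -d: report what would rotate, change nothing
  journalctl -t logrotate-crond --since today   # output of the real runs, delivered via logger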
Dec 02 09:00:01 np0005541914.localdomain sudo[101646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:00:01 np0005541914.localdomain sudo[101646]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:00:01 np0005541914.localdomain sudo[101646]: pam_unix(sudo:session): session closed for user root
Dec 02 09:00:01 np0005541914.localdomain sudo[101661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:00:01 np0005541914.localdomain sudo[101661]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:00:02 np0005541914.localdomain sudo[101661]: pam_unix(sudo:session): session closed for user root
Dec 02 09:00:03 np0005541914.localdomain sudo[101708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:00:03 np0005541914.localdomain sudo[101708]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:00:03 np0005541914.localdomain sudo[101708]: pam_unix(sudo:session): session closed for user root
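The sudo entries from 09:00:01 to 09:00:03 are the Ceph orchestrator probing this host through the ceph-admin account: it locates python3, runs the cephadm binary staged under /var/lib/ceph/<fsid>/ with the gather-facts subcommand, and lists /etc/sysctl.d. A minimal sketch for running the same fact gathering interactively, using the exact path logged above (gather-facts is expected to print host facts as JSON, so json.tool is only for readability):

  sudo /bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 gather-facts | python3 -m json.tool | head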
Dec 02 09:00:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:00:05 np0005541914.localdomain podman[101723]: 2025-12-02 09:00:05.086049522 +0000 UTC m=+0.092173321 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.expose-services=, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64)
Dec 02 09:00:05 np0005541914.localdomain podman[101723]: 2025-12-02 09:00:05.121489276 +0000 UTC m=+0.127613045 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, name=rhosp17/openstack-collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, config_id=tripleo_step3)
Dec 02 09:00:05 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 09:00:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:00:08 np0005541914.localdomain podman[101743]: 2025-12-02 09:00:08.090373966 +0000 UTC m=+0.088606752 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20251118.1)
Dec 02 09:00:08 np0005541914.localdomain podman[101743]: 2025-12-02 09:00:08.12387003 +0000 UTC m=+0.122102776 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, container_name=iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-iscsid)
Dec 02 09:00:08 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:00:16 np0005541914.localdomain CROND[101642]: (root) CMDEND (sleep `expr ${RANDOM} % 90`; /usr/sbin/logrotate -s /var/lib/logrotate/logrotate-crond.status /etc/logrotate-crond.conf 2>&1|logger -t logrotate-crond)
Dec 02 09:00:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:00:20 np0005541914.localdomain podman[101765]: 2025-12-02 09:00:20.080525469 +0000 UTC m=+0.086544138 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr)
Dec 02 09:00:20 np0005541914.localdomain podman[101765]: 2025-12-02 09:00:20.30029142 +0000 UTC m=+0.306310049 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1761123044, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2025-11-18T22:49:46Z, vcs-type=git, tcib_managed=true, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 09:00:20 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:00:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:00:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:00:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:00:24 np0005541914.localdomain systemd[1]: tmp-crun.jSjruy.mount: Deactivated successfully.
Dec 02 09:00:24 np0005541914.localdomain podman[101796]: 2025-12-02 09:00:24.083841136 +0000 UTC m=+0.069901179 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 02 09:00:24 np0005541914.localdomain podman[101795]: 2025-12-02 09:00:24.13400711 +0000 UTC m=+0.119920369 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, container_name=ceilometer_agent_compute, distribution-scope=public)
Dec 02 09:00:24 np0005541914.localdomain podman[101796]: 2025-12-02 09:00:24.138969761 +0000 UTC m=+0.125029814 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, release=1761123044, batch=17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true)
Dec 02 09:00:24 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 09:00:24 np0005541914.localdomain systemd[1]: tmp-crun.gwai7X.mount: Deactivated successfully.
Dec 02 09:00:24 np0005541914.localdomain podman[101794]: 2025-12-02 09:00:24.188420554 +0000 UTC m=+0.181120291 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, container_name=logrotate_crond, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, name=rhosp17/openstack-cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12)
Dec 02 09:00:24 np0005541914.localdomain podman[101794]: 2025-12-02 09:00:24.198723249 +0000 UTC m=+0.191422986 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, build-date=2025-11-18T22:49:32Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1761123044, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=logrotate_crond, config_id=tripleo_step4)
Dec 02 09:00:24 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:00:24 np0005541914.localdomain podman[101795]: 2025-12-02 09:00:24.210176069 +0000 UTC m=+0.196089388 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.12, container_name=ceilometer_agent_compute, release=1761123044, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team)
Dec 02 09:00:24 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 09:00:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:00:28 np0005541914.localdomain systemd[1]: tmp-crun.2Gxrq9.mount: Deactivated successfully.
Dec 02 09:00:28 np0005541914.localdomain podman[101865]: 2025-12-02 09:00:28.080000382 +0000 UTC m=+0.084618159 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, name=rhosp17/openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1761123044)
Dec 02 09:00:28 np0005541914.localdomain podman[101865]: 2025-12-02 09:00:28.11294011 +0000 UTC m=+0.117557867 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, container_name=nova_compute, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, config_id=tripleo_step5, vendor=Red Hat, Inc., release=1761123044, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 09:00:28 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 09:00:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:00:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:00:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:00:29 np0005541914.localdomain podman[101891]: 2025-12-02 09:00:29.073373394 +0000 UTC m=+0.072707715 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, batch=17.1_20251118.1, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, vcs-type=git, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, tcib_managed=true)
Dec 02 09:00:29 np0005541914.localdomain podman[101891]: 2025-12-02 09:00:29.087962819 +0000 UTC m=+0.087297130 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, managed_by=tripleo_ansible, release=1761123044, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 09:00:29 np0005541914.localdomain podman[101891]: unhealthy
Dec 02 09:00:29 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:00:29 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:00:29 np0005541914.localdomain systemd[1]: tmp-crun.cBNQsg.mount: Deactivated successfully.
Dec 02 09:00:29 np0005541914.localdomain podman[101892]: 2025-12-02 09:00:29.17689737 +0000 UTC m=+0.173794877 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, batch=17.1_20251118.1)
Dec 02 09:00:29 np0005541914.localdomain podman[101890]: 2025-12-02 09:00:29.151546694 +0000 UTC m=+0.153605989 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, build-date=2025-11-19T00:14:25Z)
Dec 02 09:00:29 np0005541914.localdomain podman[101890]: 2025-12-02 09:00:29.231913012 +0000 UTC m=+0.233972317 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, distribution-scope=public, tcib_managed=true, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1761123044)
Dec 02 09:00:29 np0005541914.localdomain podman[101890]: unhealthy
Dec 02 09:00:29 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:00:29 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:00:29 np0005541914.localdomain podman[101892]: 2025-12-02 09:00:29.584852396 +0000 UTC m=+0.581749863 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, release=1761123044, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 09:00:29 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:00:34 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 09:00:35 np0005541914.localdomain recover_tripleo_nova_virtqemud[101952]: 61907
Dec 02 09:00:35 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 09:00:35 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 09:00:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:00:36 np0005541914.localdomain podman[101953]: 2025-12-02 09:00:36.082204 +0000 UTC m=+0.084059082 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.12, com.redhat.component=openstack-collectd-container, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:00:36 np0005541914.localdomain podman[101953]: 2025-12-02 09:00:36.096988532 +0000 UTC m=+0.098843614 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-collectd-container, container_name=collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-18T22:51:28Z, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 09:00:36 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
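The three lines above are one complete health-probe cycle for the collectd container, and the same pattern repeats for every container below: systemd starts a transient <container-id>.service that runs /usr/bin/podman healthcheck run, podman emits a health_status event (carrying the container's image labels and config_data), then an exec_died event for the probe process, and systemd deactivates the unit when the probe exits 0 (healthy). A minimal sketch for checking the same state interactively, assuming shell access on the node (container names are taken from the log):

  # Health shows up in the STATUS column as (healthy)/(unhealthy)
  sudo podman ps --format '{{.Names}} {{.Status}}'
  # Re-run a single probe by hand; the exit code mirrors the transient unit (0 = healthy)
  sudo podman healthcheck run collectd; echo "exit=$?"
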
Dec 02 09:00:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:00:39 np0005541914.localdomain podman[101973]: 2025-12-02 09:00:39.087019827 +0000 UTC m=+0.084247017 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step3, batch=17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vcs-type=git, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.12)
Dec 02 09:00:39 np0005541914.localdomain podman[101973]: 2025-12-02 09:00:39.103153691 +0000 UTC m=+0.100380841 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, name=rhosp17/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 09:00:39 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:00:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:00:51 np0005541914.localdomain podman[101992]: 2025-12-02 09:00:51.07409633 +0000 UTC m=+0.080330447 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, distribution-scope=public, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z)
Dec 02 09:00:51 np0005541914.localdomain podman[101992]: 2025-12-02 09:00:51.292779468 +0000 UTC m=+0.299013565 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T22:49:46Z, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, name=rhosp17/openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 09:00:51 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:00:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:00:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:00:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:00:55 np0005541914.localdomain podman[102023]: 2025-12-02 09:00:55.07100103 +0000 UTC m=+0.065942147 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1761123044, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Dec 02 09:00:55 np0005541914.localdomain podman[102023]: 2025-12-02 09:00:55.124778725 +0000 UTC m=+0.119719812 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, version=17.1.12, io.openshift.expose-services=, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Dec 02 09:00:55 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 09:00:55 np0005541914.localdomain podman[102022]: 2025-12-02 09:00:55.141762464 +0000 UTC m=+0.141082345 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, tcib_managed=true, batch=17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, url=https://www.redhat.com, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, version=17.1.12)
Dec 02 09:00:55 np0005541914.localdomain podman[102021]: 2025-12-02 09:00:55.190540816 +0000 UTC m=+0.192935721 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, name=rhosp17/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, io.openshift.expose-services=, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 09:00:55 np0005541914.localdomain podman[102021]: 2025-12-02 09:00:55.197904911 +0000 UTC m=+0.200299746 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, batch=17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp17/openstack-cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container)
Dec 02 09:00:55 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:00:55 np0005541914.localdomain podman[102022]: 2025-12-02 09:00:55.221091601 +0000 UTC m=+0.220411412 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:11:48Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, batch=17.1_20251118.1, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git)
Dec 02 09:00:55 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 09:00:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:00:59 np0005541914.localdomain podman[102093]: 2025-12-02 09:00:59.066579799 +0000 UTC m=+0.067366391 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, build-date=2025-11-19T00:36:58Z, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, distribution-scope=public, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, container_name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1)
Dec 02 09:00:59 np0005541914.localdomain podman[102093]: 2025-12-02 09:00:59.120963993 +0000 UTC m=+0.121750505 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, tcib_managed=true, distribution-scope=public, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4)
Dec 02 09:00:59 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 09:00:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:00:59 np0005541914.localdomain podman[102120]: 2025-12-02 09:00:59.228707798 +0000 UTC m=+0.074298033 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, architecture=x86_64, version=17.1.12, container_name=ovn_controller, batch=17.1_20251118.1, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, managed_by=tripleo_ansible)
Dec 02 09:00:59 np0005541914.localdomain podman[102120]: 2025-12-02 09:00:59.244966785 +0000 UTC m=+0.090556960 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, tcib_managed=true, architecture=x86_64, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, release=1761123044, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_id=tripleo_step4)
Dec 02 09:00:59 np0005541914.localdomain podman[102120]: unhealthy
Dec 02 09:00:59 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:00:59 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:00:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:00:59 np0005541914.localdomain podman[102141]: 2025-12-02 09:00:59.35106214 +0000 UTC m=+0.068281859 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, batch=17.1_20251118.1, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, architecture=x86_64, url=https://www.redhat.com, vcs-type=git)
Dec 02 09:00:59 np0005541914.localdomain podman[102141]: 2025-12-02 09:00:59.36904796 +0000 UTC m=+0.086267719 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 09:00:59 np0005541914.localdomain podman[102141]: unhealthy
Dec 02 09:00:59 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:00:59 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
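Two probes in this pass came back unhealthy: ovn_controller and ovn_metadata_agent. In both cases podman prints "unhealthy" and exits 1, so systemd records the transient healthcheck unit as failed (status=1/FAILURE). The probe commands are the ones listed in each container's config_data above ('/openstack/healthcheck 6642' for ovn_controller, '/openstack/healthcheck' for ovn_metadata_agent). A hedged sketch for re-running those probes and dumping the recorded health state by hand (probe paths and the port are copied from the log; piping through jq is an assumption about available tooling):

  sudo podman exec ovn_controller /openstack/healthcheck 6642; echo "ovn_controller exit=$?"
  sudo podman exec ovn_metadata_agent /openstack/healthcheck; echo "ovn_metadata_agent exit=$?"
  # Full container state, including the health status and recent probe results
  sudo podman inspect ovn_controller --format '{{json .State}}' | jq .
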
Dec 02 09:00:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:01:00 np0005541914.localdomain podman[102160]: 2025-12-02 09:01:00.080148058 +0000 UTC m=+0.087199488 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, name=rhosp17/openstack-nova-compute, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 09:01:00 np0005541914.localdomain podman[102160]: 2025-12-02 09:01:00.468037002 +0000 UTC m=+0.475088362 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 09:01:00 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:01:01 np0005541914.localdomain CROND[102184]: (root) CMD (run-parts /etc/cron.hourly)
Dec 02 09:01:01 np0005541914.localdomain run-parts[102187]: (/etc/cron.hourly) starting 0anacron
Dec 02 09:01:01 np0005541914.localdomain anacron[102195]: Anacron started on 2025-12-02
Dec 02 09:01:01 np0005541914.localdomain anacron[102195]: Will run job `cron.daily' in 13 min.
Dec 02 09:01:01 np0005541914.localdomain anacron[102195]: Will run job `cron.weekly' in 33 min.
Dec 02 09:01:01 np0005541914.localdomain anacron[102195]: Will run job `cron.monthly' in 53 min.
Dec 02 09:01:01 np0005541914.localdomain anacron[102195]: Jobs will be executed sequentially
Dec 02 09:01:01 np0005541914.localdomain run-parts[102197]: (/etc/cron.hourly) finished 0anacron
Dec 02 09:01:01 np0005541914.localdomain CROND[102183]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 02 09:01:01 np0005541914.localdomain CROND[102199]: (root) CMD (run-parts /etc/cron.hourly)
Dec 02 09:01:01 np0005541914.localdomain run-parts[102202]: (/etc/cron.hourly) starting 0anacron
Dec 02 09:01:01 np0005541914.localdomain run-parts[102208]: (/etc/cron.hourly) finished 0anacron
Dec 02 09:01:01 np0005541914.localdomain CROND[102198]: (root) CMDEND (run-parts /etc/cron.hourly)
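Both cron.hourly passes above only run the 0anacron part, which hands the daily/weekly/monthly jobs to anacron with a staggered start. The logged 13/33/53-minute offsets are consistent with the stock RHEL anacrontab delays of 5/25/45 minutes plus a single per-run RANDOM_DELAY draw (here about 8 minutes) applied to every job; treat that as an inference, since the anacrontab itself is not in the log. A small sketch for confirming the configured delays on the host, assuming the default file locations:

  # Show the active delay / RANDOM_DELAY / START_HOURS_RANGE settings
  grep -vE '^#|^$' /etc/anacrontab
  # Timestamps of the last completed daily/weekly/monthly runs
  ls -l /var/spool/anacron/
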
Dec 02 09:01:03 np0005541914.localdomain sudo[102209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:01:03 np0005541914.localdomain sudo[102209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:01:03 np0005541914.localdomain sudo[102209]: pam_unix(sudo:session): session closed for user root
Dec 02 09:01:03 np0005541914.localdomain sudo[102224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:01:03 np0005541914.localdomain sudo[102224]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:01:04 np0005541914.localdomain podman[102310]: 2025-12-02 09:01:04.230392408 +0000 UTC m=+0.098482793 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_BRANCH=main, release=1763362218, io.buildah.version=1.41.4, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:01:04 np0005541914.localdomain podman[102310]: 2025-12-02 09:01:04.338748821 +0000 UTC m=+0.206839236 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.41.4, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:01:04 np0005541914.localdomain sudo[102224]: pam_unix(sudo:session): session closed for user root
Dec 02 09:01:04 np0005541914.localdomain sudo[102378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:01:04 np0005541914.localdomain sudo[102378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:01:04 np0005541914.localdomain sudo[102378]: pam_unix(sudo:session): session closed for user root
Dec 02 09:01:04 np0005541914.localdomain sudo[102393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:01:04 np0005541914.localdomain sudo[102393]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:01:05 np0005541914.localdomain sudo[102393]: pam_unix(sudo:session): session closed for user root
Dec 02 09:01:06 np0005541914.localdomain sudo[102439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:01:06 np0005541914.localdomain sudo[102439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:01:06 np0005541914.localdomain sudo[102439]: pam_unix(sudo:session): session closed for user root
Dec 02 09:01:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:01:07 np0005541914.localdomain systemd[1]: tmp-crun.mVBnf1.mount: Deactivated successfully.
Dec 02 09:01:07 np0005541914.localdomain podman[102454]: 2025-12-02 09:01:07.07740938 +0000 UTC m=+0.081327509 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:51:28Z, release=1761123044, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, tcib_managed=true, architecture=x86_64, name=rhosp17/openstack-collectd, version=17.1.12, config_id=tripleo_step3, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 09:01:07 np0005541914.localdomain podman[102454]: 2025-12-02 09:01:07.087562361 +0000 UTC m=+0.091480540 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, container_name=collectd, version=17.1.12, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 09:01:07 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 09:01:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:01:10 np0005541914.localdomain podman[102472]: 2025-12-02 09:01:10.073744869 +0000 UTC m=+0.078492661 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, batch=17.1_20251118.1, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.)
Dec 02 09:01:10 np0005541914.localdomain podman[102472]: 2025-12-02 09:01:10.112086651 +0000 UTC m=+0.116834453 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, release=1761123044, container_name=iscsid, tcib_managed=true, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, architecture=x86_64, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 09:01:10 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:01:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:01:22 np0005541914.localdomain podman[102492]: 2025-12-02 09:01:22.104130243 +0000 UTC m=+0.111220363 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, batch=17.1_20251118.1, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 09:01:22 np0005541914.localdomain podman[102492]: 2025-12-02 09:01:22.298434226 +0000 UTC m=+0.305524426 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, release=1761123044, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 09:01:22 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:01:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:01:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:01:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:01:26 np0005541914.localdomain systemd[1]: tmp-crun.SwLYbQ.mount: Deactivated successfully.
Dec 02 09:01:26 np0005541914.localdomain podman[102522]: 2025-12-02 09:01:26.063863888 +0000 UTC m=+0.066202545 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, container_name=logrotate_crond, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git)
Dec 02 09:01:26 np0005541914.localdomain podman[102523]: 2025-12-02 09:01:26.133320592 +0000 UTC m=+0.133572205 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20251118.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true)
Dec 02 09:01:26 np0005541914.localdomain podman[102522]: 2025-12-02 09:01:26.149978912 +0000 UTC m=+0.152317519 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc.)
Dec 02 09:01:26 np0005541914.localdomain podman[102524]: 2025-12-02 09:01:26.097950361 +0000 UTC m=+0.092285643 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, url=https://www.redhat.com, build-date=2025-11-19T00:12:45Z, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 09:01:26 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:01:26 np0005541914.localdomain podman[102523]: 2025-12-02 09:01:26.166530498 +0000 UTC m=+0.166782091 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, build-date=2025-11-19T00:11:48Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 09:01:26 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 09:01:26 np0005541914.localdomain podman[102524]: 2025-12-02 09:01:26.233033402 +0000 UTC m=+0.227368694 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, build-date=2025-11-19T00:12:45Z, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 09:01:26 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 09:01:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:01:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:01:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:01:30 np0005541914.localdomain podman[102591]: 2025-12-02 09:01:30.080322829 +0000 UTC m=+0.085916368 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=nova_compute)
Dec 02 09:01:30 np0005541914.localdomain podman[102591]: 2025-12-02 09:01:30.116560557 +0000 UTC m=+0.122154056 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20251118.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5)
Dec 02 09:01:30 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 09:01:30 np0005541914.localdomain podman[102592]: 2025-12-02 09:01:30.130871455 +0000 UTC m=+0.130560064 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, architecture=x86_64, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, vcs-type=git, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 09:01:30 np0005541914.localdomain podman[102592]: 2025-12-02 09:01:30.141040356 +0000 UTC m=+0.140728945 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1761123044, container_name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 02 09:01:30 np0005541914.localdomain podman[102592]: unhealthy
Dec 02 09:01:30 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:01:30 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:01:30 np0005541914.localdomain podman[102593]: 2025-12-02 09:01:30.190129747 +0000 UTC m=+0.188788084 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.12, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20251118.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 09:01:30 np0005541914.localdomain podman[102593]: 2025-12-02 09:01:30.203353032 +0000 UTC m=+0.202011309 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, name=rhosp17/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 02 09:01:30 np0005541914.localdomain podman[102593]: unhealthy
Dec 02 09:01:30 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:01:30 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:01:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:01:31 np0005541914.localdomain systemd[1]: tmp-crun.moMLLC.mount: Deactivated successfully.
Dec 02 09:01:31 np0005541914.localdomain systemd[1]: tmp-crun.wfXBPV.mount: Deactivated successfully.
Dec 02 09:01:31 np0005541914.localdomain podman[102657]: 2025-12-02 09:01:31.092275858 +0000 UTC m=+0.091418146 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_migration_target, version=17.1.12, batch=17.1_20251118.1, vcs-type=git, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Dec 02 09:01:31 np0005541914.localdomain podman[102657]: 2025-12-02 09:01:31.431476983 +0000 UTC m=+0.430619301 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, build-date=2025-11-19T00:36:58Z, config_id=tripleo_step4, io.openshift.expose-services=, container_name=nova_migration_target, tcib_managed=true, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Dec 02 09:01:31 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:01:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:01:38 np0005541914.localdomain systemd[1]: tmp-crun.NdebTW.mount: Deactivated successfully.
Dec 02 09:01:38 np0005541914.localdomain podman[102680]: 2025-12-02 09:01:38.089018947 +0000 UTC m=+0.090897640 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2025-11-18T22:51:28Z, architecture=x86_64, io.buildah.version=1.41.4, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-type=git, release=1761123044, com.redhat.component=openstack-collectd-container, version=17.1.12, container_name=collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 09:01:38 np0005541914.localdomain podman[102680]: 2025-12-02 09:01:38.12314483 +0000 UTC m=+0.125023593 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64, io.openshift.expose-services=, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, batch=17.1_20251118.1, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 02 09:01:38 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 09:01:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:01:41 np0005541914.localdomain podman[102701]: 2025-12-02 09:01:41.059738852 +0000 UTC m=+0.064825793 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, tcib_managed=true, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:01:41 np0005541914.localdomain podman[102701]: 2025-12-02 09:01:41.094729863 +0000 UTC m=+0.099816794 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid)
Dec 02 09:01:41 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:01:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:01:53 np0005541914.localdomain podman[102721]: 2025-12-02 09:01:53.081467243 +0000 UTC m=+0.080921576 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4)
Dec 02 09:01:53 np0005541914.localdomain podman[102721]: 2025-12-02 09:01:53.274621121 +0000 UTC m=+0.274075444 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64)
Dec 02 09:01:53 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:01:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:01:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:01:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:01:57 np0005541914.localdomain podman[102751]: 2025-12-02 09:01:57.079070367 +0000 UTC m=+0.080751641 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1761123044, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=)
Dec 02 09:01:57 np0005541914.localdomain podman[102751]: 2025-12-02 09:01:57.135190783 +0000 UTC m=+0.136872047 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, release=1761123044, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 09:01:57 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 09:01:57 np0005541914.localdomain podman[102752]: 2025-12-02 09:01:57.190707671 +0000 UTC m=+0.187764373 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true)
Dec 02 09:01:57 np0005541914.localdomain podman[102752]: 2025-12-02 09:01:57.220870773 +0000 UTC m=+0.217927515 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, release=1761123044, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 09:01:57 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 09:01:57 np0005541914.localdomain podman[102750]: 2025-12-02 09:01:57.142205667 +0000 UTC m=+0.146366147 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:32Z, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Dec 02 09:01:57 np0005541914.localdomain podman[102750]: 2025-12-02 09:01:57.276129813 +0000 UTC m=+0.280290293 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, container_name=logrotate_crond, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, build-date=2025-11-18T22:49:32Z, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 02 09:01:57 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:02:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:02:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:02:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:02:01 np0005541914.localdomain podman[102824]: 2025-12-02 09:02:01.082187597 +0000 UTC m=+0.076313615 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, release=1761123044, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Dec 02 09:02:01 np0005541914.localdomain podman[102824]: 2025-12-02 09:02:01.097836105 +0000 UTC m=+0.091962033 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 09:02:01 np0005541914.localdomain podman[102824]: unhealthy
Dec 02 09:02:01 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:02:01 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:02:01 np0005541914.localdomain systemd[1]: tmp-crun.hxGpi2.mount: Deactivated successfully.
Dec 02 09:02:01 np0005541914.localdomain podman[102823]: 2025-12-02 09:02:01.135930631 +0000 UTC m=+0.132081381 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 09:02:01 np0005541914.localdomain podman[102823]: 2025-12-02 09:02:01.147787033 +0000 UTC m=+0.143937813 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.12, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible)
Dec 02 09:02:01 np0005541914.localdomain podman[102823]: unhealthy
Dec 02 09:02:01 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:02:01 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:02:01 np0005541914.localdomain podman[102822]: 2025-12-02 09:02:01.185551779 +0000 UTC m=+0.186728313 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, config_id=tripleo_step5, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.12, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1761123044, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 09:02:01 np0005541914.localdomain podman[102822]: 2025-12-02 09:02:01.238732435 +0000 UTC m=+0.239908969 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.4, release=1761123044, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, distribution-scope=public, config_id=tripleo_step5, vendor=Red Hat, Inc., container_name=nova_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 09:02:01 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 09:02:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:02:02 np0005541914.localdomain podman[102890]: 2025-12-02 09:02:02.065489931 +0000 UTC m=+0.074066277 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, version=17.1.12, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true)
Dec 02 09:02:02 np0005541914.localdomain podman[102890]: 2025-12-02 09:02:02.416252238 +0000 UTC m=+0.424828574 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_migration_target, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64)
Dec 02 09:02:02 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:02:05 np0005541914.localdomain sshd[102913]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:02:05 np0005541914.localdomain sshd[102913]: Invalid user eth from 45.148.10.240 port 49792
Dec 02 09:02:05 np0005541914.localdomain sshd[102913]: Connection closed by invalid user eth 45.148.10.240 port 49792 [preauth]
Dec 02 09:02:06 np0005541914.localdomain sudo[102915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:02:06 np0005541914.localdomain sudo[102915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:02:06 np0005541914.localdomain sudo[102915]: pam_unix(sudo:session): session closed for user root
Dec 02 09:02:06 np0005541914.localdomain sudo[102930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:02:06 np0005541914.localdomain sudo[102930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:02:06 np0005541914.localdomain sudo[102930]: pam_unix(sudo:session): session closed for user root
Dec 02 09:02:07 np0005541914.localdomain sudo[102976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:02:07 np0005541914.localdomain sudo[102976]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:02:07 np0005541914.localdomain sudo[102976]: pam_unix(sudo:session): session closed for user root
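The ceph-admin sudo burst above is the cephadm SSH orchestrator doing one of its periodic host refreshes: it locates python3 with /bin/which, runs the per-cluster copy of cephadm under /var/lib/ceph/<fsid>/ with "gather-facts" and a timeout, and then lists /etc/sysctl.d as part of its configuration checks. A minimal sketch of the same fact collection, assuming a cephadm binary on PATH rather than the copied per-cluster file, in Python:

    # Sketch only; assumes "cephadm" is installed on PATH and sudo is allowed.
    # The orchestrator run logged above uses the per-cluster copy instead.
    import json, subprocess

    out = subprocess.run(["sudo", "cephadm", "gather-facts"],
                         check=True, capture_output=True, text=True).stdout
    facts = json.loads(out)
    # gather-facts prints a JSON document of host facts; exactly which keys
    # are present depends on the cephadm version, hence the .get() lookups.
    print(facts.get("hostname"), facts.get("kernel"), facts.get("memory_total_kb"))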
Dec 02 09:02:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:02:09 np0005541914.localdomain podman[102991]: 2025-12-02 09:02:09.065827347 +0000 UTC m=+0.074292023 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4, managed_by=tripleo_ansible, container_name=collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.12, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:02:09 np0005541914.localdomain podman[102991]: 2025-12-02 09:02:09.102964283 +0000 UTC m=+0.111428959 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044)
Dec 02 09:02:09 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
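The recurring four-line pattern above (systemd starts a transient "/usr/bin/podman healthcheck run <container-id>" unit, podman records a health_status event and then exec_died for the check process, and the unit deactivates) is how these TripleO-managed containers are health checked; the command being run is the 'healthcheck' entry inside each container's config_data. A minimal sketch of reading the same status and test command back out of podman, assuming podman is on PATH and noting that the inspect key is "Health" on current podman but "Healthcheck" on older releases:

    # Sketch under the assumptions named above; not the TripleO tooling itself.
    import json, subprocess

    def healthcheck_info(container: str):
        raw = subprocess.run(["podman", "inspect", container],
                             check=True, capture_output=True, text=True).stdout
        data = json.loads(raw)[0]
        # Current health state: key name differs between podman versions.
        state = data["State"].get("Health") or data["State"].get("Healthcheck") or {}
        # Configured test command, e.g. the /openstack/healthcheck script above.
        test = (data.get("Config", {}).get("Healthcheck") or {}).get("Test", [])
        return state.get("Status", "none"), test

    print(healthcheck_info("collectd"))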
Dec 02 09:02:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:02:12 np0005541914.localdomain podman[103012]: 2025-12-02 09:02:12.08225528 +0000 UTC m=+0.081702459 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, version=17.1.12, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-iscsid, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:44:13Z, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid)
Dec 02 09:02:12 np0005541914.localdomain podman[103012]: 2025-12-02 09:02:12.11950839 +0000 UTC m=+0.118955539 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp17/openstack-iscsid, container_name=iscsid, release=1761123044, distribution-scope=public, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.12, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64)
Dec 02 09:02:12 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:02:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:02:24 np0005541914.localdomain podman[103031]: 2025-12-02 09:02:24.075718716 +0000 UTC m=+0.083284228 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20251118.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, config_id=tripleo_step1)
Dec 02 09:02:24 np0005541914.localdomain podman[103031]: 2025-12-02 09:02:24.246070145 +0000 UTC m=+0.253635627 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, version=17.1.12)
Dec 02 09:02:24 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:02:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:02:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:02:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:02:28 np0005541914.localdomain systemd[1]: tmp-crun.7rddvy.mount: Deactivated successfully.
Dec 02 09:02:28 np0005541914.localdomain podman[103063]: 2025-12-02 09:02:28.06263413 +0000 UTC m=+0.060692167 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044)
Dec 02 09:02:28 np0005541914.localdomain podman[103062]: 2025-12-02 09:02:28.075620927 +0000 UTC m=+0.073018914 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:02:28 np0005541914.localdomain podman[103063]: 2025-12-02 09:02:28.084438757 +0000 UTC m=+0.082496744 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, tcib_managed=true, version=17.1.12, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 09:02:28 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 09:02:28 np0005541914.localdomain podman[103062]: 2025-12-02 09:02:28.100711284 +0000 UTC m=+0.098109281 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, build-date=2025-11-19T00:11:48Z, release=1761123044, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible)
Dec 02 09:02:28 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 09:02:28 np0005541914.localdomain podman[103061]: 2025-12-02 09:02:28.180637619 +0000 UTC m=+0.182082740 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond)
Dec 02 09:02:28 np0005541914.localdomain podman[103061]: 2025-12-02 09:02:28.213647249 +0000 UTC m=+0.215092360 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, distribution-scope=public, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 09:02:28 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:02:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:02:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:02:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:02:32 np0005541914.localdomain podman[103136]: 2025-12-02 09:02:32.094051796 +0000 UTC m=+0.088974932 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.12, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:02:32 np0005541914.localdomain podman[103136]: 2025-12-02 09:02:32.135906616 +0000 UTC m=+0.130829742 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4)
Dec 02 09:02:32 np0005541914.localdomain podman[103135]: 2025-12-02 09:02:32.139135975 +0000 UTC m=+0.137947740 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step5, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Dec 02 09:02:32 np0005541914.localdomain podman[103136]: unhealthy
Dec 02 09:02:32 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:02:32 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:02:32 np0005541914.localdomain podman[103137]: 2025-12-02 09:02:32.187156313 +0000 UTC m=+0.178618354 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 09:02:32 np0005541914.localdomain podman[103137]: 2025-12-02 09:02:32.22855885 +0000 UTC m=+0.220020811 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, batch=17.1_20251118.1, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, name=rhosp17/openstack-ovn-controller, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, tcib_managed=true, architecture=x86_64, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container)
Dec 02 09:02:32 np0005541914.localdomain podman[103137]: unhealthy
Dec 02 09:02:32 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:02:32 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
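Here the ovn_metadata_agent and ovn_controller checks come back unhealthy: podman prints "unhealthy" and exits non-zero, so the corresponding transient healthcheck units fail with status=1/FAILURE while the containers themselves keep running. A small triage sketch, assuming the "health" filter of podman ps and "podman healthcheck run" behave as on current podman; the container names are taken from the log lines above:

    # Sketch only; re-runs the same checks the transient units run.
    import subprocess

    # List containers whose most recent healthcheck reported unhealthy.
    subprocess.run(["podman", "ps", "--filter", "health=unhealthy",
                    "--format", "{{.Names}} {{.Status}}"], check=True)

    # Re-run a check by hand; exit code 0 means healthy, non-zero unhealthy,
    # which is what makes the transient systemd unit fail above.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else "unhealthy")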
Dec 02 09:02:32 np0005541914.localdomain podman[103135]: 2025-12-02 09:02:32.271238925 +0000 UTC m=+0.270050750 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, name=rhosp17/openstack-nova-compute, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, container_name=nova_compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Dec 02 09:02:32 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 09:02:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:02:32 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 09:02:33 np0005541914.localdomain recover_tripleo_nova_virtqemud[103203]: 61907
Dec 02 09:02:33 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 09:02:33 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 09:02:33 np0005541914.localdomain systemd[1]: tmp-crun.lS1dOn.mount: Deactivated successfully.
Dec 02 09:02:33 np0005541914.localdomain podman[103195]: 2025-12-02 09:02:33.079268297 +0000 UTC m=+0.084872227 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, name=rhosp17/openstack-nova-compute)
Dec 02 09:02:33 np0005541914.localdomain podman[103195]: 2025-12-02 09:02:33.443779446 +0000 UTC m=+0.449383316 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, config_id=tripleo_step4, vcs-type=git, container_name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true)
Dec 02 09:02:33 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:02:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:02:40 np0005541914.localdomain podman[103220]: 2025-12-02 09:02:40.082922046 +0000 UTC m=+0.085514137 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., container_name=collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1761123044, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-collectd-container, build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp17/openstack-collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 09:02:40 np0005541914.localdomain podman[103220]: 2025-12-02 09:02:40.095848951 +0000 UTC m=+0.098441032 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, release=1761123044, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, tcib_managed=true, vcs-type=git)
Dec 02 09:02:40 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 09:02:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:02:43 np0005541914.localdomain podman[103241]: 2025-12-02 09:02:43.071804117 +0000 UTC m=+0.077319736 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vcs-type=git, com.redhat.component=openstack-iscsid-container, container_name=iscsid, tcib_managed=true, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, release=1761123044, version=17.1.12, io.buildah.version=1.41.4, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team)
Dec 02 09:02:43 np0005541914.localdomain podman[103241]: 2025-12-02 09:02:43.078910024 +0000 UTC m=+0.084425703 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, container_name=iscsid, build-date=2025-11-18T23:44:13Z, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 09:02:43 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:02:49 np0005541914.localdomain sshd[103260]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:02:50 np0005541914.localdomain sshd[103260]: Invalid user solana from 80.94.92.182 port 55208
Dec 02 09:02:50 np0005541914.localdomain sshd[103260]: Connection closed by invalid user solana 80.94.92.182 port 55208 [preauth]
Dec 02 09:02:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:02:55 np0005541914.localdomain podman[103262]: 2025-12-02 09:02:55.080534327 +0000 UTC m=+0.084724802 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step1, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.4, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:02:55 np0005541914.localdomain podman[103262]: 2025-12-02 09:02:55.297996098 +0000 UTC m=+0.302186573 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.4, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, container_name=metrics_qdr, architecture=x86_64, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044)
Dec 02 09:02:55 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:02:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:02:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:02:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:02:59 np0005541914.localdomain podman[103294]: 2025-12-02 09:02:59.073207148 +0000 UTC m=+0.071715364 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12)
Dec 02 09:02:59 np0005541914.localdomain podman[103294]: 2025-12-02 09:02:59.12331306 +0000 UTC m=+0.121821496 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, vcs-type=git, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-compute, release=1761123044)
Dec 02 09:02:59 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 09:02:59 np0005541914.localdomain systemd[1]: tmp-crun.E8qxJQ.mount: Deactivated successfully.
Dec 02 09:02:59 np0005541914.localdomain podman[103295]: 2025-12-02 09:02:59.134418749 +0000 UTC m=+0.128199901 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, build-date=2025-11-19T00:12:45Z, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 09:02:59 np0005541914.localdomain podman[103293]: 2025-12-02 09:02:59.190528125 +0000 UTC m=+0.188773844 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.12, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, distribution-scope=public)
Dec 02 09:02:59 np0005541914.localdomain podman[103293]: 2025-12-02 09:02:59.197876271 +0000 UTC m=+0.196122010 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, container_name=logrotate_crond, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-cron-container)
Dec 02 09:02:59 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:02:59 np0005541914.localdomain podman[103295]: 2025-12-02 09:02:59.219865133 +0000 UTC m=+0.213646355 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, distribution-scope=public, release=1761123044, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp17/openstack-ceilometer-ipmi, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 09:02:59 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 09:03:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:03:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:03:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:03:03 np0005541914.localdomain systemd[1]: tmp-crun.02fzzy.mount: Deactivated successfully.
Dec 02 09:03:03 np0005541914.localdomain podman[103367]: 2025-12-02 09:03:03.083667192 +0000 UTC m=+0.089227230 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, name=rhosp17/openstack-nova-compute, version=17.1.12, architecture=x86_64, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_id=tripleo_step5, url=https://www.redhat.com, container_name=nova_compute, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1)
Dec 02 09:03:03 np0005541914.localdomain podman[103368]: 2025-12-02 09:03:03.089936714 +0000 UTC m=+0.088368903 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.4, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 02 09:03:03 np0005541914.localdomain podman[103368]: 2025-12-02 09:03:03.111307128 +0000 UTC m=+0.109739267 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Dec 02 09:03:03 np0005541914.localdomain podman[103368]: unhealthy
Dec 02 09:03:03 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:03:03 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:03:03 np0005541914.localdomain podman[103367]: 2025-12-02 09:03:03.139190811 +0000 UTC m=+0.144750769 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true)
Dec 02 09:03:03 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 09:03:03 np0005541914.localdomain podman[103369]: 2025-12-02 09:03:03.199510145 +0000 UTC m=+0.194680545 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:34:05Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4)
Dec 02 09:03:03 np0005541914.localdomain podman[103369]: 2025-12-02 09:03:03.249011789 +0000 UTC m=+0.244182209 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.12)
Dec 02 09:03:03 np0005541914.localdomain podman[103369]: unhealthy
Dec 02 09:03:03 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:03:03 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:03:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:03:04 np0005541914.localdomain podman[103433]: 2025-12-02 09:03:04.056164234 +0000 UTC m=+0.068481325 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-nova-compute)
Dec 02 09:03:04 np0005541914.localdomain systemd[1]: tmp-crun.PHYaxV.mount: Deactivated successfully.
Dec 02 09:03:04 np0005541914.localdomain podman[103433]: 2025-12-02 09:03:04.422378815 +0000 UTC m=+0.434695886 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, container_name=nova_migration_target)
Dec 02 09:03:04 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:03:07 np0005541914.localdomain sudo[103456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:03:07 np0005541914.localdomain sudo[103456]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:03:07 np0005541914.localdomain sudo[103456]: pam_unix(sudo:session): session closed for user root
Dec 02 09:03:07 np0005541914.localdomain sudo[103471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:03:07 np0005541914.localdomain sudo[103471]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:03:08 np0005541914.localdomain sudo[103471]: pam_unix(sudo:session): session closed for user root
Dec 02 09:03:09 np0005541914.localdomain sudo[103518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:03:09 np0005541914.localdomain sudo[103518]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:03:09 np0005541914.localdomain sudo[103518]: pam_unix(sudo:session): session closed for user root
Dec 02 09:03:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:03:11 np0005541914.localdomain podman[103533]: 2025-12-02 09:03:11.095904456 +0000 UTC m=+0.092918643 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., io.buildah.version=1.41.4, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, release=1761123044, container_name=collectd, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 09:03:11 np0005541914.localdomain podman[103533]: 2025-12-02 09:03:11.132603448 +0000 UTC m=+0.129617615 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, distribution-scope=public, container_name=collectd, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-collectd, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.4)
Dec 02 09:03:11 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 09:03:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:03:14 np0005541914.localdomain podman[103553]: 2025-12-02 09:03:14.093167524 +0000 UTC m=+0.090093517 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, com.redhat.component=openstack-iscsid-container, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-iscsid, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, container_name=iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, release=1761123044)
Dec 02 09:03:14 np0005541914.localdomain podman[103553]: 2025-12-02 09:03:14.129926118 +0000 UTC m=+0.126852131 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1761123044, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec 02 09:03:14 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:03:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:03:26 np0005541914.localdomain podman[103573]: 2025-12-02 09:03:26.090332063 +0000 UTC m=+0.091932663 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, tcib_managed=true, version=17.1.12, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, release=1761123044, name=rhosp17/openstack-qdrouterd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Dec 02 09:03:26 np0005541914.localdomain podman[103573]: 2025-12-02 09:03:26.259825196 +0000 UTC m=+0.261425736 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, container_name=metrics_qdr, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp17/openstack-qdrouterd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1761123044, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 09:03:26 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:03:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:03:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:03:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:03:30 np0005541914.localdomain podman[103602]: 2025-12-02 09:03:30.090171264 +0000 UTC m=+0.092244952 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, release=1761123044, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, tcib_managed=true, version=17.1.12, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com)
Dec 02 09:03:30 np0005541914.localdomain podman[103602]: 2025-12-02 09:03:30.101511901 +0000 UTC m=+0.103585609 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., version=17.1.12, batch=17.1_20251118.1, distribution-scope=public, container_name=logrotate_crond, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2025-11-18T22:49:32Z, tcib_managed=true, config_id=tripleo_step4, name=rhosp17/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron)
Dec 02 09:03:30 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:03:30 np0005541914.localdomain podman[103603]: 2025-12-02 09:03:30.198410954 +0000 UTC m=+0.194602682 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1761123044, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, managed_by=tripleo_ansible, batch=17.1_20251118.1, vendor=Red Hat, Inc., name=rhosp17/openstack-ceilometer-compute, io.buildah.version=1.41.4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git)
Dec 02 09:03:30 np0005541914.localdomain podman[103604]: 2025-12-02 09:03:30.248771054 +0000 UTC m=+0.242213998 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, build-date=2025-11-19T00:12:45Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, version=17.1.12, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 02 09:03:30 np0005541914.localdomain podman[103603]: 2025-12-02 09:03:30.277489602 +0000 UTC m=+0.273681370 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp17/openstack-ceilometer-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, build-date=2025-11-19T00:11:48Z, version=17.1.12, vendor=Red Hat, Inc., org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 09:03:30 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 09:03:30 np0005541914.localdomain podman[103604]: 2025-12-02 09:03:30.306851061 +0000 UTC m=+0.300293985 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, managed_by=tripleo_ansible, release=1761123044, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 09:03:30 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Deactivated successfully.
Dec 02 09:03:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:03:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:03:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:03:34 np0005541914.localdomain podman[103673]: 2025-12-02 09:03:34.068832416 +0000 UTC m=+0.076786270 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, io.buildah.version=1.41.4, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 09:03:34 np0005541914.localdomain podman[103673]: 2025-12-02 09:03:34.121043893 +0000 UTC m=+0.128997777 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, version=17.1.12, container_name=nova_compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:03:34 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 09:03:34 np0005541914.localdomain podman[103674]: 2025-12-02 09:03:34.134915457 +0000 UTC m=+0.138479886 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, config_id=tripleo_step4)
Dec 02 09:03:34 np0005541914.localdomain systemd[1]: tmp-crun.74QgRZ.mount: Deactivated successfully.
Dec 02 09:03:34 np0005541914.localdomain podman[103675]: 2025-12-02 09:03:34.197636606 +0000 UTC m=+0.196352676 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Dec 02 09:03:34 np0005541914.localdomain podman[103674]: 2025-12-02 09:03:34.214369858 +0000 UTC m=+0.217934267 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, architecture=x86_64, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, url=https://www.redhat.com, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent)
Dec 02 09:03:34 np0005541914.localdomain podman[103674]: unhealthy
Dec 02 09:03:34 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:03:34 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:03:34 np0005541914.localdomain podman[103675]: 2025-12-02 09:03:34.241941841 +0000 UTC m=+0.240657911 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, release=1761123044, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, tcib_managed=true, container_name=ovn_controller)
Dec 02 09:03:34 np0005541914.localdomain podman[103675]: unhealthy
Dec 02 09:03:34 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:03:34 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:03:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:03:35 np0005541914.localdomain podman[103739]: 2025-12-02 09:03:35.051614794 +0000 UTC m=+0.062006419 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, distribution-scope=public, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Dec 02 09:03:35 np0005541914.localdomain podman[103739]: 2025-12-02 09:03:35.358928432 +0000 UTC m=+0.369320057 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, tcib_managed=true, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.12, release=1761123044, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 09:03:35 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:03:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:03:42 np0005541914.localdomain podman[103761]: 2025-12-02 09:03:42.081540745 +0000 UTC m=+0.084866116 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, release=1761123044, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 09:03:42 np0005541914.localdomain podman[103761]: 2025-12-02 09:03:42.089149898 +0000 UTC m=+0.092475229 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, release=1761123044, config_id=tripleo_step3, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:03:42 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 09:03:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:03:45 np0005541914.localdomain podman[103780]: 2025-12-02 09:03:45.085808577 +0000 UTC m=+0.088331542 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=iscsid, managed_by=tripleo_ansible, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-type=git, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, name=rhosp17/openstack-iscsid, version=17.1.12, batch=17.1_20251118.1, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid)
Dec 02 09:03:45 np0005541914.localdomain podman[103780]: 2025-12-02 09:03:45.099104704 +0000 UTC m=+0.101627669 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, config_id=tripleo_step3, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Dec 02 09:03:45 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:03:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:03:57 np0005541914.localdomain podman[103799]: 2025-12-02 09:03:57.071943528 +0000 UTC m=+0.079537113 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-qdrouterd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 09:03:57 np0005541914.localdomain podman[103799]: 2025-12-02 09:03:57.26590331 +0000 UTC m=+0.273496875 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, release=1761123044, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step1, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:46Z)
Dec 02 09:03:57 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:04:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:04:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:04:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:04:01 np0005541914.localdomain podman[103828]: 2025-12-02 09:04:01.07079194 +0000 UTC m=+0.076096719 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, version=17.1.12, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=)
Dec 02 09:04:01 np0005541914.localdomain podman[103829]: 2025-12-02 09:04:01.089506081 +0000 UTC m=+0.087496316 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, build-date=2025-11-19T00:11:48Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, version=17.1.12)
Dec 02 09:04:01 np0005541914.localdomain podman[103828]: 2025-12-02 09:04:01.104976065 +0000 UTC m=+0.110280824 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, batch=17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, release=1761123044, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, build-date=2025-11-18T22:49:32Z, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64)
Dec 02 09:04:01 np0005541914.localdomain podman[103829]: 2025-12-02 09:04:01.115792316 +0000 UTC m=+0.113782561 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, container_name=ceilometer_agent_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 09:04:01 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:04:01 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 09:04:01 np0005541914.localdomain podman[103833]: 2025-12-02 09:04:01.195041699 +0000 UTC m=+0.188276739 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi)
Dec 02 09:04:01 np0005541914.localdomain podman[103833]: 2025-12-02 09:04:01.22190053 +0000 UTC m=+0.215135630 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, release=1761123044, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2025-11-19T00:12:45Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 02 09:04:01 np0005541914.localdomain podman[103833]: unhealthy
Dec 02 09:04:01 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:04:01 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'.
Dec 02 09:04:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:04:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:04:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:04:05 np0005541914.localdomain podman[103903]: 2025-12-02 09:04:05.063710898 +0000 UTC m=+0.068986042 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.4, managed_by=tripleo_ansible, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, distribution-scope=public, tcib_managed=true)
Dec 02 09:04:05 np0005541914.localdomain podman[103902]: 2025-12-02 09:04:05.129141498 +0000 UTC m=+0.134085992 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1761123044, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.4, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc., build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, batch=17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, distribution-scope=public, architecture=x86_64, container_name=nova_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 09:04:05 np0005541914.localdomain podman[103904]: 2025-12-02 09:04:05.091160647 +0000 UTC m=+0.090761837 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1761123044, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, architecture=x86_64, name=rhosp17/openstack-ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 09:04:05 np0005541914.localdomain podman[103903]: 2025-12-02 09:04:05.150130281 +0000 UTC m=+0.155405445 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, build-date=2025-11-19T00:14:25Z, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com)
Dec 02 09:04:05 np0005541914.localdomain podman[103903]: unhealthy
Dec 02 09:04:05 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:04:05 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:04:05 np0005541914.localdomain podman[103902]: 2025-12-02 09:04:05.162606972 +0000 UTC m=+0.167551406 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, build-date=2025-11-19T00:36:58Z, version=17.1.12, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 09:04:05 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Deactivated successfully.
Dec 02 09:04:05 np0005541914.localdomain podman[103904]: 2025-12-02 09:04:05.176051793 +0000 UTC m=+0.175653013 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, version=17.1.12, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., release=1761123044, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, container_name=ovn_controller, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 09:04:05 np0005541914.localdomain podman[103904]: unhealthy
Dec 02 09:04:05 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:04:05 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:04:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:04:06 np0005541914.localdomain systemd[1]: tmp-crun.CCnxJk.mount: Deactivated successfully.
Dec 02 09:04:06 np0005541914.localdomain podman[103964]: 2025-12-02 09:04:06.067338432 +0000 UTC m=+0.075548131 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-nova-compute, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, release=1761123044, vendor=Red Hat, Inc., url=https://www.redhat.com, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 09:04:06 np0005541914.localdomain podman[103964]: 2025-12-02 09:04:06.443991162 +0000 UTC m=+0.452200781 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.12, name=rhosp17/openstack-nova-compute, release=1761123044, batch=17.1_20251118.1, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true)
Dec 02 09:04:06 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:04:09 np0005541914.localdomain sudo[103986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:04:09 np0005541914.localdomain sudo[103986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:04:09 np0005541914.localdomain sudo[103986]: pam_unix(sudo:session): session closed for user root
Dec 02 09:04:09 np0005541914.localdomain sudo[104001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:04:09 np0005541914.localdomain sudo[104001]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:04:09 np0005541914.localdomain sudo[104001]: pam_unix(sudo:session): session closed for user root
Dec 02 09:04:10 np0005541914.localdomain sudo[104047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:04:10 np0005541914.localdomain sudo[104047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:04:10 np0005541914.localdomain sudo[104047]: pam_unix(sudo:session): session closed for user root
Dec 02 09:04:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:04:13 np0005541914.localdomain systemd[1]: tmp-crun.2cf6hE.mount: Deactivated successfully.
Dec 02 09:04:13 np0005541914.localdomain podman[104062]: 2025-12-02 09:04:13.095367256 +0000 UTC m=+0.099520846 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T22:51:28Z, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, batch=17.1_20251118.1, container_name=collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, name=rhosp17/openstack-collectd)
Dec 02 09:04:13 np0005541914.localdomain podman[104062]: 2025-12-02 09:04:13.106186197 +0000 UTC m=+0.110339807 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, vcs-type=git, container_name=collectd, io.openshift.expose-services=, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:04:13 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 09:04:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:04:16 np0005541914.localdomain podman[104083]: 2025-12-02 09:04:16.069080892 +0000 UTC m=+0.075904252 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, build-date=2025-11-18T23:44:13Z, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, io.buildah.version=1.41.4)
Dec 02 09:04:16 np0005541914.localdomain podman[104083]: 2025-12-02 09:04:16.082784001 +0000 UTC m=+0.089607361 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, container_name=iscsid, url=https://www.redhat.com, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, distribution-scope=public, name=rhosp17/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, build-date=2025-11-18T23:44:13Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container)
Dec 02 09:04:16 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:04:20 np0005541914.localdomain sshd[104102]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:04:21 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 09:04:21 np0005541914.localdomain sshd[104102]: Invalid user solv from 45.148.10.240 port 38350
Dec 02 09:04:21 np0005541914.localdomain recover_tripleo_nova_virtqemud[104105]: 61907
Dec 02 09:04:21 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 09:04:21 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 09:04:21 np0005541914.localdomain sshd[104102]: Connection closed by invalid user solv 45.148.10.240 port 38350 [preauth]
Dec 02 09:04:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:04:28 np0005541914.localdomain podman[104106]: 2025-12-02 09:04:28.088679767 +0000 UTC m=+0.083289709 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:49:46Z, url=https://www.redhat.com, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step1, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, tcib_managed=true, io.buildah.version=1.41.4, vcs-type=git, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=)
Dec 02 09:04:28 np0005541914.localdomain podman[104106]: 2025-12-02 09:04:28.309232532 +0000 UTC m=+0.303842494 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_id=tripleo_step1, version=17.1.12, container_name=metrics_qdr, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Dec 02 09:04:28 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:04:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:04:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:04:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:04:32 np0005541914.localdomain systemd[1]: tmp-crun.QNoUyI.mount: Deactivated successfully.
Dec 02 09:04:32 np0005541914.localdomain podman[104136]: 2025-12-02 09:04:32.07387235 +0000 UTC m=+0.073450668 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.12, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 09:04:32 np0005541914.localdomain podman[104135]: 2025-12-02 09:04:32.116561035 +0000 UTC m=+0.116395041 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1)
Dec 02 09:04:32 np0005541914.localdomain podman[104136]: 2025-12-02 09:04:32.146862292 +0000 UTC m=+0.146440580 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ceilometer-compute, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, tcib_managed=true, batch=17.1_20251118.1, url=https://www.redhat.com, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute)
Dec 02 09:04:32 np0005541914.localdomain podman[104135]: 2025-12-02 09:04:32.154873516 +0000 UTC m=+0.154707522 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-cron, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-cron-container, config_id=tripleo_step4)
Dec 02 09:04:32 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 09:04:32 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:04:32 np0005541914.localdomain podman[104137]: 2025-12-02 09:04:32.232106949 +0000 UTC m=+0.227082116 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true)
Dec 02 09:04:32 np0005541914.localdomain podman[104137]: 2025-12-02 09:04:32.258839327 +0000 UTC m=+0.253814454 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., build-date=2025-11-19T00:12:45Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.4, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, name=rhosp17/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, release=1761123044)
Dec 02 09:04:32 np0005541914.localdomain podman[104137]: unhealthy
Dec 02 09:04:32 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:04:32 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'.
Dec 02 09:04:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:04:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:04:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:04:36 np0005541914.localdomain systemd[1]: tmp-crun.tFmtG1.mount: Deactivated successfully.
Dec 02 09:04:36 np0005541914.localdomain podman[104211]: 2025-12-02 09:04:36.08612571 +0000 UTC m=+0.085536758 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, batch=17.1_20251118.1, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn)
Dec 02 09:04:36 np0005541914.localdomain podman[104210]: 2025-12-02 09:04:36.065367884 +0000 UTC m=+0.069730674 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, container_name=nova_compute, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:04:36 np0005541914.localdomain podman[104212]: 2025-12-02 09:04:36.125120852 +0000 UTC m=+0.122017733 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., batch=17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.4, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1761123044, container_name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272)
Dec 02 09:04:36 np0005541914.localdomain podman[104211]: 2025-12-02 09:04:36.124973428 +0000 UTC m=+0.124384476 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1761123044, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git)
Dec 02 09:04:36 np0005541914.localdomain podman[104211]: unhealthy
Dec 02 09:04:36 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:04:36 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:04:36 np0005541914.localdomain podman[104210]: 2025-12-02 09:04:36.144096152 +0000 UTC m=+0.148458942 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, release=1761123044)
Dec 02 09:04:36 np0005541914.localdomain podman[104210]: unhealthy
Dec 02 09:04:36 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:04:36 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 09:04:36 np0005541914.localdomain podman[104212]: 2025-12-02 09:04:36.194328828 +0000 UTC m=+0.191225679 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=ovn_controller, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, distribution-scope=public)
Dec 02 09:04:36 np0005541914.localdomain podman[104212]: unhealthy
Dec 02 09:04:36 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:04:36 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:04:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:04:37 np0005541914.localdomain podman[104268]: 2025-12-02 09:04:37.068539145 +0000 UTC m=+0.076259493 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_id=tripleo_step4, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute)
Dec 02 09:04:37 np0005541914.localdomain podman[104268]: 2025-12-02 09:04:37.385609403 +0000 UTC m=+0.393329781 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 09:04:37 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:04:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:04:44 np0005541914.localdomain podman[104292]: 2025-12-02 09:04:44.081701743 +0000 UTC m=+0.084678920 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., distribution-scope=public, release=1761123044, com.redhat.component=openstack-collectd-container, version=17.1.12, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 09:04:44 np0005541914.localdomain podman[104292]: 2025-12-02 09:04:44.114897419 +0000 UTC m=+0.117874596 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1761123044, batch=17.1_20251118.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 09:04:44 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 09:04:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:04:47 np0005541914.localdomain podman[104312]: 2025-12-02 09:04:47.06030048 +0000 UTC m=+0.067116364 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, name=rhosp17/openstack-iscsid, container_name=iscsid, release=1761123044, version=17.1.12, build-date=2025-11-18T23:44:13Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vcs-type=git)
Dec 02 09:04:47 np0005541914.localdomain podman[104312]: 2025-12-02 09:04:47.094243768 +0000 UTC m=+0.101059662 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, batch=17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-18T23:44:13Z, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.12, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, distribution-scope=public, managed_by=tripleo_ansible, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:04:47 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:04:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:04:59 np0005541914.localdomain systemd[1]: tmp-crun.T98Prj.mount: Deactivated successfully.
Dec 02 09:04:59 np0005541914.localdomain podman[104332]: 2025-12-02 09:04:59.085350472 +0000 UTC m=+0.092587713 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.expose-services=, url=https://www.redhat.com, release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd)
Dec 02 09:04:59 np0005541914.localdomain podman[104332]: 2025-12-02 09:04:59.306170676 +0000 UTC m=+0.313407897 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, config_id=tripleo_step1, tcib_managed=true, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=)
Dec 02 09:04:59 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:05:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:05:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:05:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:05:03 np0005541914.localdomain podman[104361]: 2025-12-02 09:05:03.086851082 +0000 UTC m=+0.088658462 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, name=rhosp17/openstack-cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., build-date=2025-11-18T22:49:32Z)
Dec 02 09:05:03 np0005541914.localdomain podman[104361]: 2025-12-02 09:05:03.094213798 +0000 UTC m=+0.096021228 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., version=17.1.12, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-cron, url=https://www.redhat.com, io.buildah.version=1.41.4, container_name=logrotate_crond, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_id=tripleo_step4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 09:05:03 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:05:03 np0005541914.localdomain podman[104363]: 2025-12-02 09:05:03.135882603 +0000 UTC m=+0.131287337 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, vendor=Red Hat, Inc., url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, io.buildah.version=1.41.4, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 09:05:03 np0005541914.localdomain podman[104362]: 2025-12-02 09:05:03.192485153 +0000 UTC m=+0.190460766 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1761123044, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container)
Dec 02 09:05:03 np0005541914.localdomain podman[104363]: 2025-12-02 09:05:03.215061604 +0000 UTC m=+0.210466338 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 02 09:05:03 np0005541914.localdomain podman[104363]: unhealthy
Dec 02 09:05:03 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:05:03 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'.
Dec 02 09:05:03 np0005541914.localdomain podman[104362]: 2025-12-02 09:05:03.269061955 +0000 UTC m=+0.267037548 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, config_id=tripleo_step4, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.12, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, name=rhosp17/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, build-date=2025-11-19T00:11:48Z, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 09:05:03 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 09:05:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:05:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:05:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:05:07 np0005541914.localdomain systemd[1]: tmp-crun.jgFNvW.mount: Deactivated successfully.
Dec 02 09:05:07 np0005541914.localdomain podman[104433]: 2025-12-02 09:05:07.084987671 +0000 UTC m=+0.085247348 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, release=1761123044, io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, version=17.1.12, maintainer=OpenStack TripleO Team)
Dec 02 09:05:07 np0005541914.localdomain podman[104433]: 2025-12-02 09:05:07.133906478 +0000 UTC m=+0.134166125 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, build-date=2025-11-19T00:36:58Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, container_name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 09:05:07 np0005541914.localdomain podman[104433]: unhealthy
Dec 02 09:05:07 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:05:07 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 09:05:07 np0005541914.localdomain podman[104435]: 2025-12-02 09:05:07.178741899 +0000 UTC m=+0.173765955 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_controller, version=17.1.12, io.buildah.version=1.41.4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1)
Dec 02 09:05:07 np0005541914.localdomain podman[104434]: 2025-12-02 09:05:07.133724392 +0000 UTC m=+0.132569126 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, url=https://www.redhat.com, release=1761123044, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible)
Dec 02 09:05:07 np0005541914.localdomain podman[104435]: 2025-12-02 09:05:07.216858134 +0000 UTC m=+0.211882170 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.12, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.)
Dec 02 09:05:07 np0005541914.localdomain podman[104434]: 2025-12-02 09:05:07.217419142 +0000 UTC m=+0.216263896 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 02 09:05:07 np0005541914.localdomain podman[104435]: unhealthy
Dec 02 09:05:07 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:05:07 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:05:07 np0005541914.localdomain podman[104434]: unhealthy
Dec 02 09:05:07 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:05:07 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:05:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:05:07 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 09:05:08 np0005541914.localdomain recover_tripleo_nova_virtqemud[104495]: 61907
Dec 02 09:05:08 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 09:05:08 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 09:05:08 np0005541914.localdomain podman[104493]: 2025-12-02 09:05:08.05830651 +0000 UTC m=+0.065224617 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, release=1761123044, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, version=17.1.12, build-date=2025-11-19T00:36:58Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, config_id=tripleo_step4, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Dec 02 09:05:08 np0005541914.localdomain podman[104493]: 2025-12-02 09:05:08.392992425 +0000 UTC m=+0.399910572 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, version=17.1.12, distribution-scope=public, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044)
Dec 02 09:05:08 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:05:10 np0005541914.localdomain sudo[104518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:05:10 np0005541914.localdomain sudo[104518]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:05:10 np0005541914.localdomain sudo[104518]: pam_unix(sudo:session): session closed for user root
Dec 02 09:05:11 np0005541914.localdomain sudo[104533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:05:11 np0005541914.localdomain sudo[104533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:05:11 np0005541914.localdomain sudo[104533]: pam_unix(sudo:session): session closed for user root
Dec 02 09:05:12 np0005541914.localdomain sudo[104581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:05:12 np0005541914.localdomain sudo[104581]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:05:12 np0005541914.localdomain sudo[104581]: pam_unix(sudo:session): session closed for user root
Dec 02 09:05:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:05:15 np0005541914.localdomain podman[104596]: 2025-12-02 09:05:15.098377372 +0000 UTC m=+0.096369509 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, vcs-type=git, release=1761123044, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2025-11-18T22:51:28Z, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp17/openstack-collectd, vendor=Red Hat, Inc.)
Dec 02 09:05:15 np0005541914.localdomain podman[104596]: 2025-12-02 09:05:15.107900943 +0000 UTC m=+0.105893090 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.12, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, release=1761123044, architecture=x86_64, config_id=tripleo_step3, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp17/openstack-collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-18T22:51:28Z)
Dec 02 09:05:15 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 09:05:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:05:18 np0005541914.localdomain podman[104616]: 2025-12-02 09:05:18.072314866 +0000 UTC m=+0.076436918 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:44:13Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp17/openstack-iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.12, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, distribution-scope=public, tcib_managed=true)
Dec 02 09:05:18 np0005541914.localdomain podman[104616]: 2025-12-02 09:05:18.107329928 +0000 UTC m=+0.111452020 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, batch=17.1_20251118.1, version=17.1.12, url=https://www.redhat.com, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3)
Dec 02 09:05:18 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:05:19 np0005541914.localdomain sshd[104636]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:05:19 np0005541914.localdomain sshd[104636]: Accepted publickey for zuul from 192.168.122.30 port 49886 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:05:19 np0005541914.localdomain systemd-logind[760]: New session 35 of user zuul.
Dec 02 09:05:19 np0005541914.localdomain systemd[1]: Started Session 35 of User zuul.
Dec 02 09:05:19 np0005541914.localdomain sshd[104636]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:05:20 np0005541914.localdomain sudo[104729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cysrhpvcaxjitpqjrjvtanxolxqmjyns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666319.6292634-28-279310946873971/AnsiballZ_stat.py
Dec 02 09:05:20 np0005541914.localdomain sudo[104729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:05:20 np0005541914.localdomain python3.9[104731]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:05:20 np0005541914.localdomain sudo[104729]: pam_unix(sudo:session): session closed for user root
Dec 02 09:05:20 np0005541914.localdomain sudo[104823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bkztuicagxuqitebsemphkoorssrurfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666320.4965072-64-77008892948464/AnsiballZ_command.py
Dec 02 09:05:20 np0005541914.localdomain sudo[104823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:05:21 np0005541914.localdomain python3.9[104825]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:05:21 np0005541914.localdomain sudo[104823]: pam_unix(sudo:session): session closed for user root
Dec 02 09:05:21 np0005541914.localdomain sudo[104916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oikekoxywnyadjoemskafkryclsvimrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666321.4163294-89-29622503390855/AnsiballZ_stat.py
Dec 02 09:05:21 np0005541914.localdomain sudo[104916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:05:21 np0005541914.localdomain python3.9[104918]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:05:21 np0005541914.localdomain sudo[104916]: pam_unix(sudo:session): session closed for user root
Dec 02 09:05:22 np0005541914.localdomain sudo[105010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kszzowqqggvcgzcwuhutzkvzyszvmnfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666322.0573053-113-199057085882380/AnsiballZ_command.py
Dec 02 09:05:22 np0005541914.localdomain sudo[105010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:05:22 np0005541914.localdomain python3.9[105012]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:05:22 np0005541914.localdomain sudo[105010]: pam_unix(sudo:session): session closed for user root
Dec 02 09:05:22 np0005541914.localdomain sudo[105103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ksmgzibikhriajlmldxuhuduagmvcxwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666322.7352703-139-253731687950168/AnsiballZ_command.py
Dec 02 09:05:22 np0005541914.localdomain sudo[105103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:05:23 np0005541914.localdomain python3.9[105105]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:05:23 np0005541914.localdomain sudo[105103]: pam_unix(sudo:session): session closed for user root
Dec 02 09:05:23 np0005541914.localdomain python3.9[105196]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 02 09:05:25 np0005541914.localdomain python3.9[105286]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:05:26 np0005541914.localdomain python3.9[105378]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 02 09:05:27 np0005541914.localdomain python3.9[105468]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:05:28 np0005541914.localdomain python3.9[105516]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:05:28 np0005541914.localdomain sshd[104636]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:05:28 np0005541914.localdomain systemd[1]: session-35.scope: Deactivated successfully.
Dec 02 09:05:28 np0005541914.localdomain systemd[1]: session-35.scope: Consumed 4.465s CPU time.
Dec 02 09:05:28 np0005541914.localdomain systemd-logind[760]: Session 35 logged out. Waiting for processes to exit.
Dec 02 09:05:28 np0005541914.localdomain systemd-logind[760]: Removed session 35.
Dec 02 09:05:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:05:30 np0005541914.localdomain podman[105532]: 2025-12-02 09:05:30.081652216 +0000 UTC m=+0.088358673 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, release=1761123044, config_id=tripleo_step1, architecture=x86_64, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr)
Dec 02 09:05:30 np0005541914.localdomain podman[105532]: 2025-12-02 09:05:30.274801783 +0000 UTC m=+0.281508230 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, io.buildah.version=1.41.4, version=17.1.12, config_id=tripleo_step1, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, name=rhosp17/openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:05:30 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:05:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20671 DF PROTO=TCP SPT=48726 DPT=9882 SEQ=1985574544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CA8800000000001030307) 
Dec 02 09:05:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:05:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:05:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:05:34 np0005541914.localdomain podman[105562]: 2025-12-02 09:05:34.106866033 +0000 UTC m=+0.110835301 container health_status 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, release=1761123044, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.12, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Dec 02 09:05:34 np0005541914.localdomain podman[105562]: 2025-12-02 09:05:34.157778529 +0000 UTC m=+0.161747777 container exec_died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, build-date=2025-11-19T00:11:48Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, release=1761123044, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, vcs-type=git, vendor=Red Hat, Inc., version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible)
Dec 02 09:05:34 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Deactivated successfully.
Dec 02 09:05:34 np0005541914.localdomain podman[105563]: 2025-12-02 09:05:34.168099745 +0000 UTC m=+0.168790072 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, release=1761123044, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, url=https://www.redhat.com, architecture=x86_64)
Dec 02 09:05:34 np0005541914.localdomain podman[105563]: 2025-12-02 09:05:34.217529858 +0000 UTC m=+0.218220175 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1761123044, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, name=rhosp17/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, version=17.1.12)
Dec 02 09:05:34 np0005541914.localdomain podman[105563]: unhealthy
Dec 02 09:05:34 np0005541914.localdomain podman[105561]: 2025-12-02 09:05:34.223575882 +0000 UTC m=+0.226759206 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-cron, managed_by=tripleo_ansible, version=17.1.12, description=Red Hat OpenStack Platform 17.1 cron)
Dec 02 09:05:34 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:05:34 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'.
Dec 02 09:05:34 np0005541914.localdomain podman[105561]: 2025-12-02 09:05:34.233885637 +0000 UTC m=+0.237068911 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, release=1761123044, name=rhosp17/openstack-cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Dec 02 09:05:34 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:05:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20672 DF PROTO=TCP SPT=48726 DPT=9882 SEQ=1985574544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CACA30000000001030307) 
Dec 02 09:05:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20673 DF PROTO=TCP SPT=48726 DPT=9882 SEQ=1985574544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CB4A20000000001030307) 
Dec 02 09:05:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:05:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:05:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:05:38 np0005541914.localdomain podman[105637]: 2025-12-02 09:05:38.06475946 +0000 UTC m=+0.063979298 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, release=1761123044, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, container_name=ovn_controller)
Dec 02 09:05:38 np0005541914.localdomain podman[105636]: 2025-12-02 09:05:38.135186263 +0000 UTC m=+0.134584637 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:14:25Z, vcs-type=git, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:05:38 np0005541914.localdomain systemd[1]: tmp-crun.v9VtVW.mount: Deactivated successfully.
Dec 02 09:05:38 np0005541914.localdomain podman[105635]: 2025-12-02 09:05:38.173246778 +0000 UTC m=+0.175195440 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, container_name=nova_compute, tcib_managed=true, name=rhosp17/openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d)
Dec 02 09:05:38 np0005541914.localdomain podman[105636]: 2025-12-02 09:05:38.203623507 +0000 UTC m=+0.203021851 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, batch=17.1_20251118.1, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, version=17.1.12, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, build-date=2025-11-19T00:14:25Z, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4)
Dec 02 09:05:38 np0005541914.localdomain podman[105636]: unhealthy
Dec 02 09:05:38 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:05:38 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:05:38 np0005541914.localdomain podman[105635]: 2025-12-02 09:05:38.216040936 +0000 UTC m=+0.217989548 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step5, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp17/openstack-nova-compute, container_name=nova_compute, version=17.1.12)
Dec 02 09:05:38 np0005541914.localdomain podman[105635]: unhealthy
Dec 02 09:05:38 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:05:38 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 09:05:38 np0005541914.localdomain podman[105637]: 2025-12-02 09:05:38.256139892 +0000 UTC m=+0.255359760 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, name=rhosp17/openstack-ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, tcib_managed=true, release=1761123044, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 02 09:05:38 np0005541914.localdomain podman[105637]: unhealthy
Dec 02 09:05:38 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:05:38 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:05:38 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10670 DF PROTO=TCP SPT=46272 DPT=9100 SEQ=2936596451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CBB560000000001030307) 
Dec 02 09:05:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:05:39 np0005541914.localdomain podman[105698]: 2025-12-02 09:05:39.067636551 +0000 UTC m=+0.075838071 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.12, config_id=tripleo_step4, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=)
Dec 02 09:05:39 np0005541914.localdomain podman[105698]: 2025-12-02 09:05:39.402836103 +0000 UTC m=+0.411037633 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., batch=17.1_20251118.1, io.openshift.expose-services=, release=1761123044, version=17.1.12, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, container_name=nova_migration_target, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute)
Dec 02 09:05:39 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:05:39 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10671 DF PROTO=TCP SPT=46272 DPT=9100 SEQ=2936596451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CBF620000000001030307) 
Dec 02 09:05:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20674 DF PROTO=TCP SPT=48726 DPT=9882 SEQ=1985574544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CC4630000000001030307) 
Dec 02 09:05:41 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10672 DF PROTO=TCP SPT=46272 DPT=9100 SEQ=2936596451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CC7620000000001030307) 
Dec 02 09:05:42 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30749 DF PROTO=TCP SPT=40650 DPT=9105 SEQ=4228247310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CC9960000000001030307) 
Dec 02 09:05:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30750 DF PROTO=TCP SPT=40650 DPT=9105 SEQ=4228247310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CCDA20000000001030307) 
Dec 02 09:05:45 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30751 DF PROTO=TCP SPT=40650 DPT=9105 SEQ=4228247310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CD5A20000000001030307) 
Dec 02 09:05:45 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10673 DF PROTO=TCP SPT=46272 DPT=9100 SEQ=2936596451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CD7220000000001030307) 
Dec 02 09:05:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:05:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39343 DF PROTO=TCP SPT=36356 DPT=9102 SEQ=1817403735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CD8DE0000000001030307) 
Dec 02 09:05:46 np0005541914.localdomain podman[105721]: 2025-12-02 09:05:46.07133864 +0000 UTC m=+0.075660465 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp17/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, vcs-type=git)
Dec 02 09:05:46 np0005541914.localdomain podman[105721]: 2025-12-02 09:05:46.107939719 +0000 UTC m=+0.112261584 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1761123044, tcib_managed=true, container_name=collectd, vcs-type=git, name=rhosp17/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:51:28Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 02 09:05:46 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 09:05:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39344 DF PROTO=TCP SPT=36356 DPT=9102 SEQ=1817403735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CDCE30000000001030307) 
Dec 02 09:05:48 np0005541914.localdomain sshd[105741]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:05:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:05:48 np0005541914.localdomain sshd[105741]: Accepted publickey for zuul from 192.168.122.30 port 51026 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:05:48 np0005541914.localdomain systemd-logind[760]: New session 36 of user zuul.
Dec 02 09:05:48 np0005541914.localdomain systemd[1]: Started Session 36 of User zuul.
Dec 02 09:05:48 np0005541914.localdomain sshd[105741]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:05:48 np0005541914.localdomain systemd[1]: tmp-crun.x0Mbcp.mount: Deactivated successfully.
Dec 02 09:05:48 np0005541914.localdomain podman[105743]: 2025-12-02 09:05:48.801395236 +0000 UTC m=+0.092771839 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.12, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, container_name=iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:44:13Z)
Dec 02 09:05:48 np0005541914.localdomain podman[105743]: 2025-12-02 09:05:48.836968833 +0000 UTC m=+0.128345436 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2025-11-18T23:44:13Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, name=rhosp17/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:05:48 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:05:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39345 DF PROTO=TCP SPT=36356 DPT=9102 SEQ=1817403735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CE4E20000000001030307) 
Dec 02 09:05:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20675 DF PROTO=TCP SPT=48726 DPT=9882 SEQ=1985574544 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CE5220000000001030307) 
Dec 02 09:05:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30752 DF PROTO=TCP SPT=40650 DPT=9105 SEQ=4228247310 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CE5630000000001030307) 
Dec 02 09:05:49 np0005541914.localdomain sudo[105852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yadnodoogmovpvzagexpjeatzsqgyxqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666348.8544025-25-111619085001367/AnsiballZ_systemd_service.py
Dec 02 09:05:49 np0005541914.localdomain sudo[105852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:05:49 np0005541914.localdomain python3.9[105854]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:05:49 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:05:49 np0005541914.localdomain systemd-rc-local-generator[105875]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:05:49 np0005541914.localdomain systemd-sysv-generator[105878]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:05:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:05:50 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35764 DF PROTO=TCP SPT=36922 DPT=9101 SEQ=3845722813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CE8A00000000001030307) 
Dec 02 09:05:50 np0005541914.localdomain sudo[105852]: pam_unix(sudo:session): session closed for user root
Dec 02 09:05:50 np0005541914.localdomain python3.9[105980]: ansible-ansible.builtin.service_facts Invoked
Dec 02 09:05:51 np0005541914.localdomain network[105997]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 09:05:51 np0005541914.localdomain network[105998]: 'network-scripts' will be removed from distribution in near future.
Dec 02 09:05:51 np0005541914.localdomain network[105999]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 09:05:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35765 DF PROTO=TCP SPT=36922 DPT=9101 SEQ=3845722813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CECA20000000001030307) 
Dec 02 09:05:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35766 DF PROTO=TCP SPT=36922 DPT=9101 SEQ=3845722813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CF4A30000000001030307) 
Dec 02 09:05:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39346 DF PROTO=TCP SPT=36356 DPT=9102 SEQ=1817403735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CF4A30000000001030307) 
Dec 02 09:05:53 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:05:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10674 DF PROTO=TCP SPT=46272 DPT=9100 SEQ=2936596451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52CF7230000000001030307) 
Dec 02 09:05:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35767 DF PROTO=TCP SPT=36922 DPT=9101 SEQ=3845722813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D04620000000001030307) 
Dec 02 09:05:57 np0005541914.localdomain python3.9[106197]: ansible-ansible.builtin.service_facts Invoked
Dec 02 09:05:57 np0005541914.localdomain network[106214]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 09:05:57 np0005541914.localdomain network[106215]: 'network-scripts' will be removed from distribution in near future.
Dec 02 09:05:57 np0005541914.localdomain network[106216]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 09:05:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:06:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:06:00 np0005541914.localdomain podman[106314]: 2025-12-02 09:06:00.432351145 +0000 UTC m=+0.097910276 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp17/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2025-11-18T22:49:46Z, tcib_managed=true, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1761123044, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vcs-type=git, version=17.1.12, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Dec 02 09:06:00 np0005541914.localdomain podman[106314]: 2025-12-02 09:06:00.653864289 +0000 UTC m=+0.319423420 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:06:00 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:06:01 np0005541914.localdomain sudo[106445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pzqkwmtnyqzctfanuduyqvcptfxsfzbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666361.1057053-116-269040357210901/AnsiballZ_systemd_service.py
Dec 02 09:06:01 np0005541914.localdomain sudo[106445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:06:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39347 DF PROTO=TCP SPT=36356 DPT=9102 SEQ=1817403735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D15220000000001030307) 
Dec 02 09:06:01 np0005541914.localdomain python3.9[106447]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:06:01 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:06:01 np0005541914.localdomain systemd-sysv-generator[106480]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:06:01 np0005541914.localdomain systemd-rc-local-generator[106473]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:06:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:06:02 np0005541914.localdomain systemd[1]: Stopping ceilometer_agent_compute container...
Dec 02 09:06:02 np0005541914.localdomain systemd[1]: tmp-crun.7ZhWyg.mount: Deactivated successfully.
Dec 02 09:06:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45110 DF PROTO=TCP SPT=37222 DPT=9882 SEQ=3299814066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D1DB10000000001030307) 
Dec 02 09:06:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:06:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:06:04 np0005541914.localdomain podman[106502]: Error: container 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae is not running
Dec 02 09:06:04 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Main process exited, code=exited, status=125/n/a
Dec 02 09:06:04 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Failed with result 'exit-code'.
Dec 02 09:06:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:06:04 np0005541914.localdomain systemd[1]: tmp-crun.jRpwZV.mount: Deactivated successfully.
Dec 02 09:06:04 np0005541914.localdomain podman[106503]: 2025-12-02 09:06:04.408578974 +0000 UTC m=+0.158512830 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.4, release=1761123044, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, batch=17.1_20251118.1, distribution-scope=public)
Dec 02 09:06:04 np0005541914.localdomain podman[106523]: 2025-12-02 09:06:04.466383101 +0000 UTC m=+0.115086401 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1761123044, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64)
Dec 02 09:06:04 np0005541914.localdomain podman[106503]: 2025-12-02 09:06:04.474938902 +0000 UTC m=+0.224872748 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 09:06:04 np0005541914.localdomain podman[106503]: unhealthy
Dec 02 09:06:04 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:06:04 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'.
Dec 02 09:06:04 np0005541914.localdomain podman[106523]: 2025-12-02 09:06:04.5049399 +0000 UTC m=+0.153643190 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, batch=17.1_20251118.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1761123044, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:06:04 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:06:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45111 DF PROTO=TCP SPT=37222 DPT=9882 SEQ=3299814066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D21A20000000001030307) 
Dec 02 09:06:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45112 DF PROTO=TCP SPT=37222 DPT=9882 SEQ=3299814066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D29A20000000001030307) 
Dec 02 09:06:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:06:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:06:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:06:08 np0005541914.localdomain podman[106555]: 2025-12-02 09:06:08.597348341 +0000 UTC m=+0.090789478 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, build-date=2025-11-19T00:14:25Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, container_name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:06:08 np0005541914.localdomain podman[106554]: 2025-12-02 09:06:08.643271705 +0000 UTC m=+0.143768488 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, config_id=tripleo_step5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:36:58Z, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-nova-compute-container, version=17.1.12, vendor=Red Hat, Inc., batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git)
Dec 02 09:06:08 np0005541914.localdomain podman[106555]: 2025-12-02 09:06:08.664445853 +0000 UTC m=+0.157886940 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, release=1761123044, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20251118.1, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 09:06:08 np0005541914.localdomain podman[106555]: unhealthy
Dec 02 09:06:08 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:06:08 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:06:08 np0005541914.localdomain podman[106554]: 2025-12-02 09:06:08.693927905 +0000 UTC m=+0.194424688 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-compute-container, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=nova_compute, name=rhosp17/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 09:06:08 np0005541914.localdomain podman[106554]: unhealthy
Dec 02 09:06:08 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:06:08 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 09:06:08 np0005541914.localdomain systemd[1]: tmp-crun.29fWcx.mount: Deactivated successfully.
Dec 02 09:06:08 np0005541914.localdomain podman[106561]: 2025-12-02 09:06:08.750735742 +0000 UTC m=+0.237983749 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, name=rhosp17/openstack-ovn-controller, tcib_managed=true, batch=17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., version=17.1.12, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team)
Dec 02 09:06:08 np0005541914.localdomain podman[106561]: 2025-12-02 09:06:08.768835745 +0000 UTC m=+0.256083782 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp17/openstack-ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, com.redhat.component=openstack-ovn-controller-container, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 02 09:06:08 np0005541914.localdomain podman[106561]: unhealthy
Dec 02 09:06:08 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:06:08 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:06:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:06:10 np0005541914.localdomain podman[106619]: 2025-12-02 09:06:10.075041885 +0000 UTC m=+0.078542724 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, vcs-type=git, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 09:06:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10675 DF PROTO=TCP SPT=46272 DPT=9100 SEQ=2936596451 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D37220000000001030307) 
Dec 02 09:06:10 np0005541914.localdomain podman[106619]: 2025-12-02 09:06:10.446777254 +0000 UTC m=+0.450278023 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, config_id=tripleo_step4, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:36:58Z, distribution-scope=public, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044)
Dec 02 09:06:10 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:06:12 np0005541914.localdomain sudo[106643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:06:12 np0005541914.localdomain sudo[106643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:06:12 np0005541914.localdomain sudo[106643]: pam_unix(sudo:session): session closed for user root
Dec 02 09:06:12 np0005541914.localdomain sudo[106658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:06:12 np0005541914.localdomain sudo[106658]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:06:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52301 DF PROTO=TCP SPT=34734 DPT=9105 SEQ=1251079634 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D42E20000000001030307) 
Dec 02 09:06:13 np0005541914.localdomain sudo[106658]: pam_unix(sudo:session): session closed for user root
Dec 02 09:06:14 np0005541914.localdomain sudo[106705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:06:14 np0005541914.localdomain sudo[106705]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:06:14 np0005541914.localdomain sudo[106705]: pam_unix(sudo:session): session closed for user root
Dec 02 09:06:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55973 DF PROTO=TCP SPT=33098 DPT=9102 SEQ=469635889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D4E0E0000000001030307) 
Dec 02 09:06:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:06:16 np0005541914.localdomain podman[106720]: 2025-12-02 09:06:16.583660764 +0000 UTC m=+0.091082758 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20251118.1, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, distribution-scope=public, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, tcib_managed=true, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 09:06:16 np0005541914.localdomain podman[106720]: 2025-12-02 09:06:16.621945515 +0000 UTC m=+0.129367589 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, tcib_managed=true, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., release=1761123044, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2025-11-18T22:51:28Z, url=https://www.redhat.com, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 09:06:16 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 09:06:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45114 DF PROTO=TCP SPT=37222 DPT=9882 SEQ=3299814066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D59220000000001030307) 
Dec 02 09:06:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:06:19 np0005541914.localdomain podman[106739]: 2025-12-02 09:06:19.077182055 +0000 UTC m=+0.077334587 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp17/openstack-iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1761123044, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2025-11-18T23:44:13Z, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.12, batch=17.1_20251118.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 02 09:06:19 np0005541914.localdomain podman[106739]: 2025-12-02 09:06:19.114731964 +0000 UTC m=+0.114884475 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, tcib_managed=true, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, build-date=2025-11-18T23:44:13Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git)
Dec 02 09:06:19 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:06:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35769 DF PROTO=TCP SPT=36922 DPT=9101 SEQ=3845722813 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D65220000000001030307) 
Dec 02 09:06:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5822 DF PROTO=TCP SPT=59090 DPT=9101 SEQ=3229159924 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D79A30000000001030307) 
Dec 02 09:06:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:06:30 np0005541914.localdomain podman[106758]: 2025-12-02 09:06:30.836318593 +0000 UTC m=+0.084142594 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.buildah.version=1.41.4, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, architecture=x86_64, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2025-11-18T22:49:46Z, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Dec 02 09:06:31 np0005541914.localdomain podman[106758]: 2025-12-02 09:06:31.038964891 +0000 UTC m=+0.286788882 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, config_id=tripleo_step1, io.openshift.expose-services=, release=1761123044, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2025-11-18T22:49:46Z, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible)
Dec 02 09:06:31 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:06:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55977 DF PROTO=TCP SPT=33098 DPT=9102 SEQ=469635889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D8B220000000001030307) 
Dec 02 09:06:33 np0005541914.localdomain sshd[106786]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:06:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50034 DF PROTO=TCP SPT=54924 DPT=9882 SEQ=2544141676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D92E00000000001030307) 
Dec 02 09:06:33 np0005541914.localdomain sshd[106786]: Invalid user ubuntu from 45.148.10.240 port 33496
Dec 02 09:06:33 np0005541914.localdomain sshd[106786]: Connection closed by invalid user ubuntu 45.148.10.240 port 33496 [preauth]
Dec 02 09:06:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:06:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:06:34 np0005541914.localdomain podman[106788]: Error: container 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae is not running
Dec 02 09:06:34 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Main process exited, code=exited, status=125/n/a
Dec 02 09:06:34 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Failed with result 'exit-code'.
Dec 02 09:06:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:06:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50035 DF PROTO=TCP SPT=54924 DPT=9882 SEQ=2544141676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D96E20000000001030307) 
Dec 02 09:06:34 np0005541914.localdomain systemd[1]: tmp-crun.9omwLL.mount: Deactivated successfully.
Dec 02 09:06:34 np0005541914.localdomain podman[106812]: 2025-12-02 09:06:34.715779092 +0000 UTC m=+0.078719269 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-cron, com.redhat.component=openstack-cron-container, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.buildah.version=1.41.4, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, build-date=2025-11-18T22:49:32Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, architecture=x86_64, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 09:06:34 np0005541914.localdomain podman[106789]: 2025-12-02 09:06:34.682850475 +0000 UTC m=+0.138164967 container health_status a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=unhealthy, build-date=2025-11-19T00:12:45Z, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, name=rhosp17/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.12, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:06:34 np0005541914.localdomain podman[106812]: 2025-12-02 09:06:34.751851545 +0000 UTC m=+0.114791722 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, io.buildah.version=1.41.4, managed_by=tripleo_ansible, version=17.1.12, com.redhat.component=openstack-cron-container, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2025-11-18T22:49:32Z, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 09:06:34 np0005541914.localdomain podman[106789]: 2025-12-02 09:06:34.766001788 +0000 UTC m=+0.221316250 container exec_died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-19T00:12:45Z, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.12, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, name=rhosp17/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container)
Dec 02 09:06:34 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:06:34 np0005541914.localdomain podman[106789]: unhealthy
Dec 02 09:06:34 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:06:34 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'.
Dec 02 09:06:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50036 DF PROTO=TCP SPT=54924 DPT=9882 SEQ=2544141676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52D9EE20000000001030307) 
Dec 02 09:06:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:06:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:06:38 np0005541914.localdomain systemd[1]: tmp-crun.sA3piG.mount: Deactivated successfully.
Dec 02 09:06:38 np0005541914.localdomain podman[106848]: 2025-12-02 09:06:38.825360809 +0000 UTC m=+0.075278492 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, release=1761123044, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Dec 02 09:06:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:06:38 np0005541914.localdomain podman[106878]: 2025-12-02 09:06:38.913414283 +0000 UTC m=+0.065467024 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1761123044, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, container_name=ovn_controller, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Dec 02 09:06:38 np0005541914.localdomain podman[106847]: 2025-12-02 09:06:38.884702635 +0000 UTC m=+0.133781853 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, release=1761123044, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 09:06:38 np0005541914.localdomain podman[106878]: 2025-12-02 09:06:38.948293659 +0000 UTC m=+0.100346420 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20251118.1, vendor=Red Hat, Inc., build-date=2025-11-18T23:34:05Z, distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4)
Dec 02 09:06:38 np0005541914.localdomain podman[106878]: unhealthy
Dec 02 09:06:38 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:06:38 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:06:38 np0005541914.localdomain podman[106848]: 2025-12-02 09:06:38.965258438 +0000 UTC m=+0.215176071 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.buildah.version=1.41.4, version=17.1.12, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, architecture=x86_64, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, release=1761123044, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 09:06:38 np0005541914.localdomain podman[106848]: unhealthy
Dec 02 09:06:38 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:06:38 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:06:39 np0005541914.localdomain podman[106847]: 2025-12-02 09:06:39.015732751 +0000 UTC m=+0.264812009 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:36:58Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1)
Dec 02 09:06:39 np0005541914.localdomain podman[106847]: unhealthy
Dec 02 09:06:39 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:06:39 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 09:06:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4215 DF PROTO=TCP SPT=58242 DPT=9100 SEQ=4061684492 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52DAD230000000001030307) 
Dec 02 09:06:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:06:41 np0005541914.localdomain podman[106908]: 2025-12-02 09:06:41.081235142 +0000 UTC m=+0.082932108 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2025-11-19T00:36:58Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1761123044, version=17.1.12, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true)
Dec 02 09:06:41 np0005541914.localdomain podman[106908]: 2025-12-02 09:06:41.422874701 +0000 UTC m=+0.424571677 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, container_name=nova_migration_target, url=https://www.redhat.com, vcs-type=git, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4)
Dec 02 09:06:41 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:06:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41522 DF PROTO=TCP SPT=35936 DPT=9105 SEQ=2769818466 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52DB7E20000000001030307) 
Dec 02 09:06:44 np0005541914.localdomain podman[106488]: time="2025-12-02T09:06:44Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL"
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: libpod-814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.scope: Deactivated successfully.
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: libpod-814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.scope: Consumed 5.940s CPU time.
Dec 02 09:06:44 np0005541914.localdomain podman[106488]: 2025-12-02 09:06:44.107073723 +0000 UTC m=+42.083374370 container stop 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, architecture=x86_64, batch=17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, name=rhosp17/openstack-ceilometer-compute, build-date=2025-11-19T00:11:48Z, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git)
Dec 02 09:06:44 np0005541914.localdomain podman[106488]: 2025-12-02 09:06:44.139775742 +0000 UTC m=+42.116076349 container died 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-19T00:11:48Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, release=1761123044, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, batch=17.1_20251118.1, name=rhosp17/openstack-ceilometer-compute)
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.timer: Deactivated successfully.
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Failed to open /run/systemd/transient/814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: No such file or directory
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae-userdata-shm.mount: Deactivated successfully.
Dec 02 09:06:44 np0005541914.localdomain podman[106488]: 2025-12-02 09:06:44.261055772 +0000 UTC m=+42.237356379 container cleanup 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2025-11-19T00:11:48Z, architecture=x86_64, name=rhosp17/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Dec 02 09:06:44 np0005541914.localdomain podman[106488]: ceilometer_agent_compute
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.timer: Failed to open /run/systemd/transient/814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.timer: No such file or directory
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Failed to open /run/systemd/transient/814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: No such file or directory
Dec 02 09:06:44 np0005541914.localdomain podman[106932]: 2025-12-02 09:06:44.277198935 +0000 UTC m=+0.154836746 container cleanup 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, maintainer=OpenStack TripleO Team, version=17.1.12, build-date=2025-11-19T00:11:48Z, name=rhosp17/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20251118.1)
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: libpod-conmon-814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.scope: Deactivated successfully.
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.timer: Failed to open /run/systemd/transient/814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.timer: No such file or directory
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: Failed to open /run/systemd/transient/814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae.service: No such file or directory
Dec 02 09:06:44 np0005541914.localdomain podman[106947]: 2025-12-02 09:06:44.376484262 +0000 UTC m=+0.066883987 container cleanup 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-ceilometer-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:11:48Z, batch=17.1_20251118.1, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.12, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Dec 02 09:06:44 np0005541914.localdomain podman[106947]: ceilometer_agent_compute
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully.
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: Stopped ceilometer_agent_compute container.
Dec 02 09:06:44 np0005541914.localdomain systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.116s CPU time, no IO.
Dec 02 09:06:44 np0005541914.localdomain sudo[106445]: pam_unix(sudo:session): session closed for user root
Dec 02 09:06:44 np0005541914.localdomain sudo[107050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rkfmkgocmvuhqcztiidiuwgtwwdpdnoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666404.5453763-116-228042013995768/AnsiballZ_systemd_service.py
Dec 02 09:06:44 np0005541914.localdomain sudo[107050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:06:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a0089ea487a0d5fd991d7e6cecf5db6fae8c1b61a42816d2acbe202fbd50d575-merged.mount: Deactivated successfully.
Dec 02 09:06:45 np0005541914.localdomain python3.9[107052]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:06:45 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:06:45 np0005541914.localdomain systemd-rc-local-generator[107076]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:06:45 np0005541914.localdomain systemd-sysv-generator[107080]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:06:45 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:06:45 np0005541914.localdomain systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec 02 09:06:45 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:5b:ed:d2 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.108 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=37490 SEQ=4000989390 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 
Dec 02 09:06:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:06:47 np0005541914.localdomain podman[107108]: 2025-12-02 09:06:47.057253571 +0000 UTC m=+0.064219265 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2025-11-18T22:51:28Z, container_name=collectd, config_id=tripleo_step3, batch=17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.4, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Dec 02 09:06:47 np0005541914.localdomain podman[107108]: 2025-12-02 09:06:47.069726462 +0000 UTC m=+0.076692106 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, container_name=collectd, name=rhosp17/openstack-collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, release=1761123044, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T22:51:28Z, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, tcib_managed=true, architecture=x86_64, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:06:47 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 09:06:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50038 DF PROTO=TCP SPT=54924 DPT=9882 SEQ=2544141676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52DCF220000000001030307) 
Dec 02 09:06:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:06:49 np0005541914.localdomain podman[107128]: 2025-12-02 09:06:49.338386036 +0000 UTC m=+0.089553070 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, batch=17.1_20251118.1, build-date=2025-11-18T23:44:13Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, version=17.1.12, io.openshift.expose-services=, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, name=rhosp17/openstack-iscsid, vcs-type=git, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.buildah.version=1.41.4)
Dec 02 09:06:49 np0005541914.localdomain podman[107128]: 2025-12-02 09:06:49.372554681 +0000 UTC m=+0.123721665 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, build-date=2025-11-18T23:44:13Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp17/openstack-iscsid, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.12, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=)
Dec 02 09:06:49 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:06:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54639 DF PROTO=TCP SPT=33674 DPT=9102 SEQ=1068771527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52DDF220000000001030307) 
Dec 02 09:06:54 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 09:06:54 np0005541914.localdomain recover_tripleo_nova_virtqemud[107148]: 61907
Dec 02 09:06:54 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 09:06:54 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 09:06:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17515 DF PROTO=TCP SPT=41364 DPT=9101 SEQ=3827894801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52DEEE20000000001030307) 
Dec 02 09:07:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:07:01 np0005541914.localdomain systemd[1]: tmp-crun.qYmsHN.mount: Deactivated successfully.
Dec 02 09:07:01 np0005541914.localdomain podman[107149]: 2025-12-02 09:07:01.307591769 +0000 UTC m=+0.068334141 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, url=https://www.redhat.com, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, config_id=tripleo_step1, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:46Z, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=)
Dec 02 09:07:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54640 DF PROTO=TCP SPT=33674 DPT=9102 SEQ=1068771527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52DFF220000000001030307) 
Dec 02 09:07:01 np0005541914.localdomain podman[107149]: 2025-12-02 09:07:01.521299974 +0000 UTC m=+0.282042376 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, name=rhosp17/openstack-qdrouterd, batch=17.1_20251118.1, container_name=metrics_qdr, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, maintainer=OpenStack TripleO Team, build-date=2025-11-18T22:49:46Z, release=1761123044, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1)
Dec 02 09:07:01 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:07:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41096 DF PROTO=TCP SPT=46120 DPT=9882 SEQ=2372430334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E08110000000001030307) 
Dec 02 09:07:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41097 DF PROTO=TCP SPT=46120 DPT=9882 SEQ=2372430334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E0C220000000001030307) 
Dec 02 09:07:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:07:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:07:05 np0005541914.localdomain podman[107179]: Error: container a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 is not running
Dec 02 09:07:05 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Main process exited, code=exited, status=125/n/a
Dec 02 09:07:05 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed with result 'exit-code'.
Dec 02 09:07:05 np0005541914.localdomain podman[107178]: 2025-12-02 09:07:05.138128331 +0000 UTC m=+0.141299402 container health_status 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, release=1761123044, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp17/openstack-cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, container_name=logrotate_crond, build-date=2025-11-18T22:49:32Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Dec 02 09:07:05 np0005541914.localdomain podman[107178]: 2025-12-02 09:07:05.150857151 +0000 UTC m=+0.154028232 container exec_died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.12, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:49:32Z, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, name=rhosp17/openstack-cron)
Dec 02 09:07:05 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Deactivated successfully.
Dec 02 09:07:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41098 DF PROTO=TCP SPT=46120 DPT=9882 SEQ=2372430334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E14220000000001030307) 
Dec 02 09:07:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:07:09 np0005541914.localdomain podman[107210]: 2025-12-02 09:07:09.064092894 +0000 UTC m=+0.071446196 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, batch=17.1_20251118.1, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container)
Dec 02 09:07:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:07:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:07:09 np0005541914.localdomain podman[107210]: 2025-12-02 09:07:09.101761216 +0000 UTC m=+0.109114508 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, tcib_managed=true, io.buildah.version=1.41.4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20251118.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git)
Dec 02 09:07:09 np0005541914.localdomain podman[107210]: unhealthy
Dec 02 09:07:09 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:07:09 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:07:09 np0005541914.localdomain systemd[1]: tmp-crun.UROc6a.mount: Deactivated successfully.
Dec 02 09:07:09 np0005541914.localdomain podman[107230]: 2025-12-02 09:07:09.170598592 +0000 UTC m=+0.084985391 container health_status 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp17/openstack-nova-compute, io.buildah.version=1.41.4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:36:58Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-type=git, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=)
Dec 02 09:07:09 np0005541914.localdomain podman[107230]: 2025-12-02 09:07:09.187563321 +0000 UTC m=+0.101950110 container exec_died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20251118.1, name=rhosp17/openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, build-date=2025-11-19T00:36:58Z, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 09:07:09 np0005541914.localdomain podman[107231]: 2025-12-02 09:07:09.20587915 +0000 UTC m=+0.116043479 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, vcs-type=git, release=1761123044, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:07:09 np0005541914.localdomain podman[107230]: unhealthy
Dec 02 09:07:09 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:07:09 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 09:07:09 np0005541914.localdomain podman[107231]: 2025-12-02 09:07:09.293661056 +0000 UTC m=+0.203825425 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20251118.1, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12)
Dec 02 09:07:09 np0005541914.localdomain podman[107231]: unhealthy
Dec 02 09:07:09 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:07:09 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:07:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=369 DF PROTO=TCP SPT=58674 DPT=9100 SEQ=2170451609 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E21220000000001030307) 
Dec 02 09:07:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:07:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4800.1 total, 600.0 interval
                                                          Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 09:07:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:07:11 np0005541914.localdomain systemd[1]: tmp-crun.b4Qa3M.mount: Deactivated successfully.
Dec 02 09:07:11 np0005541914.localdomain podman[107272]: 2025-12-02 09:07:11.833084341 +0000 UTC m=+0.087455826 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, version=17.1.12, container_name=nova_migration_target, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 09:07:12 np0005541914.localdomain podman[107272]: 2025-12-02 09:07:12.24864227 +0000 UTC m=+0.503013815 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.4, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, distribution-scope=public, batch=17.1_20251118.1, release=1761123044, build-date=2025-11-19T00:36:58Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, version=17.1.12)
Dec 02 09:07:12 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:07:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16501 DF PROTO=TCP SPT=46310 DPT=9105 SEQ=1984692707 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E2D220000000001030307) 
Dec 02 09:07:14 np0005541914.localdomain sudo[107295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:07:14 np0005541914.localdomain sudo[107295]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:07:14 np0005541914.localdomain sudo[107295]: pam_unix(sudo:session): session closed for user root
Dec 02 09:07:14 np0005541914.localdomain sudo[107310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:07:14 np0005541914.localdomain sudo[107310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:07:15 np0005541914.localdomain sudo[107310]: pam_unix(sudo:session): session closed for user root
Dec 02 09:07:15 np0005541914.localdomain sudo[107357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:07:15 np0005541914.localdomain sudo[107357]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:07:15 np0005541914.localdomain sudo[107357]: pam_unix(sudo:session): session closed for user root
Dec 02 09:07:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:07:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4800.2 total, 600.0 interval
                                                          Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 09:07:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11351 DF PROTO=TCP SPT=51514 DPT=9102 SEQ=2630889921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E386E0000000001030307) 
Dec 02 09:07:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:07:17 np0005541914.localdomain podman[107372]: 2025-12-02 09:07:17.338717362 +0000 UTC m=+0.083964409 container health_status 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, version=17.1.12, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-type=git, release=1761123044, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, name=rhosp17/openstack-collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, tcib_managed=true, distribution-scope=public, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team)
Dec 02 09:07:17 np0005541914.localdomain podman[107372]: 2025-12-02 09:07:17.373761435 +0000 UTC m=+0.119008472 container exec_died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-collectd, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, release=1761123044, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, container_name=collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, build-date=2025-11-18T22:51:28Z)
Dec 02 09:07:17 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Deactivated successfully.
Dec 02 09:07:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11353 DF PROTO=TCP SPT=51514 DPT=9102 SEQ=2630889921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E44620000000001030307) 
Dec 02 09:07:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:07:19 np0005541914.localdomain podman[107392]: 2025-12-02 09:07:19.574549212 +0000 UTC m=+0.077314995 container health_status f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=iscsid, release=1761123044, batch=17.1_20251118.1, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, version=17.1.12, name=rhosp17/openstack-iscsid, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T23:44:13Z)
Dec 02 09:07:19 np0005541914.localdomain podman[107392]: 2025-12-02 09:07:19.612965517 +0000 UTC m=+0.115731320 container exec_died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, vcs-type=git, build-date=2025-11-18T23:44:13Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20251118.1, name=rhosp17/openstack-iscsid, architecture=x86_64, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, config_id=tripleo_step3, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Dec 02 09:07:19 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Deactivated successfully.
Dec 02 09:07:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17517 DF PROTO=TCP SPT=41364 DPT=9101 SEQ=3827894801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E4F220000000001030307) 
Dec 02 09:07:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17822 DF PROTO=TCP SPT=35444 DPT=9101 SEQ=1492685268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E63E20000000001030307) 
Dec 02 09:07:27 np0005541914.localdomain podman[107092]: time="2025-12-02T09:07:27Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL"
Dec 02 09:07:27 np0005541914.localdomain systemd[1]: tmp-crun.UAbWed.mount: Deactivated successfully.
Dec 02 09:07:27 np0005541914.localdomain systemd[1]: libpod-a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.scope: Deactivated successfully.
Dec 02 09:07:27 np0005541914.localdomain systemd[1]: libpod-a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.scope: Consumed 6.746s CPU time.
Dec 02 09:07:27 np0005541914.localdomain podman[107092]: 2025-12-02 09:07:27.822931359 +0000 UTC m=+42.107635982 container stop a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:12:45Z, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.4, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Dec 02 09:07:28 np0005541914.localdomain podman[107092]: 2025-12-02 09:07:28.171678385 +0000 UTC m=+42.456383038 container died a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, release=1761123044, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, build-date=2025-11-19T00:12:45Z, name=rhosp17/openstack-ceilometer-ipmi, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true)
Dec 02 09:07:28 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.timer: Deactivated successfully.
Dec 02 09:07:28 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.
Dec 02 09:07:28 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed to open /run/systemd/transient/a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: No such file or directory
Dec 02 09:07:28 np0005541914.localdomain podman[107092]: 2025-12-02 09:07:28.270375974 +0000 UTC m=+42.555080597 container cleanup a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp17/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20251118.1, tcib_managed=true, config_id=tripleo_step4, build-date=2025-11-19T00:12:45Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12)
Dec 02 09:07:28 np0005541914.localdomain podman[107092]: ceilometer_agent_ipmi
Dec 02 09:07:28 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.timer: Failed to open /run/systemd/transient/a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.timer: No such file or directory
Dec 02 09:07:28 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed to open /run/systemd/transient/a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: No such file or directory
Dec 02 09:07:28 np0005541914.localdomain podman[107413]: 2025-12-02 09:07:28.284122324 +0000 UTC m=+0.126114979 container cleanup a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp17/openstack-ceilometer-ipmi, build-date=2025-11-19T00:12:45Z, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.4, config_id=tripleo_step4, vcs-type=git, batch=17.1_20251118.1, vendor=Red Hat, Inc., release=1761123044, tcib_managed=true, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, version=17.1.12)
Dec 02 09:07:28 np0005541914.localdomain systemd[1]: libpod-conmon-a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.scope: Deactivated successfully.
Dec 02 09:07:28 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.timer: Failed to open /run/systemd/transient/a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.timer: No such file or directory
Dec 02 09:07:28 np0005541914.localdomain systemd[1]: a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: Failed to open /run/systemd/transient/a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497.service: No such file or directory
Dec 02 09:07:28 np0005541914.localdomain podman[107425]: 2025-12-02 09:07:28.388904019 +0000 UTC m=+0.071296782 container cleanup a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.12, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, release=1761123044, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp17/openstack-ceilometer-ipmi, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-19T00:12:45Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.4)
Dec 02 09:07:28 np0005541914.localdomain podman[107425]: ceilometer_agent_ipmi
Dec 02 09:07:28 np0005541914.localdomain systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec 02 09:07:28 np0005541914.localdomain systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec 02 09:07:28 np0005541914.localdomain sudo[107050]: pam_unix(sudo:session): session closed for user root
Dec 02 09:07:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d06b9618ea7afeaba672d022a7f469c1b4fb954818b2395f63391bb50912ecbb-merged.mount: Deactivated successfully.
Dec 02 09:07:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a54bd4e6af27dff5c45bd7b1ee36dbd6569918db36d4068f8a350fff416b1497-userdata-shm.mount: Deactivated successfully.
Dec 02 09:07:28 np0005541914.localdomain sudo[107527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btgnpnjcxwhahatliqfhwyesaecilhsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666448.5452151-116-29875010437581/AnsiballZ_systemd_service.py
Dec 02 09:07:28 np0005541914.localdomain sudo[107527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:07:29 np0005541914.localdomain python3.9[107529]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:07:29 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:07:29 np0005541914.localdomain systemd-sysv-generator[107558]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:07:29 np0005541914.localdomain systemd-rc-local-generator[107553]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:07:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:07:29 np0005541914.localdomain systemd[1]: Stopping collectd container...
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: libpod-2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.scope: Deactivated successfully.
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: libpod-2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.scope: Consumed 2.164s CPU time.
Dec 02 09:07:31 np0005541914.localdomain podman[107570]: 2025-12-02 09:07:31.481372277 +0000 UTC m=+1.970685611 container died 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp17/openstack-collectd, version=17.1.12, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-18T22:51:28Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd)
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.timer: Deactivated successfully.
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Failed to open /run/systemd/transient/2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: No such file or directory
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c-userdata-shm.mount: Deactivated successfully.
Dec 02 09:07:31 np0005541914.localdomain podman[107570]: 2025-12-02 09:07:31.54259163 +0000 UTC m=+2.031904884 container cleanup 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1761123044, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, name=rhosp17/openstack-collectd, build-date=2025-11-18T22:51:28Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 09:07:31 np0005541914.localdomain podman[107570]: collectd
Dec 02 09:07:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11355 DF PROTO=TCP SPT=51514 DPT=9102 SEQ=2630889921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E75220000000001030307) 
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.timer: Failed to open /run/systemd/transient/2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.timer: No such file or directory
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Failed to open /run/systemd/transient/2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: No such file or directory
Dec 02 09:07:31 np0005541914.localdomain podman[107583]: 2025-12-02 09:07:31.587758171 +0000 UTC m=+0.091569702 container cleanup 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, build-date=2025-11-18T22:51:28Z, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20251118.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, managed_by=tripleo_ansible, url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp17/openstack-collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044)
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: libpod-conmon-2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.scope: Deactivated successfully.
Dec 02 09:07:31 np0005541914.localdomain podman[107635]: error opening file `/run/crun/2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c/status`: No such file or directory
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.timer: Failed to open /run/systemd/transient/2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.timer: No such file or directory
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: Failed to open /run/systemd/transient/2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c.service: No such file or directory
Dec 02 09:07:31 np0005541914.localdomain podman[107608]: 2025-12-02 09:07:31.708037569 +0000 UTC m=+0.083175225 container cleanup 2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1761123044, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, name=rhosp17/openstack-collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd31718fcd17fdeee6489534105191c7a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2025-11-18T22:51:28Z, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 09:07:31 np0005541914.localdomain podman[107608]: collectd
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'.
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: Stopped collectd container.
Dec 02 09:07:31 np0005541914.localdomain podman[107599]: 2025-12-02 09:07:31.676668389 +0000 UTC m=+0.102008910 container health_status 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, name=rhosp17/openstack-qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, release=1761123044, tcib_managed=true, vcs-type=git, build-date=2025-11-18T22:49:46Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, container_name=metrics_qdr, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, io.openshift.expose-services=, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc.)
Dec 02 09:07:31 np0005541914.localdomain sudo[107527]: pam_unix(sudo:session): session closed for user root
Dec 02 09:07:31 np0005541914.localdomain podman[107599]: 2025-12-02 09:07:31.915833195 +0000 UTC m=+0.341173686 container exec_died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-18T22:49:46Z, config_id=tripleo_step1, io.openshift.expose-services=, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, name=rhosp17/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.buildah.version=1.41.4, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.12, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:07:31 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Deactivated successfully.
Dec 02 09:07:32 np0005541914.localdomain sudo[107736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kjacphurprnipijpjeowokllznladfer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666451.8763711-116-133659539979744/AnsiballZ_systemd_service.py
Dec 02 09:07:32 np0005541914.localdomain sudo[107736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:07:32 np0005541914.localdomain python3.9[107738]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:07:32 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c13e199db7335dd51d53d563216fcc1a3ed75eba14190a583a84b8f73b6c9d42-merged.mount: Deactivated successfully.
Dec 02 09:07:32 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:07:32 np0005541914.localdomain systemd-sysv-generator[107769]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:07:32 np0005541914.localdomain systemd-rc-local-generator[107764]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:07:32 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:07:32 np0005541914.localdomain systemd[1]: Stopping iscsid container...
Dec 02 09:07:32 np0005541914.localdomain systemd[1]: libpod-f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.scope: Deactivated successfully.
Dec 02 09:07:32 np0005541914.localdomain systemd[1]: libpod-f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.scope: Consumed 1.150s CPU time.
Dec 02 09:07:32 np0005541914.localdomain podman[107778]: 2025-12-02 09:07:32.959973819 +0000 UTC m=+0.096472073 container died f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.12, io.buildah.version=1.41.4, name=rhosp17/openstack-iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2025-11-18T23:44:13Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, container_name=iscsid, release=1761123044, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d)
Dec 02 09:07:32 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.timer: Deactivated successfully.
Dec 02 09:07:32 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.
Dec 02 09:07:32 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Failed to open /run/systemd/transient/f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: No such file or directory
Dec 02 09:07:32 np0005541914.localdomain systemd[1]: tmp-crun.fGrCVn.mount: Deactivated successfully.
Dec 02 09:07:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b-userdata-shm.mount: Deactivated successfully.
Dec 02 09:07:33 np0005541914.localdomain podman[107778]: 2025-12-02 09:07:33.012532675 +0000 UTC m=+0.149030929 container cleanup f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1761123044, io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.12, config_id=tripleo_step3, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, com.redhat.component=openstack-iscsid-container, distribution-scope=public, name=rhosp17/openstack-iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z)
Dec 02 09:07:33 np0005541914.localdomain podman[107778]: iscsid
Dec 02 09:07:33 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.timer: Failed to open /run/systemd/transient/f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.timer: No such file or directory
Dec 02 09:07:33 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Failed to open /run/systemd/transient/f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: No such file or directory
Dec 02 09:07:33 np0005541914.localdomain podman[107790]: 2025-12-02 09:07:33.055748617 +0000 UTC m=+0.083545957 container cleanup f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.4, architecture=x86_64, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, batch=17.1_20251118.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, container_name=iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, build-date=2025-11-18T23:44:13Z, name=rhosp17/openstack-iscsid, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Dec 02 09:07:33 np0005541914.localdomain systemd[1]: libpod-conmon-f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.scope: Deactivated successfully.
Dec 02 09:07:33 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.timer: Failed to open /run/systemd/transient/f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.timer: No such file or directory
Dec 02 09:07:33 np0005541914.localdomain systemd[1]: f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: Failed to open /run/systemd/transient/f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b.service: No such file or directory
Dec 02 09:07:33 np0005541914.localdomain podman[107806]: 2025-12-02 09:07:33.163876934 +0000 UTC m=+0.075637344 container cleanup f10238aaadb4d75ed7859697e235a88e611205b720938656ebb77f33b187267b (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, name=rhosp17/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20251118.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.12, build-date=2025-11-18T23:44:13Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, vendor=Red Hat, Inc., org.opencontainers.image.revision=5714445d3136fb8f8cd5e0726e4e3e709c68ad0d, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, release=1761123044, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public)
Dec 02 09:07:33 np0005541914.localdomain podman[107806]: iscsid
Dec 02 09:07:33 np0005541914.localdomain systemd[1]: tripleo_iscsid.service: Deactivated successfully.
Dec 02 09:07:33 np0005541914.localdomain systemd[1]: Stopped iscsid container.
Dec 02 09:07:33 np0005541914.localdomain sudo[107736]: pam_unix(sudo:session): session closed for user root
Dec 02 09:07:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-63f5c4d65539870ee2bafb1f7e39854f191dd3f1ae459b319446f5932294db9e-merged.mount: Deactivated successfully.
Dec 02 09:07:33 np0005541914.localdomain sudo[107909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wonvjvbakzcfgwkpdudiemrvghxgfdcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666453.3394654-116-125827298736557/AnsiballZ_systemd_service.py
Dec 02 09:07:33 np0005541914.localdomain sudo[107909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:07:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33014 DF PROTO=TCP SPT=59520 DPT=9882 SEQ=1649973150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E7D410000000001030307) 
Dec 02 09:07:33 np0005541914.localdomain python3.9[107911]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:07:33 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:07:34 np0005541914.localdomain systemd-rc-local-generator[107935]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:07:34 np0005541914.localdomain systemd-sysv-generator[107941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: Stopping logrotate_crond container...
Dec 02 09:07:34 np0005541914.localdomain crond[70671]: (CRON) INFO (Shutting down)
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: libpod-7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.scope: Deactivated successfully.
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: libpod-7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.scope: Consumed 1.116s CPU time.
Dec 02 09:07:34 np0005541914.localdomain podman[107951]: 2025-12-02 09:07:34.401092893 +0000 UTC m=+0.088745806 container died 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, build-date=2025-11-18T22:49:32Z, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20251118.1, io.buildah.version=1.41.4, tcib_managed=true, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, name=rhosp17/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible)
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.timer: Deactivated successfully.
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Failed to open /run/systemd/transient/7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: No such file or directory
Dec 02 09:07:34 np0005541914.localdomain podman[107951]: 2025-12-02 09:07:34.460702446 +0000 UTC m=+0.148355359 container cleanup 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.buildah.version=1.41.4, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, build-date=2025-11-18T22:49:32Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, architecture=x86_64, container_name=logrotate_crond, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1761123044, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp17/openstack-cron)
Dec 02 09:07:34 np0005541914.localdomain podman[107951]: logrotate_crond
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d4bf0a50fd432b1e17b5b60f382aa20fe21251bda35e0089667eec28efb9c70f-merged.mount: Deactivated successfully.
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae-userdata-shm.mount: Deactivated successfully.
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.timer: Failed to open /run/systemd/transient/7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.timer: No such file or directory
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Failed to open /run/systemd/transient/7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: No such file or directory
Dec 02 09:07:34 np0005541914.localdomain podman[107964]: 2025-12-02 09:07:34.511801718 +0000 UTC m=+0.099717850 container cleanup 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, name=rhosp17/openstack-cron, container_name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, build-date=2025-11-18T22:49:32Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 cron, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: libpod-conmon-7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.scope: Deactivated successfully.
Dec 02 09:07:34 np0005541914.localdomain podman[107993]: error opening file `/run/crun/7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae/status`: No such file or directory
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.timer: Failed to open /run/systemd/transient/7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.timer: No such file or directory
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: Failed to open /run/systemd/transient/7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae.service: No such file or directory
Dec 02 09:07:34 np0005541914.localdomain podman[107981]: 2025-12-02 09:07:34.629284161 +0000 UTC m=+0.084874876 container cleanup 7caefd5a7e818d6baf87bc722ccadf88c7c3356ced3737a5957b8c8fa456c1ae (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2025-11-18T22:49:32Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, vcs-type=git, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.4, container_name=logrotate_crond, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp17/openstack-cron)
Dec 02 09:07:34 np0005541914.localdomain podman[107981]: logrotate_crond
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully.
Dec 02 09:07:34 np0005541914.localdomain systemd[1]: Stopped logrotate_crond container.
Dec 02 09:07:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33015 DF PROTO=TCP SPT=59520 DPT=9882 SEQ=1649973150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E81620000000001030307) 
Dec 02 09:07:34 np0005541914.localdomain sudo[107909]: pam_unix(sudo:session): session closed for user root
Dec 02 09:07:35 np0005541914.localdomain sudo[108084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cjievdugggvvvvghmjdapaxxsyamjxnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666454.8101897-116-33801821078823/AnsiballZ_systemd_service.py
Dec 02 09:07:35 np0005541914.localdomain sudo[108084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:07:35 np0005541914.localdomain python3.9[108086]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:07:35 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:07:35 np0005541914.localdomain systemd-rc-local-generator[108113]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:07:35 np0005541914.localdomain systemd-sysv-generator[108119]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:07:35 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:07:35 np0005541914.localdomain systemd[1]: Stopping metrics_qdr container...
Dec 02 09:07:35 np0005541914.localdomain systemd[1]: tmp-crun.Plzjlz.mount: Deactivated successfully.
Dec 02 09:07:35 np0005541914.localdomain kernel: qdrouterd[54544]: segfault at 0 ip 00007f9dd87187cb sp 00007ffeb7d21e60 error 4 in libc.so.6[7f9dd86b5000+175000]
Dec 02 09:07:35 np0005541914.localdomain kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9
Dec 02 09:07:35 np0005541914.localdomain systemd[1]: Created slice Slice /system/systemd-coredump.
Dec 02 09:07:35 np0005541914.localdomain systemd[1]: Started Process Core Dump (PID 108140/UID 0).
Dec 02 09:07:36 np0005541914.localdomain systemd-coredump[108141]: Resource limits disable core dumping for process 54544 (qdrouterd).
Dec 02 09:07:36 np0005541914.localdomain systemd-coredump[108141]: Process 54544 (qdrouterd) of user 42465 dumped core.
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: libpod-67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.scope: Deactivated successfully.
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: libpod-67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.scope: Consumed 29.209s CPU time.
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: systemd-coredump@0-108140-0.service: Deactivated successfully.
Dec 02 09:07:36 np0005541914.localdomain podman[108127]: 2025-12-02 09:07:36.013406624 +0000 UTC m=+0.218432211 container died 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, version=17.1.12, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, release=1761123044, build-date=2025-11-18T22:49:46Z, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp17/openstack-qdrouterd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=)
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.timer: Deactivated successfully.
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Failed to open /run/systemd/transient/67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: No such file or directory
Dec 02 09:07:36 np0005541914.localdomain podman[108127]: 2025-12-02 09:07:36.076941077 +0000 UTC m=+0.281966584 container cleanup 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, vcs-type=git, build-date=2025-11-18T22:49:46Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1761123044, io.buildah.version=1.41.4, version=17.1.12)
Dec 02 09:07:36 np0005541914.localdomain podman[108127]: metrics_qdr
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.timer: Failed to open /run/systemd/transient/67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.timer: No such file or directory
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Failed to open /run/systemd/transient/67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: No such file or directory
Dec 02 09:07:36 np0005541914.localdomain podman[108145]: 2025-12-02 09:07:36.111998459 +0000 UTC m=+0.086130215 container cleanup 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, batch=17.1_20251118.1, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, build-date=2025-11-18T22:49:46Z, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1761123044, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: libpod-conmon-67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.scope: Deactivated successfully.
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.timer: Failed to open /run/systemd/transient/67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.timer: No such file or directory
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: Failed to open /run/systemd/transient/67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7.service: No such file or directory
Dec 02 09:07:36 np0005541914.localdomain podman[108162]: 2025-12-02 09:07:36.228206263 +0000 UTC m=+0.077643796 container cleanup 67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp17/openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.12, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2025-11-18T22:49:46Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'b56066700c0c3079c35d037ee6698236'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, vcs-ref=7ecaafae6fa9301c7dd5c0fca835eecf10dd147a)
Dec 02 09:07:36 np0005541914.localdomain podman[108162]: metrics_qdr
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'.
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: Stopped metrics_qdr container.
Dec 02 09:07:36 np0005541914.localdomain sudo[108084]: pam_unix(sudo:session): session closed for user root
Dec 02 09:07:36 np0005541914.localdomain sudo[108265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tbdqestyewyyfgdancaddnwfpcwcawcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666456.3874333-116-45580455189277/AnsiballZ_systemd_service.py
Dec 02 09:07:36 np0005541914.localdomain sudo[108265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:07:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33016 DF PROTO=TCP SPT=59520 DPT=9882 SEQ=1649973150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E89620000000001030307) 
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: tmp-crun.fJeRuO.mount: Deactivated successfully.
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-46d22fb86a8cbaa2935fad3e910e4610328c0a9c2837bb75cb2a0cd28ff52849-merged.mount: Deactivated successfully.
Dec 02 09:07:36 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-67eb451cf46dcbddf11ed7101760e30990f6e32137e18cc3ae855faa77667da7-userdata-shm.mount: Deactivated successfully.
Dec 02 09:07:36 np0005541914.localdomain python3.9[108267]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:07:36 np0005541914.localdomain sudo[108265]: pam_unix(sudo:session): session closed for user root
Dec 02 09:07:37 np0005541914.localdomain sudo[108358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dlylqnbyrhvbbvzpgoxxyhlawgnfyvyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666457.096627-116-139858606461469/AnsiballZ_systemd_service.py
Dec 02 09:07:37 np0005541914.localdomain sudo[108358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:07:37 np0005541914.localdomain python3.9[108360]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:07:37 np0005541914.localdomain sudo[108358]: pam_unix(sudo:session): session closed for user root
Dec 02 09:07:38 np0005541914.localdomain sudo[108451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dcbezyvmayqndecnjurbiwubrbhvyvma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666457.8140945-116-153276509826836/AnsiballZ_systemd_service.py
Dec 02 09:07:38 np0005541914.localdomain sudo[108451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:07:38 np0005541914.localdomain python3.9[108453]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:07:38 np0005541914.localdomain sudo[108451]: pam_unix(sudo:session): session closed for user root
Dec 02 09:07:38 np0005541914.localdomain sudo[108544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twiwrwarsfcwxcniblgcyuxqtvbfzkzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666458.5311098-116-185824890247032/AnsiballZ_systemd_service.py
Dec 02 09:07:38 np0005541914.localdomain sudo[108544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:07:39 np0005541914.localdomain python3.9[108546]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:07:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:07:39 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:07:39 np0005541914.localdomain podman[108548]: 2025-12-02 09:07:39.261705959 +0000 UTC m=+0.108956473 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, batch=17.1_20251118.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-ovn-controller, release=1761123044, maintainer=OpenStack TripleO Team, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com)
Dec 02 09:07:39 np0005541914.localdomain podman[108548]: 2025-12-02 09:07:39.29769789 +0000 UTC m=+0.144948374 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, url=https://www.redhat.com, version=17.1.12, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2025-11-18T23:34:05Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, vendor=Red Hat, Inc.)
Dec 02 09:07:39 np0005541914.localdomain systemd-sysv-generator[108594]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:07:39 np0005541914.localdomain systemd-rc-local-generator[108589]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:07:39 np0005541914.localdomain podman[108548]: unhealthy
Dec 02 09:07:39 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:07:39 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:07:39 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:07:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:07:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:07:39 np0005541914.localdomain systemd[1]: Stopping nova_compute container...
Dec 02 09:07:39 np0005541914.localdomain podman[108602]: Error: container 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e is not running
Dec 02 09:07:39 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=125/n/a
Dec 02 09:07:39 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 09:07:39 np0005541914.localdomain podman[108603]: 2025-12-02 09:07:39.73680822 +0000 UTC m=+0.191871690 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, io.buildah.version=1.41.4, name=rhosp17/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-19T00:14:25Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1761123044, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1)
Dec 02 09:07:39 np0005541914.localdomain podman[108603]: 2025-12-02 09:07:39.758964188 +0000 UTC m=+0.214027658 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20251118.1, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044, url=https://www.redhat.com, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, build-date=2025-11-19T00:14:25Z, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:07:39 np0005541914.localdomain podman[108603]: unhealthy
Dec 02 09:07:39 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:07:39 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:07:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4004 DF PROTO=TCP SPT=34714 DPT=9100 SEQ=1261654211 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52E97220000000001030307) 
Dec 02 09:07:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:07:42 np0005541914.localdomain systemd[1]: tmp-crun.Cs0bp8.mount: Deactivated successfully.
Dec 02 09:07:42 np0005541914.localdomain podman[108647]: 2025-12-02 09:07:42.845300538 +0000 UTC m=+0.097610746 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-11-19T00:36:58Z, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20251118.1, distribution-scope=public)
Dec 02 09:07:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18763 DF PROTO=TCP SPT=54336 DPT=9105 SEQ=2188115077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EA2620000000001030307) 
Dec 02 09:07:43 np0005541914.localdomain podman[108647]: 2025-12-02 09:07:43.212444327 +0000 UTC m=+0.464754575 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target)
Dec 02 09:07:43 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:07:45 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41527 DF PROTO=TCP SPT=35936 DPT=9105 SEQ=2769818466 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EAD220000000001030307) 
Dec 02 09:07:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33018 DF PROTO=TCP SPT=59520 DPT=9882 SEQ=1649973150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EB9220000000001030307) 
Dec 02 09:07:52 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17824 DF PROTO=TCP SPT=35444 DPT=9101 SEQ=1492685268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EC5220000000001030307) 
Dec 02 09:07:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42473 DF PROTO=TCP SPT=37852 DPT=9101 SEQ=1754784311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52ED9220000000001030307) 
Dec 02 09:07:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18766 DF PROTO=TCP SPT=54336 DPT=9105 SEQ=2188115077 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EDB220000000001030307) 
Dec 02 09:08:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43905 DF PROTO=TCP SPT=51142 DPT=9102 SEQ=890444739 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EE9220000000001030307) 
Dec 02 09:08:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52849 DF PROTO=TCP SPT=38178 DPT=9882 SEQ=556695907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EF6620000000001030307) 
Dec 02 09:08:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52850 DF PROTO=TCP SPT=38178 DPT=9882 SEQ=556695907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52EFE620000000001030307) 
Dec 02 09:08:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:08:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:08:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:08:10 np0005541914.localdomain podman[108672]: 2025-12-02 09:08:10.085585011 +0000 UTC m=+0.079336097 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, version=17.1.12, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2025-11-18T23:34:05Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, release=1761123044)
Dec 02 09:08:10 np0005541914.localdomain podman[108672]: 2025-12-02 09:08:10.099239219 +0000 UTC m=+0.092990315 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1761123044, architecture=x86_64, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp17/openstack-ovn-controller, container_name=ovn_controller, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container)
Dec 02 09:08:10 np0005541914.localdomain podman[108672]: unhealthy
Dec 02 09:08:10 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:08:10 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:08:10 np0005541914.localdomain podman[108671]: 2025-12-02 09:08:10.141738488 +0000 UTC m=+0.138945130 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, name=rhosp17/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.12, config_id=tripleo_step4, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 09:08:10 np0005541914.localdomain podman[108671]: 2025-12-02 09:08:10.155497349 +0000 UTC m=+0.152703931 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4, container_name=ovn_metadata_agent, url=https://www.redhat.com, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2025-11-19T00:14:25Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.4, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1761123044)
Dec 02 09:08:10 np0005541914.localdomain podman[108671]: unhealthy
Dec 02 09:08:10 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:08:10 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:08:10 np0005541914.localdomain podman[108670]: Error: container 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e is not running
Dec 02 09:08:10 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Main process exited, code=exited, status=125/n/a
Dec 02 09:08:10 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed with result 'exit-code'.
Dec 02 09:08:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17332 DF PROTO=TCP SPT=39816 DPT=9100 SEQ=817660967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F0D230000000001030307) 
Dec 02 09:08:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52558 DF PROTO=TCP SPT=36662 DPT=9105 SEQ=3234899454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F17A30000000001030307) 
Dec 02 09:08:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:08:13 np0005541914.localdomain podman[108721]: 2025-12-02 09:08:13.822669447 +0000 UTC m=+0.079369028 container health_status f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., container_name=nova_migration_target, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 09:08:14 np0005541914.localdomain podman[108721]: 2025-12-02 09:08:14.208098246 +0000 UTC m=+0.464797767 container exec_died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-compute, vendor=Red Hat, Inc., vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.12, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1)
Dec 02 09:08:14 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Deactivated successfully.
Dec 02 09:08:15 np0005541914.localdomain sudo[108745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:08:15 np0005541914.localdomain sudo[108745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:08:15 np0005541914.localdomain sudo[108745]: pam_unix(sudo:session): session closed for user root
Dec 02 09:08:15 np0005541914.localdomain sudo[108760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:08:15 np0005541914.localdomain sudo[108760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:08:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20172 DF PROTO=TCP SPT=41378 DPT=9102 SEQ=2341515912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F22CE0000000001030307) 
Dec 02 09:08:16 np0005541914.localdomain sudo[108760]: pam_unix(sudo:session): session closed for user root
Dec 02 09:08:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20174 DF PROTO=TCP SPT=41378 DPT=9102 SEQ=2341515912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F2EE20000000001030307) 
Dec 02 09:08:19 np0005541914.localdomain sudo[108806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:08:20 np0005541914.localdomain sudo[108806]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:08:20 np0005541914.localdomain sudo[108806]: pam_unix(sudo:session): session closed for user root
Dec 02 09:08:21 np0005541914.localdomain podman[108615]: time="2025-12-02T09:08:21Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL"
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: libpod-6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.scope: Deactivated successfully.
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: libpod-6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.scope: Consumed 28.668s CPU time.
Dec 02 09:08:21 np0005541914.localdomain podman[108615]: 2025-12-02 09:08:21.641166055 +0000 UTC m=+42.086446735 container died 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1761123044, version=17.1.12, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, config_id=tripleo_step5, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp17/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2025-11-19T00:36:58Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.timer: Deactivated successfully.
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed to open /run/systemd/transient/6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: No such file or directory
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-1e1d8b5716686b6ea155be98d0f313571788c49d87ac4366e7f84d4f947d1b6e-merged.mount: Deactivated successfully.
Dec 02 09:08:21 np0005541914.localdomain podman[108615]: 2025-12-02 09:08:21.703049087 +0000 UTC m=+42.148329777 container cleanup 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., release=1761123044, distribution-scope=public, version=17.1.12, io.openshift.expose-services=, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20251118.1, config_id=tripleo_step5, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, vcs-type=git, build-date=2025-11-19T00:36:58Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.buildah.version=1.41.4, url=https://www.redhat.com, architecture=x86_64)
Dec 02 09:08:21 np0005541914.localdomain podman[108615]: nova_compute
Dec 02 09:08:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42475 DF PROTO=TCP SPT=37852 DPT=9101 SEQ=1754784311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F39220000000001030307) 
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.timer: Failed to open /run/systemd/transient/6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.timer: No such file or directory
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed to open /run/systemd/transient/6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: No such file or directory
Dec 02 09:08:21 np0005541914.localdomain podman[108822]: 2025-12-02 09:08:21.762794944 +0000 UTC m=+0.114894005 container cleanup 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, release=1761123044, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: libpod-conmon-6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.scope: Deactivated successfully.
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.timer: Failed to open /run/systemd/transient/6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.timer: No such file or directory
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: Failed to open /run/systemd/transient/6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e.service: No such file or directory
Dec 02 09:08:21 np0005541914.localdomain podman[108838]: 2025-12-02 09:08:21.856626234 +0000 UTC m=+0.057754767 container cleanup 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2025-11-19T00:36:58Z, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, architecture=x86_64, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, container_name=nova_compute, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, name=rhosp17/openstack-nova-compute)
Dec 02 09:08:21 np0005541914.localdomain podman[108838]: nova_compute
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: tripleo_nova_compute.service: Deactivated successfully.
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: Stopped nova_compute container.
Dec 02 09:08:21 np0005541914.localdomain systemd[1]: tripleo_nova_compute.service: Consumed 1.098s CPU time, no IO.
Dec 02 09:08:21 np0005541914.localdomain sudo[108544]: pam_unix(sudo:session): session closed for user root
Dec 02 09:08:22 np0005541914.localdomain sudo[108938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udtupuclylszxmgimrllvhwmxczubuzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666501.974602-116-87389535358260/AnsiballZ_systemd_service.py
Dec 02 09:08:22 np0005541914.localdomain sudo[108938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:08:22 np0005541914.localdomain python3.9[108940]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:08:22 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:08:22 np0005541914.localdomain systemd-rc-local-generator[108963]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:08:22 np0005541914.localdomain systemd-sysv-generator[108969]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:08:22 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:08:22 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 09:08:22 np0005541914.localdomain systemd[1]: Stopping nova_migration_target container...
Dec 02 09:08:22 np0005541914.localdomain recover_tripleo_nova_virtqemud[108981]: 61907
Dec 02 09:08:22 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 09:08:22 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 09:08:23 np0005541914.localdomain sshd[70922]: Received signal 15; terminating.
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: libpod-f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.scope: Deactivated successfully.
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: libpod-f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.scope: Consumed 34.846s CPU time.
Dec 02 09:08:23 np0005541914.localdomain podman[108983]: 2025-12-02 09:08:23.032561139 +0000 UTC m=+0.072212070 container died f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., batch=17.1_20251118.1, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.4, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1761123044, io.openshift.expose-services=, container_name=nova_migration_target, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-compute, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4)
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.timer: Deactivated successfully.
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Failed to open /run/systemd/transient/f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: No such file or directory
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: tmp-crun.B9GroG.mount: Deactivated successfully.
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc-userdata-shm.mount: Deactivated successfully.
Dec 02 09:08:23 np0005541914.localdomain podman[108983]: 2025-12-02 09:08:23.082417214 +0000 UTC m=+0.122068095 container cleanup f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step4, tcib_managed=true, name=rhosp17/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.12, release=1761123044, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.4, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-19T00:36:58Z, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=)
Dec 02 09:08:23 np0005541914.localdomain podman[108983]: nova_migration_target
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.timer: Failed to open /run/systemd/transient/f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.timer: No such file or directory
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Failed to open /run/systemd/transient/f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: No such file or directory
Dec 02 09:08:23 np0005541914.localdomain podman[108995]: 2025-12-02 09:08:23.145872244 +0000 UTC m=+0.106546729 container cleanup f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, name=rhosp17/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, build-date=2025-11-19T00:36:58Z, io.buildah.version=1.41.4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., version=17.1.12, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com)
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: libpod-conmon-f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.scope: Deactivated successfully.
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.timer: Failed to open /run/systemd/transient/f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.timer: No such file or directory
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: Failed to open /run/systemd/transient/f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc.service: No such file or directory
Dec 02 09:08:23 np0005541914.localdomain podman[109011]: 2025-12-02 09:08:23.249227085 +0000 UTC m=+0.066437923 container cleanup f01a33154eba3fbaa7ce9b4db56bf033e3eca5bf0cc8dbf03b0a5a3e84e5b1dc (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2025-11-19T00:36:58Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-compute, version=17.1.12, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, config_id=tripleo_step4, container_name=nova_migration_target, io.buildah.version=1.41.4)
Dec 02 09:08:23 np0005541914.localdomain podman[109011]: nova_migration_target
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully.
Dec 02 09:08:23 np0005541914.localdomain systemd[1]: Stopped nova_migration_target container.
Dec 02 09:08:23 np0005541914.localdomain sudo[108938]: pam_unix(sudo:session): session closed for user root
Dec 02 09:08:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-becbc927e1a2defd8b98f9313e9ae54e436a645a48c9af865764923e7f3644aa-merged.mount: Deactivated successfully.
Dec 02 09:08:24 np0005541914.localdomain sudo[109113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-asxcucltlqfgeukcsyckcqioiqigkgdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666503.5303042-116-31308312886866/AnsiballZ_systemd_service.py
Dec 02 09:08:24 np0005541914.localdomain sudo[109113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:08:24 np0005541914.localdomain python3.9[109115]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:08:24 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:08:24 np0005541914.localdomain systemd-rc-local-generator[109138]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:08:24 np0005541914.localdomain systemd-sysv-generator[109143]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:08:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:08:24 np0005541914.localdomain systemd[1]: Stopping nova_virtlogd_wrapper container...
Dec 02 09:08:24 np0005541914.localdomain systemd[1]: libpod-fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b.scope: Deactivated successfully.
Dec 02 09:08:24 np0005541914.localdomain podman[109156]: 2025-12-02 09:08:24.855099929 +0000 UTC m=+0.077789989 container died fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, tcib_managed=true, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtlogd_wrapper, vcs-type=git, build-date=2025-11-19T00:35:22Z, managed_by=tripleo_ansible)
Dec 02 09:08:24 np0005541914.localdomain podman[109156]: 2025-12-02 09:08:24.902233801 +0000 UTC m=+0.124923811 container cleanup fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, version=17.1.12, build-date=2025-11-19T00:35:22Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, container_name=nova_virtlogd_wrapper, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt)
Dec 02 09:08:24 np0005541914.localdomain podman[109156]: nova_virtlogd_wrapper
Dec 02 09:08:24 np0005541914.localdomain podman[109170]: 2025-12-02 09:08:24.935125197 +0000 UTC m=+0.075501740 container cleanup fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, name=rhosp17/openstack-nova-libvirt, release=1761123044, tcib_managed=true, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, container_name=nova_virtlogd_wrapper, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05)
Dec 02 09:08:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98-merged.mount: Deactivated successfully.
Dec 02 09:08:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b-userdata-shm.mount: Deactivated successfully.
Dec 02 09:08:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42558 DF PROTO=TCP SPT=55940 DPT=9101 SEQ=2379699008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F4E620000000001030307) 
Dec 02 09:08:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20176 DF PROTO=TCP SPT=41378 DPT=9102 SEQ=2341515912 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F5F220000000001030307) 
Dec 02 09:08:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22713 DF PROTO=TCP SPT=43664 DPT=9882 SEQ=3084388863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F67A10000000001030307) 
Dec 02 09:08:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22714 DF PROTO=TCP SPT=43664 DPT=9882 SEQ=3084388863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F6BA20000000001030307) 
Dec 02 09:08:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22715 DF PROTO=TCP SPT=43664 DPT=9882 SEQ=3084388863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F73A30000000001030307) 
Dec 02 09:08:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47140 DF PROTO=TCP SPT=56506 DPT=9100 SEQ=154281784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F81220000000001030307) 
Dec 02 09:08:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:08:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:08:40 np0005541914.localdomain podman[109186]: 2025-12-02 09:08:40.590494749 +0000 UTC m=+0.083784403 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.buildah.version=1.41.4, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, build-date=2025-11-18T23:34:05Z, version=17.1.12, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, release=1761123044, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 09:08:40 np0005541914.localdomain podman[109185]: 2025-12-02 09:08:40.638031943 +0000 UTC m=+0.134205946 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp17/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_metadata_agent, release=1761123044, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-19T00:14:25Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, version=17.1.12, batch=17.1_20251118.1, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public)
Dec 02 09:08:40 np0005541914.localdomain podman[109186]: 2025-12-02 09:08:40.660445638 +0000 UTC m=+0.153735272 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, version=17.1.12, release=1761123044, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, batch=17.1_20251118.1, architecture=x86_64)
Dec 02 09:08:40 np0005541914.localdomain podman[109186]: unhealthy
Dec 02 09:08:40 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:08:40 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:08:40 np0005541914.localdomain podman[109185]: 2025-12-02 09:08:40.677670535 +0000 UTC m=+0.173844498 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, release=1761123044, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, name=rhosp17/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Dec 02 09:08:40 np0005541914.localdomain podman[109185]: unhealthy
Dec 02 09:08:40 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:08:40 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:08:42 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17333 DF PROTO=TCP SPT=39816 DPT=9100 SEQ=817660967 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F8B220000000001030307) 
Dec 02 09:08:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52019 DF PROTO=TCP SPT=47254 DPT=9102 SEQ=2912015696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52F97FE0000000001030307) 
Dec 02 09:08:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22717 DF PROTO=TCP SPT=43664 DPT=9882 SEQ=3084388863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FA3220000000001030307) 
Dec 02 09:08:50 np0005541914.localdomain sshd[109228]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:08:50 np0005541914.localdomain sshd[109228]: Invalid user ubuntu from 45.148.10.240 port 42254
Dec 02 09:08:51 np0005541914.localdomain sshd[109228]: Connection closed by invalid user ubuntu 45.148.10.240 port 42254 [preauth]
Dec 02 09:08:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42560 DF PROTO=TCP SPT=55940 DPT=9101 SEQ=2379699008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FAF220000000001030307) 
Dec 02 09:08:56 np0005541914.localdomain sshd[109230]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:08:57 np0005541914.localdomain sshd[109230]: Invalid user solv from 80.94.92.182 port 57860
Dec 02 09:08:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4811 DF PROTO=TCP SPT=47240 DPT=9101 SEQ=2026001691 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FC3A30000000001030307) 
Dec 02 09:08:57 np0005541914.localdomain sshd[109230]: Connection closed by invalid user solv 80.94.92.182 port 57860 [preauth]
Dec 02 09:09:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52023 DF PROTO=TCP SPT=47254 DPT=9102 SEQ=2912015696 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FD5230000000001030307) 
Dec 02 09:09:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8208 DF PROTO=TCP SPT=40790 DPT=9882 SEQ=684199273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FDCD10000000001030307) 
Dec 02 09:09:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8209 DF PROTO=TCP SPT=40790 DPT=9882 SEQ=684199273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FE0E30000000001030307) 
Dec 02 09:09:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8210 DF PROTO=TCP SPT=40790 DPT=9882 SEQ=684199273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FE8E20000000001030307) 
Dec 02 09:09:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59327 DF PROTO=TCP SPT=55048 DPT=9100 SEQ=2405397969 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD52FF7230000000001030307) 
Dec 02 09:09:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:09:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:09:10 np0005541914.localdomain systemd[1]: tmp-crun.aOAflb.mount: Deactivated successfully.
Dec 02 09:09:10 np0005541914.localdomain podman[109233]: 2025-12-02 09:09:10.875208421 +0000 UTC m=+0.125948053 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, release=1761123044, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public)
Dec 02 09:09:10 np0005541914.localdomain podman[109232]: 2025-12-02 09:09:10.840738447 +0000 UTC m=+0.094059308 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2025-11-19T00:14:25Z, managed_by=tripleo_ansible, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, container_name=ovn_metadata_agent, vcs-type=git, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp17/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 09:09:10 np0005541914.localdomain podman[109233]: 2025-12-02 09:09:10.915598966 +0000 UTC m=+0.166338548 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, version=17.1.12, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, build-date=2025-11-18T23:34:05Z, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, config_id=tripleo_step4, io.buildah.version=1.41.4, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, release=1761123044)
Dec 02 09:09:10 np0005541914.localdomain podman[109233]: unhealthy
Dec 02 09:09:10 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:09:10 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:09:10 np0005541914.localdomain podman[109232]: 2025-12-02 09:09:10.972523897 +0000 UTC m=+0.225844828 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, io.buildah.version=1.41.4, release=1761123044, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, container_name=ovn_metadata_agent, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, maintainer=OpenStack TripleO Team, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64)
Dec 02 09:09:10 np0005541914.localdomain podman[109232]: unhealthy
Dec 02 09:09:10 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:09:10 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:09:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14592 DF PROTO=TCP SPT=46348 DPT=9105 SEQ=3247990143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53001E20000000001030307) 
Dec 02 09:09:15 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52563 DF PROTO=TCP SPT=36662 DPT=9105 SEQ=3234899454 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5300D220000000001030307) 
Dec 02 09:09:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56448 DF PROTO=TCP SPT=43798 DPT=9102 SEQ=3098469964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53019230000000001030307) 
Dec 02 09:09:20 np0005541914.localdomain sudo[109271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:09:20 np0005541914.localdomain sudo[109271]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:09:20 np0005541914.localdomain sudo[109271]: pam_unix(sudo:session): session closed for user root
Dec 02 09:09:20 np0005541914.localdomain sudo[109286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 09:09:20 np0005541914.localdomain sudo[109286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:09:20 np0005541914.localdomain sudo[109286]: pam_unix(sudo:session): session closed for user root
Dec 02 09:09:20 np0005541914.localdomain sudo[109322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:09:20 np0005541914.localdomain sudo[109322]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:09:20 np0005541914.localdomain sudo[109322]: pam_unix(sudo:session): session closed for user root
Dec 02 09:09:20 np0005541914.localdomain sudo[109337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:09:20 np0005541914.localdomain sudo[109337]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:09:21 np0005541914.localdomain sudo[109337]: pam_unix(sudo:session): session closed for user root
Dec 02 09:09:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4813 DF PROTO=TCP SPT=47240 DPT=9101 SEQ=2026001691 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53023230000000001030307) 
Dec 02 09:09:22 np0005541914.localdomain sudo[109384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:09:22 np0005541914.localdomain sudo[109384]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:09:22 np0005541914.localdomain sudo[109384]: pam_unix(sudo:session): session closed for user root
Dec 02 09:09:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15816 DF PROTO=TCP SPT=59078 DPT=9101 SEQ=200685646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53038A30000000001030307) 
Dec 02 09:09:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56450 DF PROTO=TCP SPT=43798 DPT=9102 SEQ=3098469964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53049220000000001030307) 
Dec 02 09:09:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28179 DF PROTO=TCP SPT=39914 DPT=9882 SEQ=726185489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53052010000000001030307) 
Dec 02 09:09:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28180 DF PROTO=TCP SPT=39914 DPT=9882 SEQ=726185489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53056220000000001030307) 
Dec 02 09:09:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28181 DF PROTO=TCP SPT=39914 DPT=9882 SEQ=726185489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5305E220000000001030307) 
Dec 02 09:09:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19784 DF PROTO=TCP SPT=55354 DPT=9100 SEQ=1019660958 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5306B220000000001030307) 
Dec 02 09:09:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:09:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:09:41 np0005541914.localdomain systemd[1]: tmp-crun.yAJmhh.mount: Deactivated successfully.
Dec 02 09:09:41 np0005541914.localdomain podman[109399]: 2025-12-02 09:09:41.073923271 +0000 UTC m=+0.073548191 container health_status b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, version=17.1.12, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2025-11-18T23:34:05Z, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=ovn_controller, architecture=x86_64, vcs-type=git, name=rhosp17/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044)
Dec 02 09:09:41 np0005541914.localdomain podman[109400]: 2025-12-02 09:09:41.132945226 +0000 UTC m=+0.126851230 container health_status 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.4, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, release=1761123044, distribution-scope=public, version=17.1.12, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 09:09:41 np0005541914.localdomain podman[109399]: 2025-12-02 09:09:41.168758292 +0000 UTC m=+0.168383212 container exec_died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.12, release=1761123044, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.12 17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, container_name=ovn_controller, build-date=2025-11-18T23:34:05Z, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.4, managed_by=tripleo_ansible, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, distribution-scope=public, architecture=x86_64)
Dec 02 09:09:41 np0005541914.localdomain podman[109399]: unhealthy
Dec 02 09:09:41 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:09:41 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed with result 'exit-code'.
Dec 02 09:09:41 np0005541914.localdomain podman[109400]: 2025-12-02 09:09:41.224150145 +0000 UTC m=+0.218056219 container exec_died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp17/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.12, batch=17.1_20251118.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, release=1761123044, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.openshift.expose-services=, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_id=tripleo_step4)
Dec 02 09:09:41 np0005541914.localdomain podman[109400]: unhealthy
Dec 02 09:09:41 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:09:41 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed with result 'exit-code'.
Dec 02 09:09:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56247 DF PROTO=TCP SPT=54086 DPT=9105 SEQ=4183364387 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53077220000000001030307) 
Dec 02 09:09:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56985 DF PROTO=TCP SPT=34814 DPT=9102 SEQ=1041132309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530825E0000000001030307) 
Dec 02 09:09:48 np0005541914.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Dec 02 09:09:48 np0005541914.localdomain recover_tripleo_nova_virtqemud[109440]: 61907
Dec 02 09:09:48 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Dec 02 09:09:48 np0005541914.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Dec 02 09:09:48 np0005541914.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing.
Dec 02 09:09:48 np0005541914.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 61145 (conmon) with signal SIGKILL.
Dec 02 09:09:48 np0005541914.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL
Dec 02 09:09:48 np0005541914.localdomain systemd[1]: libpod-conmon-fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b.scope: Deactivated successfully.
Dec 02 09:09:49 np0005541914.localdomain podman[109452]: error opening file `/run/crun/fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b/status`: No such file or directory
Dec 02 09:09:49 np0005541914.localdomain podman[109441]: 2025-12-02 09:09:49.065936548 +0000 UTC m=+0.069901879 container cleanup fae4e39fbb099510d3e0c1e1174ca074b49d200a38fbd9e586e6ffec92dff36b (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, config_id=tripleo_step3, build-date=2025-11-19T00:35:22Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, version=17.1.12, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vcs-type=git, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=)
Dec 02 09:09:49 np0005541914.localdomain podman[109441]: nova_virtlogd_wrapper
Dec 02 09:09:49 np0005541914.localdomain systemd[1]: tmp-crun.c1fb1P.mount: Deactivated successfully.
Dec 02 09:09:49 np0005541914.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'.
Dec 02 09:09:49 np0005541914.localdomain systemd[1]: Stopped nova_virtlogd_wrapper container.
Dec 02 09:09:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56987 DF PROTO=TCP SPT=34814 DPT=9102 SEQ=1041132309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5308E630000000001030307) 
Dec 02 09:09:49 np0005541914.localdomain sudo[109113]: pam_unix(sudo:session): session closed for user root
Dec 02 09:09:49 np0005541914.localdomain sudo[109543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wuzglalmbzwfvqpzewslozwlxffbrhpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666589.2159448-116-95324100419976/AnsiballZ_systemd_service.py
Dec 02 09:09:49 np0005541914.localdomain sudo[109543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:09:49 np0005541914.localdomain python3.9[109545]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:09:49 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:09:49 np0005541914.localdomain systemd-sysv-generator[109575]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:09:49 np0005541914.localdomain systemd-rc-local-generator[109571]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:09:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:09:50 np0005541914.localdomain systemd[1]: Stopping nova_virtnodedevd container...
Dec 02 09:09:50 np0005541914.localdomain systemd[1]: libpod-380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3.scope: Deactivated successfully.
Dec 02 09:09:50 np0005541914.localdomain systemd[1]: libpod-380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3.scope: Consumed 1.459s CPU time.
Dec 02 09:09:50 np0005541914.localdomain podman[109586]: 2025-12-02 09:09:50.241867821 +0000 UTC m=+0.076352555 container died 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1761123044, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, version=17.1.12, container_name=nova_virtnodedevd, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., konflux.additional-tags=17.1.12 17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=)
Dec 02 09:09:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3-userdata-shm.mount: Deactivated successfully.
Dec 02 09:09:50 np0005541914.localdomain podman[109586]: 2025-12-02 09:09:50.279246275 +0000 UTC m=+0.113730959 container cleanup 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.12, container_name=nova_virtnodedevd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, release=1761123044, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git)
Dec 02 09:09:50 np0005541914.localdomain podman[109586]: nova_virtnodedevd
Dec 02 09:09:50 np0005541914.localdomain podman[109601]: 2025-12-02 09:09:50.345361847 +0000 UTC m=+0.093291384 container cleanup 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_virtnodedevd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, release=1761123044, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible)
Dec 02 09:09:50 np0005541914.localdomain systemd[1]: libpod-conmon-380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3.scope: Deactivated successfully.
Dec 02 09:09:50 np0005541914.localdomain podman[109628]: error opening file `/run/crun/380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3/status`: No such file or directory
Dec 02 09:09:50 np0005541914.localdomain podman[109617]: 2025-12-02 09:09:50.433348618 +0000 UTC m=+0.056870791 container cleanup 380936fd184910f75d26f0daadef5c0e8a2dd7b0ccf2a1fab48d9a9f23b2b8f3 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtnodedevd, version=17.1.12, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, distribution-scope=public, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, managed_by=tripleo_ansible, io.openshift.expose-services=, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, vcs-type=git, vendor=Red Hat, Inc., release=1761123044, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true)
Dec 02 09:09:50 np0005541914.localdomain podman[109617]: nova_virtnodedevd
Dec 02 09:09:50 np0005541914.localdomain systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully.
Dec 02 09:09:50 np0005541914.localdomain systemd[1]: Stopped nova_virtnodedevd container.
Dec 02 09:09:50 np0005541914.localdomain sudo[109543]: pam_unix(sudo:session): session closed for user root
Dec 02 09:09:50 np0005541914.localdomain sudo[109719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-elhmgtwwfzbvxllovwhirzkmujrlizfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666590.5860398-116-193663473368685/AnsiballZ_systemd_service.py
Dec 02 09:09:50 np0005541914.localdomain sudo[109719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:09:51 np0005541914.localdomain python3.9[109721]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:09:51 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a-merged.mount: Deactivated successfully.
Dec 02 09:09:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15818 DF PROTO=TCP SPT=59078 DPT=9101 SEQ=200685646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53099230000000001030307) 
Dec 02 09:09:52 np0005541914.localdomain sshd[109723]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:09:52 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:09:52 np0005541914.localdomain systemd-sysv-generator[109751]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:09:52 np0005541914.localdomain systemd-rc-local-generator[109746]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:09:52 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:09:52 np0005541914.localdomain systemd[1]: Stopping nova_virtproxyd container...
Dec 02 09:09:52 np0005541914.localdomain systemd[1]: libpod-f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358.scope: Deactivated successfully.
Dec 02 09:09:52 np0005541914.localdomain podman[109764]: 2025-12-02 09:09:52.630751712 +0000 UTC m=+0.061968806 container died f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, container_name=nova_virtproxyd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, managed_by=tripleo_ansible, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.buildah.version=1.41.4, config_id=tripleo_step3, tcib_managed=true, url=https://www.redhat.com, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20251118.1, build-date=2025-11-19T00:35:22Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 02 09:09:52 np0005541914.localdomain systemd[1]: tmp-crun.r9vTAM.mount: Deactivated successfully.
Dec 02 09:09:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358-userdata-shm.mount: Deactivated successfully.
Dec 02 09:09:52 np0005541914.localdomain podman[109764]: 2025-12-02 09:09:52.670498408 +0000 UTC m=+0.101715462 container cleanup f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, url=https://www.redhat.com, container_name=nova_virtproxyd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, architecture=x86_64, vendor=Red Hat, Inc.)
Dec 02 09:09:52 np0005541914.localdomain podman[109764]: nova_virtproxyd
Dec 02 09:09:52 np0005541914.localdomain podman[109779]: 2025-12-02 09:09:52.714435072 +0000 UTC m=+0.071376504 container cleanup f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, build-date=2025-11-19T00:35:22Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_id=tripleo_step3, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, name=rhosp17/openstack-nova-libvirt)
Dec 02 09:09:52 np0005541914.localdomain systemd[1]: libpod-conmon-f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358.scope: Deactivated successfully.
Dec 02 09:09:52 np0005541914.localdomain podman[109806]: error opening file `/run/crun/f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358/status`: No such file or directory
Dec 02 09:09:52 np0005541914.localdomain podman[109794]: 2025-12-02 09:09:52.82393084 +0000 UTC m=+0.071503217 container cleanup f29a5f0fd81e25a86ce75a1b4ca9b6107de1c1148568894414cd55dacac64358 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, build-date=2025-11-19T00:35:22Z, release=1761123044, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, batch=17.1_20251118.1, container_name=nova_virtproxyd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 09:09:52 np0005541914.localdomain podman[109794]: nova_virtproxyd
Dec 02 09:09:52 np0005541914.localdomain systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully.
Dec 02 09:09:52 np0005541914.localdomain systemd[1]: Stopped nova_virtproxyd container.
Dec 02 09:09:52 np0005541914.localdomain sudo[109719]: pam_unix(sudo:session): session closed for user root
Dec 02 09:09:53 np0005541914.localdomain sudo[109897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-myshetasvroaanclmpjcikuunidemtoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666592.9903016-116-238343918730913/AnsiballZ_systemd_service.py
Dec 02 09:09:53 np0005541914.localdomain sudo[109897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:09:53 np0005541914.localdomain sshd[109723]: Invalid user ubuntu from 43.225.159.111 port 38712
Dec 02 09:09:53 np0005541914.localdomain python3.9[109899]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:09:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084-merged.mount: Deactivated successfully.
Dec 02 09:09:53 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:09:53 np0005541914.localdomain sshd[109723]: Received disconnect from 43.225.159.111 port 38712:11:  [preauth]
Dec 02 09:09:53 np0005541914.localdomain sshd[109723]: Disconnected from invalid user ubuntu 43.225.159.111 port 38712 [preauth]
Dec 02 09:09:53 np0005541914.localdomain systemd-rc-local-generator[109923]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:09:53 np0005541914.localdomain systemd-sysv-generator[109929]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:09:53 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:09:53 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully.
Dec 02 09:09:53 np0005541914.localdomain systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m.
Dec 02 09:09:54 np0005541914.localdomain systemd[1]: Stopping nova_virtqemud container...
Dec 02 09:09:54 np0005541914.localdomain systemd[1]: libpod-cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7.scope: Deactivated successfully.
Dec 02 09:09:54 np0005541914.localdomain systemd[1]: libpod-cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7.scope: Consumed 2.120s CPU time.
Dec 02 09:09:54 np0005541914.localdomain podman[109940]: 2025-12-02 09:09:54.105430394 +0000 UTC m=+0.083655410 container stop cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, build-date=2025-11-19T00:35:22Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.12, batch=17.1_20251118.1, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, container_name=nova_virtqemud, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 02 09:09:54 np0005541914.localdomain podman[109940]: 2025-12-02 09:09:54.140720813 +0000 UTC m=+0.118945839 container died cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, vendor=Red Hat, Inc., build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, container_name=nova_virtqemud, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com)
Dec 02 09:09:54 np0005541914.localdomain podman[109940]: 2025-12-02 09:09:54.181959774 +0000 UTC m=+0.160184770 container cleanup cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, tcib_managed=true, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_virtqemud, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, batch=17.1_20251118.1, release=1761123044, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, version=17.1.12, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 09:09:54 np0005541914.localdomain podman[109940]: nova_virtqemud
Dec 02 09:09:54 np0005541914.localdomain podman[109953]: 2025-12-02 09:09:54.20471399 +0000 UTC m=+0.081593146 container cleanup cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, release=1761123044, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, build-date=2025-11-19T00:35:22Z, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 09:09:54 np0005541914.localdomain systemd[1]: libpod-conmon-cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7.scope: Deactivated successfully.
Dec 02 09:09:54 np0005541914.localdomain podman[109983]: error opening file `/run/crun/cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7/status`: No such file or directory
Dec 02 09:09:54 np0005541914.localdomain podman[109972]: 2025-12-02 09:09:54.317974034 +0000 UTC m=+0.076186611 container cleanup cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtqemud, name=rhosp17/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, version=17.1.12, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, konflux.additional-tags=17.1.12 17.1_20251118.1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible)
Dec 02 09:09:54 np0005541914.localdomain podman[109972]: nova_virtqemud
Dec 02 09:09:54 np0005541914.localdomain systemd[1]: tripleo_nova_virtqemud.service: Deactivated successfully.
Dec 02 09:09:54 np0005541914.localdomain systemd[1]: Stopped nova_virtqemud container.
Dec 02 09:09:54 np0005541914.localdomain sudo[109897]: pam_unix(sudo:session): session closed for user root
Dec 02 09:09:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6-merged.mount: Deactivated successfully.
Dec 02 09:09:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cdeb0b383234c27a470905ec1b27681b773b0e8391f64240e9c886e50faf6aa7-userdata-shm.mount: Deactivated successfully.
Dec 02 09:09:54 np0005541914.localdomain sudo[110074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kokiddcyanmflporaovncdorhesodtsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666594.4945984-116-54500585179652/AnsiballZ_systemd_service.py
Dec 02 09:09:54 np0005541914.localdomain sudo[110074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:09:55 np0005541914.localdomain python3.9[110076]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:09:55 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:09:55 np0005541914.localdomain systemd-rc-local-generator[110101]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:09:55 np0005541914.localdomain systemd-sysv-generator[110106]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:09:55 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:09:55 np0005541914.localdomain systemd[1]: Starting dnf makecache...
Dec 02 09:09:55 np0005541914.localdomain sudo[110074]: pam_unix(sudo:session): session closed for user root
Dec 02 09:09:55 np0005541914.localdomain dnf[110114]: Updating Subscription Management repositories.
Dec 02 09:09:55 np0005541914.localdomain sudo[110204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qawhvyqlxgnaaboiumwjtehpuilyppiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666595.6238787-116-223706878594183/AnsiballZ_systemd_service.py
Dec 02 09:09:55 np0005541914.localdomain sudo[110204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:09:56 np0005541914.localdomain python3.9[110206]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:09:56 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:09:56 np0005541914.localdomain systemd-rc-local-generator[110231]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:09:56 np0005541914.localdomain systemd-sysv-generator[110235]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:09:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:09:56 np0005541914.localdomain systemd[1]: Stopping nova_virtsecretd container...
Dec 02 09:09:56 np0005541914.localdomain systemd[1]: libpod-c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10.scope: Deactivated successfully.
Dec 02 09:09:56 np0005541914.localdomain podman[110246]: 2025-12-02 09:09:56.572010941 +0000 UTC m=+0.060868643 container died c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, com.redhat.component=openstack-nova-libvirt-container, version=17.1.12, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, release=1761123044, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, distribution-scope=public, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtsecretd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 02 09:09:56 np0005541914.localdomain podman[110246]: 2025-12-02 09:09:56.618042148 +0000 UTC m=+0.106899860 container cleanup c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, release=1761123044, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64, io.openshift.expose-services=, build-date=2025-11-19T00:35:22Z, name=rhosp17/openstack-nova-libvirt, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtsecretd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, batch=17.1_20251118.1, io.buildah.version=1.41.4, distribution-scope=public)
Dec 02 09:09:56 np0005541914.localdomain podman[110246]: nova_virtsecretd
Dec 02 09:09:56 np0005541914.localdomain podman[110259]: 2025-12-02 09:09:56.650420519 +0000 UTC m=+0.069352482 container cleanup c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, build-date=2025-11-19T00:35:22Z, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., name=rhosp17/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtsecretd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, release=1761123044)
Dec 02 09:09:56 np0005541914.localdomain systemd[1]: libpod-conmon-c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10.scope: Deactivated successfully.
Dec 02 09:09:56 np0005541914.localdomain podman[110285]: error opening file `/run/crun/c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10/status`: No such file or directory
Dec 02 09:09:56 np0005541914.localdomain podman[110276]: 2025-12-02 09:09:56.770079759 +0000 UTC m=+0.089205669 container cleanup c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.buildah.version=1.41.4, container_name=nova_virtsecretd, url=https://www.redhat.com, vcs-type=git, build-date=2025-11-19T00:35:22Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, release=1761123044, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20251118.1, config_id=tripleo_step3, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, architecture=x86_64)
Dec 02 09:09:56 np0005541914.localdomain podman[110276]: nova_virtsecretd
Dec 02 09:09:56 np0005541914.localdomain systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully.
Dec 02 09:09:56 np0005541914.localdomain systemd[1]: Stopped nova_virtsecretd container.
Dec 02 09:09:56 np0005541914.localdomain sudo[110204]: pam_unix(sudo:session): session closed for user root
Dec 02 09:09:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30926 DF PROTO=TCP SPT=44932 DPT=9101 SEQ=154037446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530ADE20000000001030307) 
Dec 02 09:09:57 np0005541914.localdomain sudo[110377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yzyoqbjucmhdhodrakraytqaxgqqkbpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666596.9242268-116-91843949716472/AnsiballZ_systemd_service.py
Dec 02 09:09:57 np0005541914.localdomain sudo[110377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:09:57 np0005541914.localdomain dnf[110114]: Metadata cache refreshed recently.
Dec 02 09:09:57 np0005541914.localdomain python3.9[110379]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:09:57 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:09:57 np0005541914.localdomain systemd-rc-local-generator[110408]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:09:57 np0005541914.localdomain systemd-sysv-generator[110411]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:09:57 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:09:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c52787fc6278444352c6e9fc9a31127ec6ce41ddcd861f2779c74dbb5cb69b10-userdata-shm.mount: Deactivated successfully.
Dec 02 09:09:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a-merged.mount: Deactivated successfully.
Dec 02 09:09:57 np0005541914.localdomain systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 02 09:09:57 np0005541914.localdomain systemd[1]: Finished dnf makecache.
Dec 02 09:09:57 np0005541914.localdomain systemd[1]: dnf-makecache.service: Consumed 2.040s CPU time.
Dec 02 09:09:57 np0005541914.localdomain systemd[1]: Stopping nova_virtstoraged container...
Dec 02 09:09:57 np0005541914.localdomain systemd[1]: libpod-f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379.scope: Deactivated successfully.
Dec 02 09:09:57 np0005541914.localdomain podman[110419]: 2025-12-02 09:09:57.90697398 +0000 UTC m=+0.081427893 container died f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, release=1761123044, url=https://www.redhat.com, batch=17.1_20251118.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.buildah.version=1.41.4, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2025-11-19T00:35:22Z, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.12 17.1_20251118.1, container_name=nova_virtstoraged, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible)
Dec 02 09:09:57 np0005541914.localdomain podman[110419]: 2025-12-02 09:09:57.942949059 +0000 UTC m=+0.117402992 container cleanup f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, tcib_managed=true, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, container_name=nova_virtstoraged, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.12, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 02 09:09:57 np0005541914.localdomain podman[110419]: nova_virtstoraged
Dec 02 09:09:57 np0005541914.localdomain podman[110433]: 2025-12-02 09:09:57.983753847 +0000 UTC m=+0.065333108 container cleanup f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20251118.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1761123044, name=rhosp17/openstack-nova-libvirt, build-date=2025-11-19T00:35:22Z, url=https://www.redhat.com, io.buildah.version=1.41.4, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public, config_id=tripleo_step3, version=17.1.12, maintainer=OpenStack TripleO Team, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtstoraged, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1)
Dec 02 09:09:58 np0005541914.localdomain systemd[1]: libpod-conmon-f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379.scope: Deactivated successfully.
Dec 02 09:09:58 np0005541914.localdomain podman[110459]: error opening file `/run/crun/f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379/status`: No such file or directory
Dec 02 09:09:58 np0005541914.localdomain podman[110448]: 2025-12-02 09:09:58.079527216 +0000 UTC m=+0.064544895 container cleanup f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, managed_by=tripleo_ansible, build-date=2025-11-19T00:35:22Z, io.buildah.version=1.41.4, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, release=1761123044, version=17.1.12, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '51230b537c6b56095225b7a0a6b952d0'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtstoraged, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-nova-libvirt, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, architecture=x86_64)
Dec 02 09:09:58 np0005541914.localdomain podman[110448]: nova_virtstoraged
Dec 02 09:09:58 np0005541914.localdomain systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully.
Dec 02 09:09:58 np0005541914.localdomain systemd[1]: Stopped nova_virtstoraged container.
Dec 02 09:09:58 np0005541914.localdomain sudo[110377]: pam_unix(sudo:session): session closed for user root
Dec 02 09:09:58 np0005541914.localdomain sudo[110552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubersykvtuddpzwbsfxxfhpnemlueywg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666598.2331169-116-163631125424127/AnsiballZ_systemd_service.py
Dec 02 09:09:58 np0005541914.localdomain sudo[110552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:09:58 np0005541914.localdomain python3.9[110554]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:09:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca-merged.mount: Deactivated successfully.
Dec 02 09:09:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f40fa7232d1891a6529748e28e7c1664ec9dcff5f8e50a1478bc8a15766c7379-userdata-shm.mount: Deactivated successfully.
Dec 02 09:09:58 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:09:58 np0005541914.localdomain systemd-rc-local-generator[110581]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:09:58 np0005541914.localdomain systemd-sysv-generator[110587]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:09:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: Stopping ovn_controller container...
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: tmp-crun.DOW8Ub.mount: Deactivated successfully.
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: libpod-b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.scope: Deactivated successfully.
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: libpod-b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.scope: Consumed 2.586s CPU time.
Dec 02 09:09:59 np0005541914.localdomain podman[110595]: 2025-12-02 09:09:59.21879933 +0000 UTC m=+0.081069911 container died b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1761123044, tcib_managed=true, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.openshift.expose-services=, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, version=17.1.12, io.buildah.version=1.41.4, konflux.additional-tags=17.1.12 17.1_20251118.1, managed_by=tripleo_ansible, name=rhosp17/openstack-ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.timer: Deactivated successfully.
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed to open /run/systemd/transient/b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: No such file or directory
Dec 02 09:09:59 np0005541914.localdomain podman[110595]: 2025-12-02 09:09:59.259420192 +0000 UTC m=+0.121690773 container cleanup b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, config_id=tripleo_step4, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, release=1761123044, version=17.1.12, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2025-11-18T23:34:05Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 09:09:59 np0005541914.localdomain podman[110595]: ovn_controller
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.timer: Failed to open /run/systemd/transient/b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.timer: No such file or directory
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed to open /run/systemd/transient/b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: No such file or directory
Dec 02 09:09:59 np0005541914.localdomain podman[110608]: 2025-12-02 09:09:59.312816625 +0000 UTC m=+0.083020560 container cleanup b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2025-11-18T23:34:05Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp17/openstack-ovn-controller, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, batch=17.1_20251118.1, container_name=ovn_controller, vcs-type=git, io.openshift.expose-services=, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, release=1761123044, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, version=17.1.12)
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: libpod-conmon-b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.scope: Deactivated successfully.
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.timer: Failed to open /run/systemd/transient/b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.timer: No such file or directory
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: Failed to open /run/systemd/transient/b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d.service: No such file or directory
Dec 02 09:09:59 np0005541914.localdomain podman[110624]: 2025-12-02 09:09:59.388056966 +0000 UTC m=+0.050049032 container cleanup b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20251118.1, io.buildah.version=1.41.4, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.12 17.1_20251118.1, version=17.1.12, vendor=Red Hat, Inc., container_name=ovn_controller, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, distribution-scope=public, release=1761123044, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, name=rhosp17/openstack-ovn-controller, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, build-date=2025-11-18T23:34:05Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Dec 02 09:09:59 np0005541914.localdomain podman[110624]: ovn_controller
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: tripleo_ovn_controller.service: Deactivated successfully.
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: Stopped ovn_controller container.
Dec 02 09:09:59 np0005541914.localdomain sudo[110552]: pam_unix(sudo:session): session closed for user root
Dec 02 09:09:59 np0005541914.localdomain sudo[110725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-drbqofpnzzhvitngqylusuzkpxbxnexn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666599.5332174-116-52642917569125/AnsiballZ_systemd_service.py
Dec 02 09:09:59 np0005541914.localdomain sudo[110725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-8d25cd45e405537f342915e53026fb2ea6ae337ec52f5b72439f9a37d98e6337-merged.mount: Deactivated successfully.
Dec 02 09:09:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d-userdata-shm.mount: Deactivated successfully.
Dec 02 09:10:00 np0005541914.localdomain python3.9[110727]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:10:00 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:10:00 np0005541914.localdomain systemd-rc-local-generator[110751]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:10:00 np0005541914.localdomain systemd-sysv-generator[110757]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:10:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:10:00 np0005541914.localdomain systemd[1]: Stopping ovn_metadata_agent container...
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: libpod-6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.scope: Deactivated successfully.
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: libpod-6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.scope: Consumed 9.520s CPU time.
Dec 02 09:10:01 np0005541914.localdomain podman[110769]: 2025-12-02 09:10:01.313161603 +0000 UTC m=+0.814524272 container died 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ovn_metadata_agent, url=https://www.redhat.com, batch=17.1_20251118.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-19T00:14:25Z, vcs-type=git, managed_by=tripleo_ansible, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.12, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.timer: Deactivated successfully.
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed to open /run/systemd/transient/6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: No such file or directory
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b-userdata-shm.mount: Deactivated successfully.
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a895fb8ef70030e2b27c789af81d44f745a1833cc8dfd0936f4f5302c8f5799a-merged.mount: Deactivated successfully.
Dec 02 09:10:01 np0005541914.localdomain podman[110769]: 2025-12-02 09:10:01.371386564 +0000 UTC m=+0.872749213 container cleanup 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20251118.1, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1761123044, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.12, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, url=https://www.redhat.com, io.buildah.version=1.41.4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.12 17.1_20251118.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:10:01 np0005541914.localdomain podman[110769]: ovn_metadata_agent
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.timer: Failed to open /run/systemd/transient/6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.timer: No such file or directory
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed to open /run/systemd/transient/6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: No such file or directory
Dec 02 09:10:01 np0005541914.localdomain podman[110783]: 2025-12-02 09:10:01.409886982 +0000 UTC m=+0.083822055 container cleanup 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, batch=17.1_20251118.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, url=https://www.redhat.com, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, build-date=2025-11-19T00:14:25Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, konflux.additional-tags=17.1.12 17.1_20251118.1, name=rhosp17/openstack-neutron-metadata-agent-ovn, version=17.1.12, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1761123044, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=)
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: libpod-conmon-6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.scope: Deactivated successfully.
Dec 02 09:10:01 np0005541914.localdomain podman[110811]: error opening file `/run/crun/6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b/status`: No such file or directory
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.timer: Failed to open /run/systemd/transient/6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.timer: No such file or directory
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: Failed to open /run/systemd/transient/6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b.service: No such file or directory
Dec 02 09:10:01 np0005541914.localdomain podman[110799]: 2025-12-02 09:10:01.513613474 +0000 UTC m=+0.071023943 container cleanup 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, name=rhosp17/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1761123044, vendor=Red Hat, Inc., build-date=2025-11-19T00:14:25Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.4, batch=17.1_20251118.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, version=17.1.12, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c)
Dec 02 09:10:01 np0005541914.localdomain podman[110799]: ovn_metadata_agent
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully.
Dec 02 09:10:01 np0005541914.localdomain systemd[1]: Stopped ovn_metadata_agent container.
Dec 02 09:10:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56989 DF PROTO=TCP SPT=34814 DPT=9102 SEQ=1041132309 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530BF220000000001030307) 
Dec 02 09:10:01 np0005541914.localdomain sudo[110725]: pam_unix(sudo:session): session closed for user root
Dec 02 09:10:01 np0005541914.localdomain sudo[110902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kknnndcbofvjgyvymfloltsfmrevmmhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666601.6652448-116-171075840493204/AnsiballZ_systemd_service.py
Dec 02 09:10:01 np0005541914.localdomain sudo[110902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:10:02 np0005541914.localdomain python3.9[110904]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:10:02 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:10:02 np0005541914.localdomain systemd-rc-local-generator[110930]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:10:02 np0005541914.localdomain systemd-sysv-generator[110933]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:10:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:10:02 np0005541914.localdomain sudo[110902]: pam_unix(sudo:session): session closed for user root
Dec 02 09:10:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12965 DF PROTO=TCP SPT=48056 DPT=9882 SEQ=547223113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530C7300000000001030307) 
Dec 02 09:10:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12966 DF PROTO=TCP SPT=48056 DPT=9882 SEQ=547223113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530CB230000000001030307) 
Dec 02 09:10:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12967 DF PROTO=TCP SPT=48056 DPT=9882 SEQ=547223113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530D3230000000001030307) 
Dec 02 09:10:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26792 DF PROTO=TCP SPT=46334 DPT=9100 SEQ=4060823706 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530E1220000000001030307) 
Dec 02 09:10:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52667 DF PROTO=TCP SPT=38722 DPT=9105 SEQ=4171385029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530EC620000000001030307) 
Dec 02 09:10:15 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14597 DF PROTO=TCP SPT=46348 DPT=9105 SEQ=3247990143 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD530F7220000000001030307) 
Dec 02 09:10:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12969 DF PROTO=TCP SPT=48056 DPT=9882 SEQ=547223113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53103220000000001030307) 
Dec 02 09:10:22 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30928 DF PROTO=TCP SPT=44932 DPT=9101 SEQ=154037446 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5310F220000000001030307) 
Dec 02 09:10:22 np0005541914.localdomain sudo[110957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:10:22 np0005541914.localdomain sudo[110957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:10:22 np0005541914.localdomain sudo[110957]: pam_unix(sudo:session): session closed for user root
Dec 02 09:10:22 np0005541914.localdomain sudo[110972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:10:22 np0005541914.localdomain sudo[110972]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:10:22 np0005541914.localdomain sudo[110972]: pam_unix(sudo:session): session closed for user root
Dec 02 09:10:23 np0005541914.localdomain sudo[111019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:10:23 np0005541914.localdomain sudo[111019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:10:23 np0005541914.localdomain sudo[111019]: pam_unix(sudo:session): session closed for user root
Dec 02 09:10:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37505 DF PROTO=TCP SPT=59904 DPT=9101 SEQ=1754290358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53123220000000001030307) 
Dec 02 09:10:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52670 DF PROTO=TCP SPT=38722 DPT=9105 SEQ=4171385029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53125230000000001030307) 
Dec 02 09:10:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34520 DF PROTO=TCP SPT=47500 DPT=9102 SEQ=757529416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53133220000000001030307) 
Dec 02 09:10:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9598 DF PROTO=TCP SPT=37754 DPT=9882 SEQ=1079819386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53140620000000001030307) 
Dec 02 09:10:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9599 DF PROTO=TCP SPT=37754 DPT=9882 SEQ=1079819386 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53148620000000001030307) 
Dec 02 09:10:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36885 DF PROTO=TCP SPT=53100 DPT=9100 SEQ=2961383613 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53157220000000001030307) 
Dec 02 09:10:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32005 DF PROTO=TCP SPT=54762 DPT=9105 SEQ=3236264239 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53161620000000001030307) 
Dec 02 09:10:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7552 DF PROTO=TCP SPT=52780 DPT=9102 SEQ=999050472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5316CBE0000000001030307) 
Dec 02 09:10:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7554 DF PROTO=TCP SPT=52780 DPT=9102 SEQ=999050472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53178E30000000001030307) 
Dec 02 09:10:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37507 DF PROTO=TCP SPT=59904 DPT=9101 SEQ=1754290358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53183220000000001030307) 
Dec 02 09:10:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43170 DF PROTO=TCP SPT=54284 DPT=9101 SEQ=529320762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53198620000000001030307) 
Dec 02 09:11:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7556 DF PROTO=TCP SPT=52780 DPT=9102 SEQ=999050472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531A9230000000001030307) 
Dec 02 09:11:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63290 DF PROTO=TCP SPT=34284 DPT=9882 SEQ=2949267371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531B1900000000001030307) 
Dec 02 09:11:03 np0005541914.localdomain sshd[111034]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:11:04 np0005541914.localdomain sshd[111034]: Invalid user ubuntu from 45.148.10.240 port 53458
Dec 02 09:11:04 np0005541914.localdomain sshd[111034]: Connection closed by invalid user ubuntu 45.148.10.240 port 53458 [preauth]
Dec 02 09:11:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63291 DF PROTO=TCP SPT=34284 DPT=9882 SEQ=2949267371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531B5A20000000001030307) 
Dec 02 09:11:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63292 DF PROTO=TCP SPT=34284 DPT=9882 SEQ=2949267371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531BDA20000000001030307) 
Dec 02 09:11:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34842 DF PROTO=TCP SPT=59378 DPT=9100 SEQ=654284128 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531CB220000000001030307) 
Dec 02 09:11:12 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36886 DF PROTO=TCP SPT=53100 DPT=9100 SEQ=2961383613 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531D5230000000001030307) 
Dec 02 09:11:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44201 DF PROTO=TCP SPT=51550 DPT=9102 SEQ=2399346664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531E1EF0000000001030307) 
Dec 02 09:11:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63294 DF PROTO=TCP SPT=34284 DPT=9882 SEQ=2949267371 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531ED220000000001030307) 
Dec 02 09:11:22 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43172 DF PROTO=TCP SPT=54284 DPT=9101 SEQ=529320762 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD531F99F0000000001030307) 
Dec 02 09:11:22 np0005541914.localdomain sshd[105756]: Received disconnect from 192.168.122.30 port 51026:11: disconnected by user
Dec 02 09:11:22 np0005541914.localdomain sshd[105756]: Disconnected from user zuul 192.168.122.30 port 51026
Dec 02 09:11:22 np0005541914.localdomain sshd[105741]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:11:22 np0005541914.localdomain systemd[1]: session-36.scope: Deactivated successfully.
Dec 02 09:11:22 np0005541914.localdomain systemd[1]: session-36.scope: Consumed 18.508s CPU time.
Dec 02 09:11:22 np0005541914.localdomain systemd-logind[760]: Session 36 logged out. Waiting for processes to exit.
Dec 02 09:11:22 np0005541914.localdomain systemd-logind[760]: Removed session 36.
Dec 02 09:11:23 np0005541914.localdomain sudo[111036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:11:23 np0005541914.localdomain sudo[111036]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:11:23 np0005541914.localdomain sudo[111036]: pam_unix(sudo:session): session closed for user root
Dec 02 09:11:23 np0005541914.localdomain sudo[111051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:11:23 np0005541914.localdomain sudo[111051]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:11:24 np0005541914.localdomain systemd[1]: tmp-crun.6NbGsy.mount: Deactivated successfully.
Dec 02 09:11:24 np0005541914.localdomain podman[111140]: 2025-12-02 09:11:24.503288539 +0000 UTC m=+0.076277338 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:11:24 np0005541914.localdomain podman[111140]: 2025-12-02 09:11:24.601751976 +0000 UTC m=+0.174740765 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, release=1763362218)
Dec 02 09:11:24 np0005541914.localdomain sudo[111051]: pam_unix(sudo:session): session closed for user root
Dec 02 09:11:24 np0005541914.localdomain sudo[111207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:11:24 np0005541914.localdomain sudo[111207]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:11:24 np0005541914.localdomain sudo[111207]: pam_unix(sudo:session): session closed for user root
Dec 02 09:11:25 np0005541914.localdomain sudo[111222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:11:25 np0005541914.localdomain sudo[111222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:11:25 np0005541914.localdomain sudo[111222]: pam_unix(sudo:session): session closed for user root
Dec 02 09:11:26 np0005541914.localdomain sudo[111269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:11:26 np0005541914.localdomain sudo[111269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:11:26 np0005541914.localdomain sudo[111269]: pam_unix(sudo:session): session closed for user root
Dec 02 09:11:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1928 DF PROTO=TCP SPT=54738 DPT=9101 SEQ=89299610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5320D620000000001030307) 
Dec 02 09:11:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44205 DF PROTO=TCP SPT=51550 DPT=9102 SEQ=2399346664 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5321D220000000001030307) 
Dec 02 09:11:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23642 DF PROTO=TCP SPT=52856 DPT=9882 SEQ=3339190473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53226C10000000001030307) 
Dec 02 09:11:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23643 DF PROTO=TCP SPT=52856 DPT=9882 SEQ=3339190473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5322AE20000000001030307) 
Dec 02 09:11:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23644 DF PROTO=TCP SPT=52856 DPT=9882 SEQ=3339190473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53232E20000000001030307) 
Dec 02 09:11:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42322 DF PROTO=TCP SPT=39816 DPT=9100 SEQ=4098914354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53241220000000001030307) 
Dec 02 09:11:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4070 DF PROTO=TCP SPT=57934 DPT=9105 SEQ=2366396435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5324BE20000000001030307) 
Dec 02 09:11:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25777 DF PROTO=TCP SPT=49174 DPT=9102 SEQ=209736273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532571E0000000001030307) 
Dec 02 09:11:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25779 DF PROTO=TCP SPT=49174 DPT=9102 SEQ=209736273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53263230000000001030307) 
Dec 02 09:11:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1930 DF PROTO=TCP SPT=54738 DPT=9101 SEQ=89299610 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5326D220000000001030307) 
Dec 02 09:11:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40830 DF PROTO=TCP SPT=35152 DPT=9101 SEQ=2088900458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53282A30000000001030307) 
Dec 02 09:12:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25781 DF PROTO=TCP SPT=49174 DPT=9102 SEQ=209736273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53293230000000001030307) 
Dec 02 09:12:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37029 DF PROTO=TCP SPT=34378 DPT=9882 SEQ=1608111940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5329BF10000000001030307) 
Dec 02 09:12:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37030 DF PROTO=TCP SPT=34378 DPT=9882 SEQ=1608111940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5329FE30000000001030307) 
Dec 02 09:12:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37031 DF PROTO=TCP SPT=34378 DPT=9882 SEQ=1608111940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532A7E20000000001030307) 
Dec 02 09:12:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37709 DF PROTO=TCP SPT=60128 DPT=9100 SEQ=1773251718 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532B5220000000001030307) 
Dec 02 09:12:12 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42323 DF PROTO=TCP SPT=39816 DPT=9100 SEQ=4098914354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532BF220000000001030307) 
Dec 02 09:12:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50118 DF PROTO=TCP SPT=35034 DPT=9102 SEQ=986971140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532CC4F0000000001030307) 
Dec 02 09:12:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37033 DF PROTO=TCP SPT=34378 DPT=9882 SEQ=1608111940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532D7230000000001030307) 
Dec 02 09:12:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40832 DF PROTO=TCP SPT=35152 DPT=9101 SEQ=2088900458 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532E3220000000001030307) 
Dec 02 09:12:26 np0005541914.localdomain sudo[111284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:12:26 np0005541914.localdomain sudo[111284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:12:26 np0005541914.localdomain sudo[111284]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:26 np0005541914.localdomain sudo[111299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:12:26 np0005541914.localdomain sudo[111299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:12:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23663 DF PROTO=TCP SPT=48094 DPT=9101 SEQ=3243340766 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD532F7E20000000001030307) 
Dec 02 09:12:27 np0005541914.localdomain sudo[111299]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:28 np0005541914.localdomain sudo[111347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:12:28 np0005541914.localdomain sudo[111347]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:12:28 np0005541914.localdomain sudo[111347]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50122 DF PROTO=TCP SPT=35034 DPT=9102 SEQ=986971140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53309220000000001030307) 
Dec 02 09:12:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=714 DF PROTO=TCP SPT=32882 DPT=9882 SEQ=4106522726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53311210000000001030307) 
Dec 02 09:12:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=715 DF PROTO=TCP SPT=32882 DPT=9882 SEQ=4106522726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53315220000000001030307) 
Dec 02 09:12:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=716 DF PROTO=TCP SPT=32882 DPT=9882 SEQ=4106522726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5331D220000000001030307) 
Dec 02 09:12:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56104 DF PROTO=TCP SPT=42270 DPT=9100 SEQ=3870687681 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5332B220000000001030307) 
Dec 02 09:12:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64517 DF PROTO=TCP SPT=59300 DPT=9105 SEQ=3427386491 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53336220000000001030307) 
Dec 02 09:12:45 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4075 DF PROTO=TCP SPT=57934 DPT=9105 SEQ=2366396435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53341220000000001030307) 
Dec 02 09:12:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=718 DF PROTO=TCP SPT=32882 DPT=9882 SEQ=4106522726 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5334D220000000001030307) 
Dec 02 09:12:52 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23665 DF PROTO=TCP SPT=48094 DPT=9101 SEQ=3243340766 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53359220000000001030307) 
Dec 02 09:12:52 np0005541914.localdomain sshd[111362]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:12:52 np0005541914.localdomain sshd[111362]: Accepted publickey for zuul from 192.168.122.30 port 55214 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:12:52 np0005541914.localdomain systemd-logind[760]: New session 37 of user zuul.
Dec 02 09:12:52 np0005541914.localdomain systemd[1]: Started Session 37 of User zuul.
Dec 02 09:12:52 np0005541914.localdomain sshd[111362]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:12:52 np0005541914.localdomain sudo[111441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqgeemdxvzbayguipalvlokjmwplidhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666772.279814-566-97818170688092/AnsiballZ_file.py
Dec 02 09:12:52 np0005541914.localdomain sudo[111441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:52 np0005541914.localdomain python3.9[111443]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:52 np0005541914.localdomain sudo[111441]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:53 np0005541914.localdomain sudo[111533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iwiltpuxfttspkfxzouccydvhewlktfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666772.8345733-566-159048266543253/AnsiballZ_file.py
Dec 02 09:12:53 np0005541914.localdomain sudo[111533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:53 np0005541914.localdomain python3.9[111535]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:53 np0005541914.localdomain sudo[111533]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:53 np0005541914.localdomain sudo[111625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ntckbsflhzyuwzurmrjpgrazyauoafwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666773.4312222-566-265446136064740/AnsiballZ_file.py
Dec 02 09:12:53 np0005541914.localdomain sudo[111625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:53 np0005541914.localdomain python3.9[111627]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:53 np0005541914.localdomain sudo[111625]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:54 np0005541914.localdomain sudo[111717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rkqgoogoipvndoxqihdndzpbjkjfcfdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666773.9811828-566-95043798987541/AnsiballZ_file.py
Dec 02 09:12:54 np0005541914.localdomain sudo[111717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:54 np0005541914.localdomain python3.9[111719]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:54 np0005541914.localdomain sudo[111717]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:54 np0005541914.localdomain sudo[111809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwqzizplxeiaikjiwbvaycedakpdxoeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666774.5268185-566-5445656584856/AnsiballZ_file.py
Dec 02 09:12:54 np0005541914.localdomain sudo[111809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:54 np0005541914.localdomain python3.9[111811]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:54 np0005541914.localdomain sudo[111809]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:55 np0005541914.localdomain sudo[111901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tgfsmmftfstbwnxwntoehbsecbkwooqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666775.0492377-566-58013469773456/AnsiballZ_file.py
Dec 02 09:12:55 np0005541914.localdomain sudo[111901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:55 np0005541914.localdomain python3.9[111903]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:55 np0005541914.localdomain sudo[111901]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:55 np0005541914.localdomain sudo[111993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-brtohqjajasdncxeuwskzbcbtxdwcwiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666775.5434494-566-149754908299963/AnsiballZ_file.py
Dec 02 09:12:55 np0005541914.localdomain sudo[111993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:55 np0005541914.localdomain python3.9[111995]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:56 np0005541914.localdomain sudo[111993]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:56 np0005541914.localdomain sudo[112085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zfpektxdptdyelfyhvyhyrnvdnpqwvgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666776.1386254-566-78309495300320/AnsiballZ_file.py
Dec 02 09:12:56 np0005541914.localdomain sudo[112085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:56 np0005541914.localdomain python3.9[112087]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:56 np0005541914.localdomain sudo[112085]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:56 np0005541914.localdomain sudo[112177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ozegfwuwrfzgszmewmdyqodhblhjfxip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666776.659913-566-50336038158188/AnsiballZ_file.py
Dec 02 09:12:56 np0005541914.localdomain sudo[112177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:57 np0005541914.localdomain python3.9[112179]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:57 np0005541914.localdomain sudo[112177]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33 DF PROTO=TCP SPT=40840 DPT=9101 SEQ=1232125286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5336D230000000001030307) 
Dec 02 09:12:57 np0005541914.localdomain sudo[112269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btuwirtnjeomdvmidgzrddeuttyfqcqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666777.2084115-566-167134040860032/AnsiballZ_file.py
Dec 02 09:12:57 np0005541914.localdomain sudo[112269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:57 np0005541914.localdomain python3.9[112271]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:57 np0005541914.localdomain sudo[112269]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64520 DF PROTO=TCP SPT=59300 DPT=9105 SEQ=3427386491 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5336F220000000001030307) 
Dec 02 09:12:58 np0005541914.localdomain sudo[112361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gadfqmyjsliaaceuemywuidjutvqlqnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666777.7751791-566-106200731409722/AnsiballZ_file.py
Dec 02 09:12:58 np0005541914.localdomain sudo[112361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:58 np0005541914.localdomain python3.9[112363]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:58 np0005541914.localdomain sudo[112361]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:58 np0005541914.localdomain sudo[112453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xkhwisjthymuhyjkysazydiezxbwchvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666778.3308985-566-220812101744417/AnsiballZ_file.py
Dec 02 09:12:58 np0005541914.localdomain sudo[112453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:58 np0005541914.localdomain python3.9[112455]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:58 np0005541914.localdomain sudo[112453]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:59 np0005541914.localdomain sudo[112545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-azmlxfohquzumbesgckzillbluylsgxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666778.8793879-566-218701078262039/AnsiballZ_file.py
Dec 02 09:12:59 np0005541914.localdomain sudo[112545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:59 np0005541914.localdomain python3.9[112547]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:59 np0005541914.localdomain sudo[112545]: pam_unix(sudo:session): session closed for user root
Dec 02 09:12:59 np0005541914.localdomain sudo[112637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qcoqmnrigoktcddmgufblgsofabuzqtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666779.420526-566-165318897082471/AnsiballZ_file.py
Dec 02 09:12:59 np0005541914.localdomain sudo[112637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:12:59 np0005541914.localdomain python3.9[112639]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:12:59 np0005541914.localdomain sudo[112637]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:00 np0005541914.localdomain sudo[112729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wofigmdzsufjgpwbqjjcxigtmjomqqvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666779.9454696-566-225374291768036/AnsiballZ_file.py
Dec 02 09:13:00 np0005541914.localdomain sudo[112729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:00 np0005541914.localdomain python3.9[112731]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:00 np0005541914.localdomain sudo[112729]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:00 np0005541914.localdomain sudo[112821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gouiudcmukkpvyxkscxidrrkkoinauhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666780.4874687-566-264074552146252/AnsiballZ_file.py
Dec 02 09:13:00 np0005541914.localdomain sudo[112821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:00 np0005541914.localdomain python3.9[112823]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:00 np0005541914.localdomain sudo[112821]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61830 DF PROTO=TCP SPT=58996 DPT=9102 SEQ=2908849782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5337D220000000001030307) 
Dec 02 09:13:01 np0005541914.localdomain sudo[112913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjxtxxqyravbieshydmrbqcgnymbudzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666781.0916502-566-78933870039533/AnsiballZ_file.py
Dec 02 09:13:01 np0005541914.localdomain sudo[112913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:01 np0005541914.localdomain python3.9[112915]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:01 np0005541914.localdomain sudo[112913]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:01 np0005541914.localdomain sudo[113005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xvnikelvwhlhiusjcvhmjnuwnmjfvizz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666781.602626-566-239863243932264/AnsiballZ_file.py
Dec 02 09:13:01 np0005541914.localdomain sudo[113005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:02 np0005541914.localdomain python3.9[113007]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:02 np0005541914.localdomain sudo[113005]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:02 np0005541914.localdomain sudo[113097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qqvhsoxbvgrzerhtfktxartzjndaqbuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666782.1400566-566-131545038027141/AnsiballZ_file.py
Dec 02 09:13:02 np0005541914.localdomain sudo[113097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:02 np0005541914.localdomain python3.9[113099]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:02 np0005541914.localdomain sudo[113097]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:02 np0005541914.localdomain sudo[113189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-abawchchmwkqgtcpgpbhqjzsauocpiom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666782.6609638-566-55487068447369/AnsiballZ_file.py
Dec 02 09:13:02 np0005541914.localdomain sudo[113189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:03 np0005541914.localdomain python3.9[113191]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:03 np0005541914.localdomain sudo[113189]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:03 np0005541914.localdomain sudo[113281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mrqdhdapeisqlqirgfzpnupbkwhumlsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666783.1672242-566-7158444144320/AnsiballZ_file.py
Dec 02 09:13:03 np0005541914.localdomain sudo[113281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:03 np0005541914.localdomain python3.9[113283]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:03 np0005541914.localdomain sudo[113281]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59566 DF PROTO=TCP SPT=56196 DPT=9882 SEQ=1383719902 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5338A630000000001030307) 
Dec 02 09:13:04 np0005541914.localdomain sudo[113373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cgsvpttgomqpvyvasrcqslabumdtavem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666784.6246805-1016-236725837426991/AnsiballZ_file.py
Dec 02 09:13:04 np0005541914.localdomain sudo[113373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:05 np0005541914.localdomain python3.9[113375]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:05 np0005541914.localdomain sudo[113373]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:05 np0005541914.localdomain sudo[113465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zcnccffyauibrurhsebfmipihqmavnbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666785.2033787-1016-186205370242270/AnsiballZ_file.py
Dec 02 09:13:05 np0005541914.localdomain sudo[113465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:05 np0005541914.localdomain python3.9[113467]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:05 np0005541914.localdomain sudo[113465]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:06 np0005541914.localdomain sudo[113557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qyloougsrijqbfedernlnaufxnparoeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666785.781507-1016-198937777414557/AnsiballZ_file.py
Dec 02 09:13:06 np0005541914.localdomain sudo[113557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:06 np0005541914.localdomain python3.9[113559]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:06 np0005541914.localdomain sudo[113557]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:06 np0005541914.localdomain sudo[113649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ftusciixqvklcwgzrukordyxbppxyqxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666786.3660364-1016-30970006692437/AnsiballZ_file.py
Dec 02 09:13:06 np0005541914.localdomain sudo[113649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59567 DF PROTO=TCP SPT=56196 DPT=9882 SEQ=1383719902 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53392620000000001030307) 
Dec 02 09:13:06 np0005541914.localdomain python3.9[113651]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:06 np0005541914.localdomain sudo[113649]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:07 np0005541914.localdomain sudo[113741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mhutvbzygtxcrxlczdzhzysbwgymwdyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666786.9145849-1016-153203802678094/AnsiballZ_file.py
Dec 02 09:13:07 np0005541914.localdomain sudo[113741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:07 np0005541914.localdomain python3.9[113743]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:07 np0005541914.localdomain sudo[113741]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:07 np0005541914.localdomain sudo[113833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jvplizfskpzwybyqgnxfjkmjtycvbwzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666787.4432943-1016-3156767654807/AnsiballZ_file.py
Dec 02 09:13:07 np0005541914.localdomain sudo[113833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:07 np0005541914.localdomain python3.9[113835]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:07 np0005541914.localdomain sudo[113833]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:08 np0005541914.localdomain sudo[113925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vitatmwvqqaojrzbmifohiooycyzcfna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666787.9253674-1016-41826234082605/AnsiballZ_file.py
Dec 02 09:13:08 np0005541914.localdomain sudo[113925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:08 np0005541914.localdomain python3.9[113927]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:08 np0005541914.localdomain sudo[113925]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:08 np0005541914.localdomain sudo[114017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tqyilpwgvbcalssrbouxkdaqcntugtuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666788.4710114-1016-164528520593516/AnsiballZ_file.py
Dec 02 09:13:08 np0005541914.localdomain sudo[114017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:08 np0005541914.localdomain python3.9[114019]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:08 np0005541914.localdomain sudo[114017]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:09 np0005541914.localdomain sudo[114109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uvwuhlhpiufepsqqttgzzggbxnfoweim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666789.0158331-1016-94196590721944/AnsiballZ_file.py
Dec 02 09:13:09 np0005541914.localdomain sudo[114109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:09 np0005541914.localdomain python3.9[114111]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:09 np0005541914.localdomain sudo[114109]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:09 np0005541914.localdomain sudo[114201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dupwauxsikqriceqpnelcbehbycofwge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666789.5848467-1016-189338131092896/AnsiballZ_file.py
Dec 02 09:13:09 np0005541914.localdomain sudo[114201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:09 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62276 DF PROTO=TCP SPT=36024 DPT=9100 SEQ=2944991294 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5339F220000000001030307) 
Dec 02 09:13:10 np0005541914.localdomain python3.9[114203]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:10 np0005541914.localdomain sudo[114201]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:10 np0005541914.localdomain sudo[114293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oiysfoekammsjahppncycdnbeyjzyraq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666790.1870089-1016-191143397311651/AnsiballZ_file.py
Dec 02 09:13:10 np0005541914.localdomain sudo[114293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:10 np0005541914.localdomain python3.9[114295]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:10 np0005541914.localdomain sudo[114293]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:10 np0005541914.localdomain sudo[114385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-msyypflybqrbagmigfgrfnywuwepiura ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666790.763131-1016-10142741767327/AnsiballZ_file.py
Dec 02 09:13:10 np0005541914.localdomain sudo[114385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:11 np0005541914.localdomain python3.9[114387]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:11 np0005541914.localdomain sudo[114385]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:11 np0005541914.localdomain sudo[114477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vjftuysfmoledhrxtvfixhaoibgizqtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666791.3026135-1016-26780426191907/AnsiballZ_file.py
Dec 02 09:13:11 np0005541914.localdomain sudo[114477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:11 np0005541914.localdomain python3.9[114479]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:11 np0005541914.localdomain sudo[114477]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:12 np0005541914.localdomain sudo[114569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wvudaoecmlupefmpklxxeefsfcmwaeph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666791.86238-1016-47677605466167/AnsiballZ_file.py
Dec 02 09:13:12 np0005541914.localdomain sudo[114569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:12 np0005541914.localdomain python3.9[114571]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:12 np0005541914.localdomain sudo[114569]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:12 np0005541914.localdomain sshd[114592]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:13:12 np0005541914.localdomain sudo[114663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xmqkftdnbgfuhkglbnyizbvhmwkqwmqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666792.4147234-1016-205304624625164/AnsiballZ_file.py
Dec 02 09:13:12 np0005541914.localdomain sudo[114663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:12 np0005541914.localdomain sshd[114592]: Invalid user ubuntu from 45.148.10.240 port 34126
Dec 02 09:13:12 np0005541914.localdomain python3.9[114665]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:12 np0005541914.localdomain sudo[114663]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:12 np0005541914.localdomain sshd[114592]: Connection closed by invalid user ubuntu 45.148.10.240 port 34126 [preauth]
Dec 02 09:13:13 np0005541914.localdomain sudo[114755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ufizvvxnxktrhegvhcyrnefphksvcxzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666792.8570404-1016-271137614127267/AnsiballZ_file.py
Dec 02 09:13:13 np0005541914.localdomain sudo[114755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45637 DF PROTO=TCP SPT=56896 DPT=9105 SEQ=4178981333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533AB620000000001030307) 
Dec 02 09:13:13 np0005541914.localdomain python3.9[114757]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:13 np0005541914.localdomain sudo[114755]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:13 np0005541914.localdomain sudo[114847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-seixfufxeybyhqyriyyawfjovzaejupf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666793.3880968-1016-83087300944336/AnsiballZ_file.py
Dec 02 09:13:13 np0005541914.localdomain sudo[114847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:13 np0005541914.localdomain python3.9[114849]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:13 np0005541914.localdomain sudo[114847]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:14 np0005541914.localdomain sudo[114939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nyvcmseglbnvqtlhtildtmgrfkxysbms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666793.921301-1016-197326944167583/AnsiballZ_file.py
Dec 02 09:13:14 np0005541914.localdomain sudo[114939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:14 np0005541914.localdomain python3.9[114941]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:14 np0005541914.localdomain sudo[114939]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:14 np0005541914.localdomain sudo[115031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lcaeuznhbhxugvzhtkpkbupdohscucsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666794.4494107-1016-118982668062573/AnsiballZ_file.py
Dec 02 09:13:14 np0005541914.localdomain sudo[115031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:14 np0005541914.localdomain python3.9[115033]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:14 np0005541914.localdomain sudo[115031]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:15 np0005541914.localdomain sudo[115123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ggrsdirsvduuvhuunpiogattbqaqprea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666794.9983659-1016-269114000086834/AnsiballZ_file.py
Dec 02 09:13:15 np0005541914.localdomain sudo[115123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:15 np0005541914.localdomain python3.9[115125]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:15 np0005541914.localdomain sudo[115123]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:15 np0005541914.localdomain sudo[115215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bfyrihvithrucachxgdxstalzghbtddz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666795.5435092-1016-48500470236118/AnsiballZ_file.py
Dec 02 09:13:15 np0005541914.localdomain sudo[115215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:15 np0005541914.localdomain python3.9[115217]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:15 np0005541914.localdomain sudo[115215]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49685 DF PROTO=TCP SPT=35108 DPT=9102 SEQ=696421513 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533B6AE0000000001030307) 
Dec 02 09:13:16 np0005541914.localdomain sudo[115307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pfohanwlteufdhwsmbxzptsrzoxihpjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666796.4460797-1463-238674261766637/AnsiballZ_command.py
Dec 02 09:13:16 np0005541914.localdomain sudo[115307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:17 np0005541914.localdomain python3.9[115309]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:17 np0005541914.localdomain sudo[115307]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:17 np0005541914.localdomain python3.9[115401]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 02 09:13:18 np0005541914.localdomain sudo[115491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vwaqxlxhyytxccnuumglfbeufdoxwbon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666798.1831007-1517-29429966268261/AnsiballZ_systemd_service.py
Dec 02 09:13:18 np0005541914.localdomain sudo[115491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:18 np0005541914.localdomain python3.9[115493]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:13:18 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:13:18 np0005541914.localdomain systemd-rc-local-generator[115515]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:13:18 np0005541914.localdomain systemd-sysv-generator[115521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:13:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:13:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49687 DF PROTO=TCP SPT=35108 DPT=9102 SEQ=696421513 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533C2A20000000001030307) 
Dec 02 09:13:19 np0005541914.localdomain sudo[115491]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:19 np0005541914.localdomain sudo[115619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uvfeqdutlbzobsqfkfskkuzocaowutqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666799.248554-1541-95491679059018/AnsiballZ_command.py
Dec 02 09:13:19 np0005541914.localdomain sudo[115619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:19 np0005541914.localdomain python3.9[115621]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:19 np0005541914.localdomain sudo[115619]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:20 np0005541914.localdomain sudo[115712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bsnzttcxpdmivsnfjsxtfdzuewsqeipn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666799.8561227-1541-39443125052414/AnsiballZ_command.py
Dec 02 09:13:20 np0005541914.localdomain sudo[115712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:20 np0005541914.localdomain python3.9[115714]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:21 np0005541914.localdomain sudo[115712]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:21 np0005541914.localdomain sudo[115805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xczkjhwywjvtltgwkzlnkjcgwdtnmrjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666801.406515-1541-276991497810286/AnsiballZ_command.py
Dec 02 09:13:21 np0005541914.localdomain sudo[115805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35 DF PROTO=TCP SPT=40840 DPT=9101 SEQ=1232125286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533CD220000000001030307) 
Dec 02 09:13:21 np0005541914.localdomain python3.9[115807]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:21 np0005541914.localdomain sudo[115805]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:22 np0005541914.localdomain sudo[115898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmisgvqycjtryojwbarzbkfbydxeesbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666802.0004265-1541-71069862346467/AnsiballZ_command.py
Dec 02 09:13:22 np0005541914.localdomain sudo[115898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:22 np0005541914.localdomain python3.9[115900]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:22 np0005541914.localdomain sudo[115898]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:22 np0005541914.localdomain sudo[115991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vtmwhdafzybmhzjzdsybhfmqzfwtslmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666802.5740325-1541-198237048185338/AnsiballZ_command.py
Dec 02 09:13:22 np0005541914.localdomain sudo[115991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:23 np0005541914.localdomain python3.9[115993]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:23 np0005541914.localdomain sudo[115991]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:23 np0005541914.localdomain sudo[116084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yckcntnxmqrpeoogfverzdrewxzxjgnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666803.1550603-1541-113617210555713/AnsiballZ_command.py
Dec 02 09:13:23 np0005541914.localdomain sudo[116084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:23 np0005541914.localdomain python3.9[116086]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:23 np0005541914.localdomain sudo[116084]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:23 np0005541914.localdomain sudo[116177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxysjgggeqxxbhiektzglmszfeqzmuyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666803.720632-1541-220273840920788/AnsiballZ_command.py
Dec 02 09:13:23 np0005541914.localdomain sudo[116177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:24 np0005541914.localdomain python3.9[116179]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:24 np0005541914.localdomain sudo[116177]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:24 np0005541914.localdomain sudo[116270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kwehevksvjmdflzfxrdgmjcgkomjyfjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666804.2809856-1541-232082345038981/AnsiballZ_command.py
Dec 02 09:13:24 np0005541914.localdomain sudo[116270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:24 np0005541914.localdomain python3.9[116272]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:24 np0005541914.localdomain sudo[116270]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:25 np0005541914.localdomain sudo[116363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gyoxjzfbczwdjycuiffwmlafkkmglqnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666804.8192365-1541-161676296685182/AnsiballZ_command.py
Dec 02 09:13:25 np0005541914.localdomain sudo[116363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:25 np0005541914.localdomain python3.9[116365]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:25 np0005541914.localdomain sudo[116363]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:25 np0005541914.localdomain sudo[116456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djzuvkmwziokimjvkvwrhtdeueafvuew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666805.3806963-1541-57934277365594/AnsiballZ_command.py
Dec 02 09:13:25 np0005541914.localdomain sudo[116456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:25 np0005541914.localdomain python3.9[116458]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:25 np0005541914.localdomain sudo[116456]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:26 np0005541914.localdomain sudo[116549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thlylqqsnkluztdtrqjwzrphnshuklmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666805.9440303-1541-257917539411327/AnsiballZ_command.py
Dec 02 09:13:26 np0005541914.localdomain sudo[116549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:26 np0005541914.localdomain python3.9[116551]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:26 np0005541914.localdomain sudo[116549]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:26 np0005541914.localdomain sudo[116642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bnviixzjjntuivacwqqhrahosmnpidyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666806.524064-1541-147111167532592/AnsiballZ_command.py
Dec 02 09:13:26 np0005541914.localdomain sudo[116642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:26 np0005541914.localdomain python3.9[116644]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:27 np0005541914.localdomain sudo[116642]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65491 DF PROTO=TCP SPT=36098 DPT=9101 SEQ=515077669 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533E2220000000001030307) 
Dec 02 09:13:27 np0005541914.localdomain sudo[116735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbgnglcfrpfesrdysayelvhqkcrbpdtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666807.1040683-1541-182060165406160/AnsiballZ_command.py
Dec 02 09:13:27 np0005541914.localdomain sudo[116735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:27 np0005541914.localdomain python3.9[116737]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:27 np0005541914.localdomain sudo[116735]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:28 np0005541914.localdomain sudo[116829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zkxuycsdxmzowfcdtdqcnphswtzpadod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666808.0213418-1541-131164473131586/AnsiballZ_command.py
Dec 02 09:13:28 np0005541914.localdomain sudo[116829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:28 np0005541914.localdomain python3.9[116831]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:28 np0005541914.localdomain sudo[116829]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:28 np0005541914.localdomain sudo[116873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:13:28 np0005541914.localdomain sudo[116873]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:13:28 np0005541914.localdomain sudo[116873]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:28 np0005541914.localdomain sudo[116907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:13:28 np0005541914.localdomain sudo[116907]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:13:28 np0005541914.localdomain sudo[116952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bfpeamnliiwuqwrwesmbjgpczotgwasx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666808.4916496-1541-271618579453902/AnsiballZ_command.py
Dec 02 09:13:28 np0005541914.localdomain sudo[116952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:28 np0005541914.localdomain python3.9[116954]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:28 np0005541914.localdomain sudo[116952]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:29 np0005541914.localdomain sudo[116907]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:29 np0005541914.localdomain sudo[117078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-noitrsqfclsjkrnybkmdrnfmcuonlgoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666809.026977-1541-207675552275857/AnsiballZ_command.py
Dec 02 09:13:29 np0005541914.localdomain sudo[117078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:29 np0005541914.localdomain python3.9[117080]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:29 np0005541914.localdomain sudo[117078]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:29 np0005541914.localdomain sudo[117171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ibskoykxmjhxfxkgafxbxvtttssvovsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666809.613656-1541-9839365185714/AnsiballZ_command.py
Dec 02 09:13:29 np0005541914.localdomain sudo[117171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:30 np0005541914.localdomain python3.9[117173]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:30 np0005541914.localdomain sudo[117171]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:30 np0005541914.localdomain sudo[117174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:13:30 np0005541914.localdomain sudo[117174]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:13:30 np0005541914.localdomain sudo[117174]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:30 np0005541914.localdomain sudo[117279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-smwnaykuysfgaoayaviiuqcdfntckbcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666810.1737936-1541-258507786094686/AnsiballZ_command.py
Dec 02 09:13:30 np0005541914.localdomain sudo[117279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:30 np0005541914.localdomain python3.9[117281]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:30 np0005541914.localdomain sudo[117279]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:30 np0005541914.localdomain sudo[117372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-axudcwtonxmmbjjgdfvmtbinlakxbnhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666810.72442-1541-177975716035121/AnsiballZ_command.py
Dec 02 09:13:30 np0005541914.localdomain sudo[117372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:31 np0005541914.localdomain python3.9[117374]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:31 np0005541914.localdomain sudo[117372]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49689 DF PROTO=TCP SPT=35108 DPT=9102 SEQ=696421513 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533F3230000000001030307) 
Dec 02 09:13:31 np0005541914.localdomain sudo[117465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-toptcgftqdjqpvfxagtwadqdqltwefwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666811.3420162-1541-188810885717852/AnsiballZ_command.py
Dec 02 09:13:31 np0005541914.localdomain sudo[117465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:31 np0005541914.localdomain python3.9[117467]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:31 np0005541914.localdomain sudo[117465]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:32 np0005541914.localdomain sudo[117558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbiauslhoqriuagyoycycsgrdwdeieej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666811.9436724-1541-3870722569822/AnsiballZ_command.py
Dec 02 09:13:32 np0005541914.localdomain sudo[117558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:32 np0005541914.localdomain python3.9[117560]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:32 np0005541914.localdomain sudo[117558]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:32 np0005541914.localdomain sshd[111362]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:13:32 np0005541914.localdomain systemd[1]: session-37.scope: Deactivated successfully.
Dec 02 09:13:32 np0005541914.localdomain systemd[1]: session-37.scope: Consumed 29.633s CPU time.
Dec 02 09:13:32 np0005541914.localdomain systemd-logind[760]: Session 37 logged out. Waiting for processes to exit.
Dec 02 09:13:32 np0005541914.localdomain systemd-logind[760]: Removed session 37.
Dec 02 09:13:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57879 DF PROTO=TCP SPT=41738 DPT=9882 SEQ=4154017471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533FB800000000001030307) 
Dec 02 09:13:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57880 DF PROTO=TCP SPT=41738 DPT=9882 SEQ=4154017471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD533FFA20000000001030307) 
Dec 02 09:13:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57881 DF PROTO=TCP SPT=41738 DPT=9882 SEQ=4154017471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53407A20000000001030307) 
Dec 02 09:13:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12817 DF PROTO=TCP SPT=50014 DPT=9100 SEQ=2942203590 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53415230000000001030307) 
Dec 02 09:13:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32033 DF PROTO=TCP SPT=49552 DPT=9105 SEQ=1345071773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53420A20000000001030307) 
Dec 02 09:13:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26541 DF PROTO=TCP SPT=56402 DPT=9102 SEQ=1972207252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5342BDE0000000001030307) 
Dec 02 09:13:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57883 DF PROTO=TCP SPT=41738 DPT=9882 SEQ=4154017471 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53437220000000001030307) 
Dec 02 09:13:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65493 DF PROTO=TCP SPT=36098 DPT=9101 SEQ=515077669 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53443230000000001030307) 
Dec 02 09:13:52 np0005541914.localdomain sshd[117576]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:13:52 np0005541914.localdomain sshd[117576]: Accepted publickey for zuul from 192.168.122.30 port 42392 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:13:52 np0005541914.localdomain systemd-logind[760]: New session 38 of user zuul.
Dec 02 09:13:52 np0005541914.localdomain systemd[1]: Started Session 38 of User zuul.
Dec 02 09:13:52 np0005541914.localdomain sshd[117576]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:13:52 np0005541914.localdomain python3.9[117669]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 02 09:13:54 np0005541914.localdomain python3.9[117773]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:13:54 np0005541914.localdomain sudo[117863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-abnmwdkbjhgwayuegtyhiattatwxazcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666834.4119492-96-85871970875393/AnsiballZ_command.py
Dec 02 09:13:54 np0005541914.localdomain sudo[117863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:54 np0005541914.localdomain python3.9[117865]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:13:54 np0005541914.localdomain sudo[117863]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:55 np0005541914.localdomain sudo[117956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zvsbiictnpgpmjjexfaelvoiznmctyhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666835.357664-131-66532952967765/AnsiballZ_stat.py
Dec 02 09:13:55 np0005541914.localdomain sudo[117956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:55 np0005541914.localdomain python3.9[117958]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:13:55 np0005541914.localdomain sudo[117956]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:56 np0005541914.localdomain sudo[118048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vuehhijxmezzlxofhztaobxouzwhswpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666836.137341-156-14445381018592/AnsiballZ_file.py
Dec 02 09:13:56 np0005541914.localdomain sudo[118048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:56 np0005541914.localdomain python3.9[118050]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:56 np0005541914.localdomain sudo[118048]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5137 DF PROTO=TCP SPT=34382 DPT=9101 SEQ=272883982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53457630000000001030307) 
Dec 02 09:13:57 np0005541914.localdomain sudo[118140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exrslecyrwtgnsiembkgldbhdlwmscnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666836.910215-180-74293053735131/AnsiballZ_stat.py
Dec 02 09:13:57 np0005541914.localdomain sudo[118140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:57 np0005541914.localdomain python3.9[118142]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:13:57 np0005541914.localdomain sudo[118140]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:57 np0005541914.localdomain sudo[118213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jbkqmdivvbialhpvtbekfjiywnfgehps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666836.910215-180-74293053735131/AnsiballZ_copy.py
Dec 02 09:13:57 np0005541914.localdomain sudo[118213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:58 np0005541914.localdomain python3.9[118215]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764666836.910215-180-74293053735131/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:13:58 np0005541914.localdomain sudo[118213]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:58 np0005541914.localdomain sudo[118305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qvxwdwbeimzwsdrgchfyihrafbsmgztc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666838.225437-224-72700068374445/AnsiballZ_setup.py
Dec 02 09:13:58 np0005541914.localdomain sudo[118305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:58 np0005541914.localdomain python3.9[118307]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:13:59 np0005541914.localdomain sudo[118305]: pam_unix(sudo:session): session closed for user root
Dec 02 09:13:59 np0005541914.localdomain sudo[118401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zowzpszbkaaqtnbpgbjbnfgffmisxvgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666839.2058582-249-6833148419443/AnsiballZ_file.py
Dec 02 09:13:59 np0005541914.localdomain sudo[118401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:13:59 np0005541914.localdomain python3.9[118403]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:13:59 np0005541914.localdomain sudo[118401]: pam_unix(sudo:session): session closed for user root
Dec 02 09:14:00 np0005541914.localdomain sudo[118493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gnhhgzemvhyxnysqcarvtywgezpsuibh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666839.897625-275-157607127072174/AnsiballZ_file.py
Dec 02 09:14:00 np0005541914.localdomain sudo[118493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:14:00 np0005541914.localdomain python3.9[118495]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:14:00 np0005541914.localdomain sudo[118493]: pam_unix(sudo:session): session closed for user root
Dec 02 09:14:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26545 DF PROTO=TCP SPT=56402 DPT=9102 SEQ=1972207252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53467220000000001030307) 
Dec 02 09:14:01 np0005541914.localdomain python3.9[118585]: ansible-ansible.builtin.service_facts Invoked
Dec 02 09:14:01 np0005541914.localdomain network[118602]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 09:14:01 np0005541914.localdomain network[118603]: 'network-scripts' will be removed from distribution in near future.
Dec 02 09:14:01 np0005541914.localdomain network[118604]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 09:14:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59198 DF PROTO=TCP SPT=44674 DPT=9882 SEQ=3678166113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53470B10000000001030307) 
Dec 02 09:14:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59199 DF PROTO=TCP SPT=44674 DPT=9882 SEQ=3678166113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53474A20000000001030307) 
Dec 02 09:14:04 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:14:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59200 DF PROTO=TCP SPT=44674 DPT=9882 SEQ=3678166113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5347CA20000000001030307) 
Dec 02 09:14:08 np0005541914.localdomain python3.9[118802]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:14:08 np0005541914.localdomain python3.9[118892]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:14:09 np0005541914.localdomain sudo[118986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yokgrykrnrztqncolqbkanatycfehtws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666849.3498583-378-49979738139958/AnsiballZ_command.py
Dec 02 09:14:09 np0005541914.localdomain sudo[118986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:14:09 np0005541914.localdomain python3.9[118988]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were CentOS 9 Stream
                                                            set -euxo pipefail
                                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                                            python3 -m venv ./venv
                                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main
                                                            # This is required when FIPS is enabled, until trunk.rdoproject.org
                                                            # is no longer served from a centos7 host; tracked by
                                                            # https://issues.redhat.com/browse/RHOSZUUL-1517
                                                            dnf -y install crypto-policies
                                                            update-crypto-policies --set FIPS:NO-ENFORCE-EMS
                                                            ./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream
                                                            
                                                            # Exclude ceph-common-18.2.7 as it pulls in a newer openssl that is not
                                                            # compatible with the rhel 9.2 openssh
                                                            dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save
                                                            # FIXME: perform the dnf upgrade for other packages in EDPM ansible;
                                                            # here we are only ensuring that decontainerized libvirt can start
                                                            dnf -y upgrade openstack-selinux
                                                            rm -f /run/virtlogd.pid
                                                            
                                                            rm -rf repo-setup-main
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
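The script above switches the crypto policy and pins the storage repo exclude before installing packages. A small verification sketch (not part of the recorded run; the repo id centos9-storage comes from the script itself):

    update-crypto-policies --show                                 # expected to print FIPS:NO-ENFORCE-EMS
    dnf config-manager --dump centos9-storage | grep -i exclude   # expected to list ceph-common-18.2.7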
Dec 02 09:14:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28643 DF PROTO=TCP SPT=51150 DPT=9100 SEQ=3538816596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5348B220000000001030307) 
Dec 02 09:14:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19967 DF PROTO=TCP SPT=35512 DPT=9105 SEQ=3679674891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53495E20000000001030307) 
Dec 02 09:14:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51621 DF PROTO=TCP SPT=37264 DPT=9102 SEQ=2236483668 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534A10E0000000001030307) 
Dec 02 09:14:18 np0005541914.localdomain systemd[1]: Stopping OpenSSH server daemon...
Dec 02 09:14:18 np0005541914.localdomain sshd[45417]: Received signal 15; terminating.
Dec 02 09:14:18 np0005541914.localdomain systemd[1]: sshd.service: Deactivated successfully.
Dec 02 09:14:18 np0005541914.localdomain systemd[1]: Stopped OpenSSH server daemon.
Dec 02 09:14:18 np0005541914.localdomain systemd[1]: sshd.service: Consumed 3.693s CPU time.
Dec 02 09:14:18 np0005541914.localdomain systemd[1]: Stopped target sshd-keygen.target.
Dec 02 09:14:18 np0005541914.localdomain systemd[1]: Stopping sshd-keygen.target...
Dec 02 09:14:18 np0005541914.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 09:14:18 np0005541914.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 09:14:18 np0005541914.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 09:14:18 np0005541914.localdomain systemd[1]: Reached target sshd-keygen.target.
Dec 02 09:14:19 np0005541914.localdomain systemd[1]: Starting OpenSSH server daemon...
Dec 02 09:14:19 np0005541914.localdomain sshd[119031]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:14:19 np0005541914.localdomain sshd[119031]: Server listening on 0.0.0.0 port 22.
Dec 02 09:14:19 np0005541914.localdomain sshd[119031]: Server listening on :: port 22.
Dec 02 09:14:19 np0005541914.localdomain systemd[1]: Started OpenSSH server daemon.
Dec 02 09:14:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59202 DF PROTO=TCP SPT=44674 DPT=9882 SEQ=3678166113 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534AD220000000001030307) 
Dec 02 09:14:19 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 09:14:19 np0005541914.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 02 09:14:19 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 09:14:19 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 02 09:14:19 np0005541914.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 02 09:14:19 np0005541914.localdomain systemd[1]: run-r80ad94e07750407e8a9c5b852d2bd790.service: Deactivated successfully.
Dec 02 09:14:19 np0005541914.localdomain systemd[1]: run-r98bd105f013d45dca59e584de63389eb.service: Deactivated successfully.
Dec 02 09:14:20 np0005541914.localdomain systemd[1]: Stopping OpenSSH server daemon...
Dec 02 09:14:20 np0005541914.localdomain sshd[119031]: Received signal 15; terminating.
Dec 02 09:14:20 np0005541914.localdomain systemd[1]: sshd.service: Deactivated successfully.
Dec 02 09:14:20 np0005541914.localdomain systemd[1]: Stopped OpenSSH server daemon.
Dec 02 09:14:20 np0005541914.localdomain systemd[1]: Stopped target sshd-keygen.target.
Dec 02 09:14:20 np0005541914.localdomain systemd[1]: Stopping sshd-keygen.target...
Dec 02 09:14:20 np0005541914.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 09:14:20 np0005541914.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 09:14:20 np0005541914.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 09:14:20 np0005541914.localdomain systemd[1]: Reached target sshd-keygen.target.
Dec 02 09:14:20 np0005541914.localdomain systemd[1]: Starting OpenSSH server daemon...
Dec 02 09:14:20 np0005541914.localdomain sshd[119204]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:14:20 np0005541914.localdomain sshd[119204]: Server listening on 0.0.0.0 port 22.
Dec 02 09:14:20 np0005541914.localdomain sshd[119204]: Server listening on :: port 22.
Dec 02 09:14:20 np0005541914.localdomain systemd[1]: Started OpenSSH server daemon.
Dec 02 09:14:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5139 DF PROTO=TCP SPT=34382 DPT=9101 SEQ=272883982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534B7220000000001030307) 
Dec 02 09:14:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21660 DF PROTO=TCP SPT=43870 DPT=9101 SEQ=3299707963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534CCA20000000001030307) 
Dec 02 09:14:30 np0005541914.localdomain sudo[119270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:14:30 np0005541914.localdomain sudo[119270]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:14:30 np0005541914.localdomain sudo[119270]: pam_unix(sudo:session): session closed for user root
Dec 02 09:14:30 np0005541914.localdomain sudo[119285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:14:30 np0005541914.localdomain sudo[119285]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:14:30 np0005541914.localdomain sudo[119285]: pam_unix(sudo:session): session closed for user root
Dec 02 09:14:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51625 DF PROTO=TCP SPT=37264 DPT=9102 SEQ=2236483668 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534DD220000000001030307) 
Dec 02 09:14:31 np0005541914.localdomain sudo[119363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:14:31 np0005541914.localdomain sudo[119363]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:14:31 np0005541914.localdomain sudo[119363]: pam_unix(sudo:session): session closed for user root
Dec 02 09:14:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57603 DF PROTO=TCP SPT=33302 DPT=9882 SEQ=2211759833 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534E5E00000000001030307) 
Dec 02 09:14:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57604 DF PROTO=TCP SPT=33302 DPT=9882 SEQ=2211759833 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534E9E20000000001030307) 
Dec 02 09:14:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57605 DF PROTO=TCP SPT=33302 DPT=9882 SEQ=2211759833 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534F1E20000000001030307) 
Dec 02 09:14:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38222 DF PROTO=TCP SPT=40570 DPT=9100 SEQ=1002037188 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD534FF230000000001030307) 
Dec 02 09:14:42 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28644 DF PROTO=TCP SPT=51150 DPT=9100 SEQ=3538816596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53509220000000001030307) 
Dec 02 09:14:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61241 DF PROTO=TCP SPT=39538 DPT=9102 SEQ=1073381646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535163F0000000001030307) 
Dec 02 09:14:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57607 DF PROTO=TCP SPT=33302 DPT=9882 SEQ=2211759833 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53521220000000001030307) 
Dec 02 09:14:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21662 DF PROTO=TCP SPT=43870 DPT=9101 SEQ=3299707963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5352D220000000001030307) 
Dec 02 09:14:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33201 DF PROTO=TCP SPT=51146 DPT=9101 SEQ=761795435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53541E30000000001030307) 
Dec 02 09:15:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61245 DF PROTO=TCP SPT=39538 DPT=9102 SEQ=1073381646 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53553230000000001030307) 
Dec 02 09:15:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10311 DF PROTO=TCP SPT=60256 DPT=9882 SEQ=983512334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5355B110000000001030307) 
Dec 02 09:15:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10312 DF PROTO=TCP SPT=60256 DPT=9882 SEQ=983512334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5355F220000000001030307) 
Dec 02 09:15:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10313 DF PROTO=TCP SPT=60256 DPT=9882 SEQ=983512334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53567220000000001030307) 
Dec 02 09:15:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27259 DF PROTO=TCP SPT=51864 DPT=9100 SEQ=2946643447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53575220000000001030307) 
Dec 02 09:15:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12982 DF PROTO=TCP SPT=59050 DPT=9105 SEQ=1692665145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53580220000000001030307) 
Dec 02 09:15:15 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19972 DF PROTO=TCP SPT=35512 DPT=9105 SEQ=3679674891 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5358B220000000001030307) 
Dec 02 09:15:16 np0005541914.localdomain sshd[119613]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:15:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10315 DF PROTO=TCP SPT=60256 DPT=9882 SEQ=983512334 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53597220000000001030307) 
Dec 02 09:15:19 np0005541914.localdomain sshd[119615]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:15:19 np0005541914.localdomain sshd[119613]: Invalid user sol from 80.94.92.182 port 60520
Dec 02 09:15:19 np0005541914.localdomain sshd[119613]: Connection closed by invalid user sol 80.94.92.182 port 60520 [preauth]
Dec 02 09:15:22 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33203 DF PROTO=TCP SPT=51146 DPT=9101 SEQ=761795435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535A3230000000001030307) 
Dec 02 09:15:25 np0005541914.localdomain sshd[119649]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:15:25 np0005541914.localdomain sshd[119649]: Invalid user ubuntu from 45.148.10.240 port 34834
Dec 02 09:15:25 np0005541914.localdomain sshd[119649]: Connection closed by invalid user ubuntu 45.148.10.240 port 34834 [preauth]
Dec 02 09:15:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47195 DF PROTO=TCP SPT=57954 DPT=9101 SEQ=2147832935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535B6E20000000001030307) 
Dec 02 09:15:27 np0005541914.localdomain sshd[119615]: Connection closed by 138.68.131.233 port 49856 [preauth]
Dec 02 09:15:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12985 DF PROTO=TCP SPT=59050 DPT=9105 SEQ=1692665145 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535B9220000000001030307) 
Dec 02 09:15:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13493 DF PROTO=TCP SPT=34252 DPT=9102 SEQ=2253941747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535C7220000000001030307) 
Dec 02 09:15:31 np0005541914.localdomain kernel: SELinux:  Converting 2741 SID table entries...
Dec 02 09:15:31 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 09:15:31 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 09:15:31 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 09:15:31 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 09:15:31 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 09:15:31 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 09:15:31 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 09:15:31 np0005541914.localdomain sudo[119757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:15:31 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=17 res=1
Dec 02 09:15:31 np0005541914.localdomain sudo[119757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:15:31 np0005541914.localdomain sudo[119757]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:31 np0005541914.localdomain sudo[119772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:15:31 np0005541914.localdomain sudo[119772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:15:32 np0005541914.localdomain sudo[119772]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:32 np0005541914.localdomain sudo[118986]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:34 np0005541914.localdomain sudo[119908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-afseinxzolbisbawxivdtclszwepcmxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666933.7766733-404-95553553129055/AnsiballZ_file.py
Dec 02 09:15:34 np0005541914.localdomain sudo[119908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:34 np0005541914.localdomain python3.9[119910]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:15:34 np0005541914.localdomain sudo[119908]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:34 np0005541914.localdomain sudo[120000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tdznkqtpycvwqxfcdsbfyxrbsvltfdqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666934.4324012-428-122359247689998/AnsiballZ_stat.py
Dec 02 09:15:34 np0005541914.localdomain sudo[120000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64936 DF PROTO=TCP SPT=40650 DPT=9882 SEQ=425084901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535D4620000000001030307) 
Dec 02 09:15:34 np0005541914.localdomain python3.9[120002]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:15:34 np0005541914.localdomain sudo[120000]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:35 np0005541914.localdomain sudo[120073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwwmulqswczyfgrbyeexyxsltjlyupgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666934.4324012-428-122359247689998/AnsiballZ_copy.py
Dec 02 09:15:35 np0005541914.localdomain sudo[120073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:35 np0005541914.localdomain python3.9[120075]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764666934.4324012-428-122359247689998/.source.fact _original_basename=.ruw2xycs follow=False checksum=03aee63dcf9b49b0ac4473b2f1a1b5d3783aa639 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:15:35 np0005541914.localdomain sudo[120073]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:36 np0005541914.localdomain python3.9[120165]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:15:36 np0005541914.localdomain sudo[120170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:15:36 np0005541914.localdomain sudo[120170]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:15:36 np0005541914.localdomain sudo[120170]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64937 DF PROTO=TCP SPT=40650 DPT=9882 SEQ=425084901 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535DC620000000001030307) 
Dec 02 09:15:36 np0005541914.localdomain sudo[120276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-srdikctuuzstjzqtxnzoxgelpwietpmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666936.7260752-504-191308375784903/AnsiballZ_setup.py
Dec 02 09:15:36 np0005541914.localdomain sudo[120276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:37 np0005541914.localdomain python3.9[120278]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:15:37 np0005541914.localdomain sudo[120276]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:37 np0005541914.localdomain sudo[120330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ucqptpkwfexiroalodbiihfjyqvyafwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666936.7260752-504-191308375784903/AnsiballZ_dnf.py
Dec 02 09:15:37 np0005541914.localdomain sudo[120330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:38 np0005541914.localdomain python3.9[120332]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
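The ansible-ansible.legacy.dnf invocation above installs the base EDPM package set; the equivalent command-line step would be roughly:

    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux \
        python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned \
        systemd-container crypto-policies-scripts grubby sos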
Dec 02 09:15:39 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47519 DF PROTO=TCP SPT=44026 DPT=9100 SEQ=2383907933 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535E9230000000001030307) 
Dec 02 09:15:41 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:15:41 np0005541914.localdomain systemd-sysv-generator[120370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:15:41 np0005541914.localdomain systemd-rc-local-generator[120366]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:15:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:15:41 np0005541914.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 02 09:15:42 np0005541914.localdomain sudo[120330]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3081 DF PROTO=TCP SPT=48796 DPT=9105 SEQ=903693248 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD535F5630000000001030307) 
Dec 02 09:15:44 np0005541914.localdomain sudo[120469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fqvtihaetqvpgvayedeckusuyyfcmqao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666944.0469046-540-195773760727094/AnsiballZ_command.py
Dec 02 09:15:44 np0005541914.localdomain sudo[120469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:44 np0005541914.localdomain python3.9[120471]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:15:45 np0005541914.localdomain sudo[120469]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:45 np0005541914.localdomain sudo[120708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cyeftygqbjrvpaymhmkgprafoqhxjbwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666945.3706133-564-277501297478846/AnsiballZ_selinux.py
Dec 02 09:15:45 np0005541914.localdomain sudo[120708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51851 DF PROTO=TCP SPT=47874 DPT=9102 SEQ=2931664264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536009E0000000001030307) 
Dec 02 09:15:46 np0005541914.localdomain python3.9[120710]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 02 09:15:46 np0005541914.localdomain sudo[120708]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:46 np0005541914.localdomain sudo[120800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bogqanakdtxsuxbwedzzplerihtwfpui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666946.650694-597-109983647905359/AnsiballZ_command.py
Dec 02 09:15:46 np0005541914.localdomain sudo[120800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:47 np0005541914.localdomain python3.9[120802]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 02 09:15:47 np0005541914.localdomain sudo[120800]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:48 np0005541914.localdomain sudo[120893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tlgvdzcdwdhwjomecowggfnqsiyltlxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666947.954509-621-102007289982049/AnsiballZ_file.py
Dec 02 09:15:48 np0005541914.localdomain sudo[120893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:48 np0005541914.localdomain python3.9[120895]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:15:48 np0005541914.localdomain sudo[120893]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:48 np0005541914.localdomain sudo[120985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odazfscdvfoofbvhcpuxwhmbytcyopkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666948.5737019-644-37458058680017/AnsiballZ_mount.py
Dec 02 09:15:48 np0005541914.localdomain sudo[120985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51853 DF PROTO=TCP SPT=47874 DPT=9102 SEQ=2931664264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5360CA30000000001030307) 
Dec 02 09:15:49 np0005541914.localdomain python3.9[120987]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 02 09:15:49 np0005541914.localdomain sudo[120985]: pam_unix(sudo:session): session closed for user root
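The three tasks above (the dd command, the file task, and ansible.posix.mount with state=present) create a 1 GiB /swap file, restrict it to root, and record an fstab entry without mounting it. A rough shell equivalent, assuming mkswap/swapon happen in a later step not shown in this excerpt:

    dd if=/dev/zero of=/swap bs=1M count=1024     # creates=/swap makes the Ansible task idempotent
    chown root:root /swap
    chmod 0600 /swap
    grep -q '^/swap ' /etc/fstab || echo '/swap none swap sw 0 0' >> /etc/fstab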
Dec 02 09:15:50 np0005541914.localdomain sudo[121077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-shygtjrioiypqxxesroignohglzpmjvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666950.5791829-729-247988110550357/AnsiballZ_file.py
Dec 02 09:15:50 np0005541914.localdomain sudo[121077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:50 np0005541914.localdomain python3.9[121079]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:15:51 np0005541914.localdomain sudo[121077]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:51 np0005541914.localdomain sudo[121169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oxtpaarwbebqedojrskqjrqesfmmrxwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666951.224332-753-158469701170302/AnsiballZ_stat.py
Dec 02 09:15:51 np0005541914.localdomain sudo[121169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:51 np0005541914.localdomain python3.9[121171]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:15:51 np0005541914.localdomain sudo[121169]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47197 DF PROTO=TCP SPT=57954 DPT=9101 SEQ=2147832935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53617230000000001030307) 
Dec 02 09:15:52 np0005541914.localdomain sudo[121242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rdzplpglbzjqqxtmoaeeawuhmcufzubi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666951.224332-753-158469701170302/AnsiballZ_copy.py
Dec 02 09:15:52 np0005541914.localdomain sudo[121242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:52 np0005541914.localdomain python3.9[121244]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764666951.224332-753-158469701170302/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:15:52 np0005541914.localdomain sudo[121242]: pam_unix(sudo:session): session closed for user root
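The copy above drops tls-ca-bundle.pem into /etc/pki/ca-trust/source/anchors; the system trust store still has to be regenerated for the bundle to take effect. That step is not visible in these lines, but it would normally be:

    update-ca-trust extract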
Dec 02 09:15:53 np0005541914.localdomain sudo[121334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylkcernjqcdkvixthhandfqlsdfrwbwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666952.9707656-825-141493241404079/AnsiballZ_stat.py
Dec 02 09:15:53 np0005541914.localdomain sudo[121334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:53 np0005541914.localdomain python3.9[121336]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:15:53 np0005541914.localdomain sudo[121334]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:54 np0005541914.localdomain sudo[121428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hqjwdesbzgfsksnfujmhfawzbqhfyuln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666954.0121143-864-156257115087862/AnsiballZ_getent.py
Dec 02 09:15:54 np0005541914.localdomain sudo[121428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:54 np0005541914.localdomain python3.9[121430]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 02 09:15:54 np0005541914.localdomain sudo[121428]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:55 np0005541914.localdomain sudo[121521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvgxesmjxjiyjvskpuhhtaoejeuwcquy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666955.0466235-894-182525953258064/AnsiballZ_getent.py
Dec 02 09:15:55 np0005541914.localdomain sudo[121521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:55 np0005541914.localdomain python3.9[121523]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 02 09:15:55 np0005541914.localdomain sudo[121521]: pam_unix(sudo:session): session closed for user root
Dec 02 09:15:56 np0005541914.localdomain sudo[121614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-njtfvpkjrxvnwcauxkwlwcxdaphthlxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666955.6770139-918-48299357864790/AnsiballZ_group.py
Dec 02 09:15:56 np0005541914.localdomain sudo[121614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:56 np0005541914.localdomain python3.9[121616]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 02 09:15:56 np0005541914.localdomain groupmod[121617]: group changed in /etc/group (group hugetlbfs/985, new gid: 42477)
Dec 02 09:15:56 np0005541914.localdomain groupmod[121617]: group changed in /etc/passwd (group hugetlbfs/985, new gid: 42477)
Dec 02 09:15:56 np0005541914.localdomain sudo[121614]: pam_unix(sudo:session): session closed for user root
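The ansible.builtin.group task and the groupmod entries above move the hugetlbfs group from gid 985 to 42477. The direct equivalent is:

    groupmod -g 42477 hugetlbfs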
Dec 02 09:15:56 np0005541914.localdomain sudo[121712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scwseeblnqlpyjaquzgupkmxpjylgdms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666956.589324-944-22382500524268/AnsiballZ_file.py
Dec 02 09:15:56 np0005541914.localdomain sudo[121712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:57 np0005541914.localdomain python3.9[121714]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 02 09:15:57 np0005541914.localdomain sudo[121712]: pam_unix(sudo:session): session closed for user root
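A rough shell equivalent of the /var/lib/vhost_sockets directory task above (Ansible applies the SELinux user and type directly, much like chcon):

    mkdir -p /var/lib/vhost_sockets
    chown qemu:qemu /var/lib/vhost_sockets
    chmod 0755 /var/lib/vhost_sockets
    chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets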
Dec 02 09:15:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20497 DF PROTO=TCP SPT=39928 DPT=9101 SEQ=657770300 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5362C220000000001030307) 
Dec 02 09:15:58 np0005541914.localdomain sudo[121804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pzkzksipearioonaxmiilzsonvhtikrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666957.7388403-978-142918033088674/AnsiballZ_dnf.py
Dec 02 09:15:58 np0005541914.localdomain sudo[121804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:15:58 np0005541914.localdomain python3.9[121806]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:16:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51855 DF PROTO=TCP SPT=47874 DPT=9102 SEQ=2931664264 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5363D230000000001030307) 
Dec 02 09:16:01 np0005541914.localdomain sudo[121804]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23601 DF PROTO=TCP SPT=45792 DPT=9882 SEQ=1899204899 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53645710000000001030307) 
Dec 02 09:16:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23602 DF PROTO=TCP SPT=45792 DPT=9882 SEQ=1899204899 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53649630000000001030307) 
Dec 02 09:16:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23603 DF PROTO=TCP SPT=45792 DPT=9882 SEQ=1899204899 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53651620000000001030307) 
Dec 02 09:16:08 np0005541914.localdomain sudo[121898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qvatpafjsutlpthwhtooyrhqkhalskjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666967.80261-1003-19099415151461/AnsiballZ_file.py
Dec 02 09:16:08 np0005541914.localdomain sudo[121898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23080 DF PROTO=TCP SPT=56834 DPT=9100 SEQ=4047028326 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5365F220000000001030307) 
Dec 02 09:16:11 np0005541914.localdomain python3.9[121900]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:16:11 np0005541914.localdomain sudo[121898]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:12 np0005541914.localdomain sudo[121990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fbbfyfjaolkjqvyedxjcvetcamgjejft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666972.128122-1026-229181094482736/AnsiballZ_stat.py
Dec 02 09:16:12 np0005541914.localdomain sudo[121990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15657 DF PROTO=TCP SPT=47886 DPT=9105 SEQ=1501209150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5366AA30000000001030307) 
Dec 02 09:16:13 np0005541914.localdomain python3.9[121992]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:16:13 np0005541914.localdomain sudo[121990]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:14 np0005541914.localdomain sudo[122063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oduqizhipawlitjjejgrvuanmmngmcnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666972.128122-1026-229181094482736/AnsiballZ_copy.py
Dec 02 09:16:14 np0005541914.localdomain sudo[122063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:14 np0005541914.localdomain python3.9[122065]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764666972.128122-1026-229181094482736/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:16:14 np0005541914.localdomain sudo[122063]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:15 np0005541914.localdomain sudo[122155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aojbarpdesnbbtfklcfdelclpekdkvly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666974.5630817-1071-166189917539835/AnsiballZ_systemd.py
Dec 02 09:16:15 np0005541914.localdomain sudo[122155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:15 np0005541914.localdomain python3.9[122157]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:16:15 np0005541914.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 02 09:16:15 np0005541914.localdomain systemd[1]: Stopped Load Kernel Modules.
Dec 02 09:16:15 np0005541914.localdomain systemd[1]: Stopping Load Kernel Modules...
Dec 02 09:16:15 np0005541914.localdomain systemd[1]: Starting Load Kernel Modules...
Dec 02 09:16:15 np0005541914.localdomain systemd-modules-load[122161]: Module 'msr' is built in
Dec 02 09:16:15 np0005541914.localdomain systemd[1]: Finished Load Kernel Modules.
Dec 02 09:16:15 np0005541914.localdomain sudo[122155]: pam_unix(sudo:session): session closed for user root
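The sequence above writes /etc/modules-load.d/99-edpm.conf and restarts systemd-modules-load so the listed modules are loaded immediately. The file's actual contents are not shown in this log; an illustrative sketch of the format (br_netfilter is a placeholder, not taken from this host):

    cat > /etc/modules-load.d/99-edpm.conf <<'EOF'
    # one module name per line, loaded at boot by systemd-modules-load
    br_netfilter
    EOF
    systemctl restart systemd-modules-load.service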
Dec 02 09:16:15 np0005541914.localdomain sudo[122252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-soeecudxaexmydarqhaqlyvteuvkhhld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666975.7022333-1095-208242625907491/AnsiballZ_stat.py
Dec 02 09:16:15 np0005541914.localdomain sudo[122252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5675 DF PROTO=TCP SPT=49346 DPT=9102 SEQ=2874799217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53675CE0000000001030307) 
Dec 02 09:16:16 np0005541914.localdomain python3.9[122254]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:16:16 np0005541914.localdomain sudo[122252]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:16 np0005541914.localdomain sudo[122325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ornqphoiopznlxzpajkkcuvqhtrbmyzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666975.7022333-1095-208242625907491/AnsiballZ_copy.py
Dec 02 09:16:16 np0005541914.localdomain sudo[122325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:16 np0005541914.localdomain python3.9[122327]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764666975.7022333-1095-208242625907491/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:16:16 np0005541914.localdomain sudo[122325]: pam_unix(sudo:session): session closed for user root
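Likewise, the copied /etc/sysctl.d/99-edpm.conf follows the usual "key = value" sysctl.d format (its keys are not visible in this log). To apply it without a reboot:

    sysctl --system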
Dec 02 09:16:17 np0005541914.localdomain sudo[122417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kimyqgceraqshsshakovidorkorksvvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666977.181394-1149-144591488218139/AnsiballZ_dnf.py
Dec 02 09:16:17 np0005541914.localdomain sudo[122417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:17 np0005541914.localdomain python3.9[122419]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:16:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23605 DF PROTO=TCP SPT=45792 DPT=9882 SEQ=1899204899 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53681220000000001030307) 
Dec 02 09:16:20 np0005541914.localdomain sudo[122417]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:21 np0005541914.localdomain python3.9[122511]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:16:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20499 DF PROTO=TCP SPT=39928 DPT=9101 SEQ=657770300 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5368D220000000001030307) 
Dec 02 09:16:23 np0005541914.localdomain python3.9[122603]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 02 09:16:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51826 DF PROTO=TCP SPT=39712 DPT=9101 SEQ=3500661361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536A1620000000001030307) 
Dec 02 09:16:27 np0005541914.localdomain python3.9[122693]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:16:28 np0005541914.localdomain sudo[122783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pgvqjakgbwsvdkaogedmynoxgdnnospd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666988.0016024-1273-204032097885809/AnsiballZ_systemd.py
Dec 02 09:16:28 np0005541914.localdomain sudo[122783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:28 np0005541914.localdomain python3.9[122785]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:16:28 np0005541914.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 02 09:16:28 np0005541914.localdomain systemd[1]: tuned.service: Deactivated successfully.
Dec 02 09:16:28 np0005541914.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 02 09:16:28 np0005541914.localdomain systemd[1]: tuned.service: Consumed 1.830s CPU time, no IO.
Dec 02 09:16:28 np0005541914.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 02 09:16:29 np0005541914.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Dec 02 09:16:29 np0005541914.localdomain sudo[122783]: pam_unix(sudo:session): session closed for user root
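The systemd task above enables and restarts tuned; the manual equivalent, plus a quick check of the profile that the following stat/slurp tasks read from /etc/tuned/active_profile:

    systemctl enable tuned
    systemctl restart tuned
    tuned-adm active        # prints the currently active profile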
Dec 02 09:16:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5679 DF PROTO=TCP SPT=49346 DPT=9102 SEQ=2874799217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536B1220000000001030307) 
Dec 02 09:16:32 np0005541914.localdomain python3.9[122888]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 02 09:16:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2757 DF PROTO=TCP SPT=55756 DPT=9882 SEQ=3921470690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536BAA10000000001030307) 
Dec 02 09:16:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2758 DF PROTO=TCP SPT=55756 DPT=9882 SEQ=3921470690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536BEA20000000001030307) 
Dec 02 09:16:35 np0005541914.localdomain sudo[122978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tddrvldvdrzxwdxvagesayiafcpdgczh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666995.1880336-1442-247729703378981/AnsiballZ_systemd.py
Dec 02 09:16:35 np0005541914.localdomain sudo[122978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:35 np0005541914.localdomain python3.9[122980]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:16:36 np0005541914.localdomain sudo[122983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:16:36 np0005541914.localdomain sudo[122983]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:16:36 np0005541914.localdomain sudo[122983]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:36 np0005541914.localdomain sudo[122998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:16:36 np0005541914.localdomain sudo[122998]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:16:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2759 DF PROTO=TCP SPT=55756 DPT=9882 SEQ=3921470690 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536C6A20000000001030307) 
Dec 02 09:16:36 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:16:36 np0005541914.localdomain systemd-sysv-generator[123054]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:16:36 np0005541914.localdomain systemd-rc-local-generator[123050]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:16:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:16:37 np0005541914.localdomain sudo[122998]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:37 np0005541914.localdomain sudo[122978]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:37 np0005541914.localdomain sudo[123139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:16:37 np0005541914.localdomain sudo[123139]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:16:37 np0005541914.localdomain sudo[123139]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:37 np0005541914.localdomain sudo[123172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 02 09:16:37 np0005541914.localdomain sudo[123172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:16:37 np0005541914.localdomain sudo[123197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kdcuiqlyvtvrmctmonanwjimtwitxtqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666997.2630684-1442-39276848851857/AnsiballZ_systemd.py
Dec 02 09:16:37 np0005541914.localdomain sudo[123197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:37 np0005541914.localdomain python3.9[123201]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:16:37 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:16:37 np0005541914.localdomain sudo[123172]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:37 np0005541914.localdomain systemd-rc-local-generator[123248]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:16:37 np0005541914.localdomain systemd-sysv-generator[123251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:16:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:16:38 np0005541914.localdomain sudo[123197]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:38 np0005541914.localdomain sudo[123349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ysndoajkovjothtusshnmdhuisctmyco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666998.4820743-1491-227352864255081/AnsiballZ_command.py
Dec 02 09:16:38 np0005541914.localdomain sudo[123349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:38 np0005541914.localdomain python3.9[123351]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:16:38 np0005541914.localdomain sudo[123349]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:39 np0005541914.localdomain sudo[123442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-epykckjylimdrmylwofnslngsobqmiif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666999.178569-1515-123005928792898/AnsiballZ_command.py
Dec 02 09:16:39 np0005541914.localdomain sudo[123442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:39 np0005541914.localdomain python3.9[123444]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:16:39 np0005541914.localdomain kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k FS
Dec 02 09:16:39 np0005541914.localdomain sudo[123442]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:40 np0005541914.localdomain sudo[123535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-riulvtnidrssqilcigywzebxbnfhdubv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764666999.8788464-1539-234746525321377/AnsiballZ_command.py
Dec 02 09:16:40 np0005541914.localdomain sudo[123535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:40 np0005541914.localdomain python3.9[123537]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:16:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56357 DF PROTO=TCP SPT=37398 DPT=9100 SEQ=1149734740 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536D5230000000001030307) 
Dec 02 09:16:40 np0005541914.localdomain sudo[123541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:16:40 np0005541914.localdomain sudo[123541]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:16:40 np0005541914.localdomain sudo[123541]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:41 np0005541914.localdomain sudo[123535]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:42 np0005541914.localdomain sudo[123649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bsjyrrnykxlevfqtqqrhiwvirpdjtcmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667001.8626678-1563-89593819567365/AnsiballZ_command.py
Dec 02 09:16:42 np0005541914.localdomain sudo[123649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:42 np0005541914.localdomain python3.9[123651]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:16:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38798 DF PROTO=TCP SPT=51132 DPT=9105 SEQ=2584331989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536DFA20000000001030307) 
Dec 02 09:16:43 np0005541914.localdomain sudo[123649]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:43 np0005541914.localdomain sudo[123742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ydchltsmzbetnhqzmuftrmwtmgeyjozh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667003.648793-1587-216349434386998/AnsiballZ_systemd.py
Dec 02 09:16:43 np0005541914.localdomain sudo[123742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:44 np0005541914.localdomain python3.9[123744]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:16:44 np0005541914.localdomain systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 02 09:16:44 np0005541914.localdomain systemd[1]: Stopped Apply Kernel Variables.
Dec 02 09:16:44 np0005541914.localdomain systemd[1]: Stopping Apply Kernel Variables...
Dec 02 09:16:44 np0005541914.localdomain systemd[1]: Starting Apply Kernel Variables...
Dec 02 09:16:44 np0005541914.localdomain systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 02 09:16:44 np0005541914.localdomain systemd[1]: Finished Apply Kernel Variables.
Dec 02 09:16:44 np0005541914.localdomain sudo[123742]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:45 np0005541914.localdomain sshd[117576]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:16:45 np0005541914.localdomain systemd[1]: session-38.scope: Deactivated successfully.
Dec 02 09:16:45 np0005541914.localdomain systemd[1]: session-38.scope: Consumed 1min 57.032s CPU time.
Dec 02 09:16:45 np0005541914.localdomain systemd-logind[760]: Session 38 logged out. Waiting for processes to exit.
Dec 02 09:16:45 np0005541914.localdomain systemd-logind[760]: Removed session 38.
Dec 02 09:16:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65498 DF PROTO=TCP SPT=56068 DPT=9102 SEQ=3437420992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536EAFF0000000001030307) 
Dec 02 09:16:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65500 DF PROTO=TCP SPT=56068 DPT=9102 SEQ=3437420992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD536F7220000000001030307) 
Dec 02 09:16:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51828 DF PROTO=TCP SPT=39712 DPT=9101 SEQ=3500661361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53701220000000001030307) 
Dec 02 09:16:52 np0005541914.localdomain sshd[123765]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:16:52 np0005541914.localdomain sshd[123765]: Accepted publickey for zuul from 192.168.122.30 port 49670 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:16:52 np0005541914.localdomain systemd-logind[760]: New session 39 of user zuul.
Dec 02 09:16:52 np0005541914.localdomain systemd[1]: Started Session 39 of User zuul.
Dec 02 09:16:52 np0005541914.localdomain sshd[123765]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:16:53 np0005541914.localdomain python3.9[123858]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:16:55 np0005541914.localdomain python3.9[123952]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:16:56 np0005541914.localdomain sudo[124046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jjqymtcmyrozkavubcezjnkndrcpabno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667015.9775689-113-171323858333395/AnsiballZ_command.py
Dec 02 09:16:56 np0005541914.localdomain sudo[124046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:56 np0005541914.localdomain python3.9[124048]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:16:56 np0005541914.localdomain sudo[124046]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12405 DF PROTO=TCP SPT=36480 DPT=9101 SEQ=1553097679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53716A20000000001030307) 
Dec 02 09:16:57 np0005541914.localdomain python3.9[124139]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:16:58 np0005541914.localdomain sudo[124233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zrzkhfsgqrinsjplutvpbnnokvyyqkit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667018.0658255-172-173694524387296/AnsiballZ_setup.py
Dec 02 09:16:58 np0005541914.localdomain sudo[124233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:58 np0005541914.localdomain python3.9[124235]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:16:58 np0005541914.localdomain sudo[124233]: pam_unix(sudo:session): session closed for user root
Dec 02 09:16:59 np0005541914.localdomain sudo[124287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-icnwqininazqyofgddlyxdxhoeyojpil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667018.0658255-172-173694524387296/AnsiballZ_dnf.py
Dec 02 09:16:59 np0005541914.localdomain sudo[124287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:16:59 np0005541914.localdomain python3.9[124289]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:17:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65502 DF PROTO=TCP SPT=56068 DPT=9102 SEQ=3437420992 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53727220000000001030307) 
Dec 02 09:17:02 np0005541914.localdomain sudo[124287]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:03 np0005541914.localdomain sudo[124381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-glpolawkrapdwtlxojayxgvfeusjqrmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667023.3149314-208-208030621721545/AnsiballZ_setup.py
Dec 02 09:17:03 np0005541914.localdomain sudo[124381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16826 DF PROTO=TCP SPT=39868 DPT=9882 SEQ=1628927917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5372FD00000000001030307) 
Dec 02 09:17:03 np0005541914.localdomain python3.9[124383]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:17:04 np0005541914.localdomain sudo[124381]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16827 DF PROTO=TCP SPT=39868 DPT=9882 SEQ=1628927917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53733E20000000001030307) 
Dec 02 09:17:05 np0005541914.localdomain sudo[124528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pwjgtelituvinampqskzzniknlbcmukk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667025.0226407-241-22873659645912/AnsiballZ_file.py
Dec 02 09:17:05 np0005541914.localdomain sudo[124528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:05 np0005541914.localdomain python3.9[124530]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:17:05 np0005541914.localdomain sudo[124528]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:06 np0005541914.localdomain sudo[124620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-poxalnyatjvhnoirsbgvhcimihrqmkqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667025.8573194-266-262188266871609/AnsiballZ_command.py
Dec 02 09:17:06 np0005541914.localdomain sudo[124620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:06 np0005541914.localdomain python3.9[124622]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:17:06 np0005541914.localdomain sudo[124620]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16828 DF PROTO=TCP SPT=39868 DPT=9882 SEQ=1628927917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5373BE20000000001030307) 
Dec 02 09:17:07 np0005541914.localdomain sudo[124723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nftmlrhtufnauzncqzfdthcnebqzmbmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667026.9032476-290-49617791401811/AnsiballZ_stat.py
Dec 02 09:17:07 np0005541914.localdomain sudo[124723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:07 np0005541914.localdomain python3.9[124725]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:17:07 np0005541914.localdomain sudo[124723]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:07 np0005541914.localdomain sudo[124771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jdhfjrohkqmaqlexaawzeelhmeymujwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667026.9032476-290-49617791401811/AnsiballZ_file.py
Dec 02 09:17:07 np0005541914.localdomain sudo[124771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:07 np0005541914.localdomain python3.9[124773]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:17:07 np0005541914.localdomain sudo[124771]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:08 np0005541914.localdomain sudo[124863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-caioecfnsnsgqrynuqzonumnisnbufzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667028.164889-325-170069321373359/AnsiballZ_stat.py
Dec 02 09:17:08 np0005541914.localdomain sudo[124863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:08 np0005541914.localdomain python3.9[124865]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:17:08 np0005541914.localdomain sudo[124863]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:09 np0005541914.localdomain sudo[124936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bvnhhpdeaqkdqxiyrnmgpahsgdfggueu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667028.164889-325-170069321373359/AnsiballZ_copy.py
Dec 02 09:17:09 np0005541914.localdomain sudo[124936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:09 np0005541914.localdomain python3.9[124938]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667028.164889-325-170069321373359/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:17:09 np0005541914.localdomain sudo[124936]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:10 np0005541914.localdomain sudo[125028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yyqbscrayzlvtxjthayapftnycoyooxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667029.626772-374-70283167236073/AnsiballZ_ini_file.py
Dec 02 09:17:10 np0005541914.localdomain sudo[125028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59967 DF PROTO=TCP SPT=41984 DPT=9100 SEQ=1476989852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53749230000000001030307) 
Dec 02 09:17:10 np0005541914.localdomain python3.9[125030]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:17:10 np0005541914.localdomain sudo[125028]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:10 np0005541914.localdomain sudo[125120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tsevdmttnjosrhouostinnqdzshzwneg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667030.3587146-374-73812797030204/AnsiballZ_ini_file.py
Dec 02 09:17:10 np0005541914.localdomain sudo[125120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:10 np0005541914.localdomain python3.9[125122]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:17:10 np0005541914.localdomain sudo[125120]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:17:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 5400.1 total, 600.0 interval
                                                          Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 09:17:11 np0005541914.localdomain sudo[125212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mtuxgkbqeyugxwjvzxhzlfxreajnwues ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667030.9072971-374-44542707909459/AnsiballZ_ini_file.py
Dec 02 09:17:11 np0005541914.localdomain sudo[125212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:11 np0005541914.localdomain python3.9[125214]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:17:11 np0005541914.localdomain sudo[125212]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:11 np0005541914.localdomain sudo[125304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-foewfaciaqnixgveqpfbfaeoavklzmgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667031.6139536-374-31079376350178/AnsiballZ_ini_file.py
Dec 02 09:17:11 np0005541914.localdomain sudo[125304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:12 np0005541914.localdomain python3.9[125306]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:17:12 np0005541914.localdomain sudo[125304]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:12 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56358 DF PROTO=TCP SPT=37398 DPT=9100 SEQ=1149734740 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53753220000000001030307) 
Dec 02 09:17:13 np0005541914.localdomain python3.9[125396]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:17:13 np0005541914.localdomain sudo[125488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wwdorimqphqoiwdwanqlxpyovvuskhic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667033.4658442-493-201694683362728/AnsiballZ_dnf.py
Dec 02 09:17:13 np0005541914.localdomain sudo[125488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:13 np0005541914.localdomain python3.9[125490]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 02 09:17:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:17:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 5400.2 total, 600.0 interval
                                                          Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 09:17:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31048 DF PROTO=TCP SPT=60386 DPT=9102 SEQ=1629376715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53760320000000001030307) 
Dec 02 09:17:17 np0005541914.localdomain sudo[125488]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:17 np0005541914.localdomain sudo[125582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xvyvnuaefuqwxohucjkgamqfaklszinf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667037.3453422-517-240913701765370/AnsiballZ_dnf.py
Dec 02 09:17:17 np0005541914.localdomain sudo[125582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:17 np0005541914.localdomain python3.9[125584]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 02 09:17:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16830 DF PROTO=TCP SPT=39868 DPT=9882 SEQ=1628927917 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5376B220000000001030307) 
Dec 02 09:17:20 np0005541914.localdomain sudo[125582]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12407 DF PROTO=TCP SPT=36480 DPT=9101 SEQ=1553097679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53777220000000001030307) 
Dec 02 09:17:22 np0005541914.localdomain sudo[125676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xmgigzsybmskkgmcpfebdstijnuwgmxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667041.5272586-547-193379205152637/AnsiballZ_dnf.py
Dec 02 09:17:22 np0005541914.localdomain sudo[125676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:22 np0005541914.localdomain python3.9[125678]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 02 09:17:25 np0005541914.localdomain sudo[125676]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:26 np0005541914.localdomain sudo[125776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-whurwsekwyvplvcygglkswzywilrwrde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667045.8025055-574-224197321080500/AnsiballZ_dnf.py
Dec 02 09:17:26 np0005541914.localdomain sudo[125776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:26 np0005541914.localdomain python3.9[125778]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 02 09:17:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30372 DF PROTO=TCP SPT=52198 DPT=9101 SEQ=1518831331 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5378BA30000000001030307) 
Dec 02 09:17:29 np0005541914.localdomain sudo[125776]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:30 np0005541914.localdomain sudo[125870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vvxybkpwiwysjohqjbkjyxvyolzomubf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667050.00517-610-229303738901592/AnsiballZ_dnf.py
Dec 02 09:17:30 np0005541914.localdomain sudo[125870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:30 np0005541914.localdomain python3.9[125872]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 02 09:17:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31052 DF PROTO=TCP SPT=60386 DPT=9102 SEQ=1629376715 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5379D220000000001030307) 
Dec 02 09:17:33 np0005541914.localdomain sudo[125870]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54346 DF PROTO=TCP SPT=54104 DPT=9882 SEQ=1804301071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537A5000000000001030307) 
Dec 02 09:17:34 np0005541914.localdomain sudo[125964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ocblfdwyequolwpdktszzdzhayxeojmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667054.0239568-637-8265102616284/AnsiballZ_dnf.py
Dec 02 09:17:34 np0005541914.localdomain sudo[125964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:34 np0005541914.localdomain python3.9[125966]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 02 09:17:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54347 DF PROTO=TCP SPT=54104 DPT=9882 SEQ=1804301071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537A9220000000001030307) 
Dec 02 09:17:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54348 DF PROTO=TCP SPT=54104 DPT=9882 SEQ=1804301071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537B1220000000001030307) 
Dec 02 09:17:37 np0005541914.localdomain sudo[125964]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:37 np0005541914.localdomain sshd[125983]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:17:38 np0005541914.localdomain sshd[125983]: Invalid user ubuntu from 45.148.10.240 port 47218
Dec 02 09:17:38 np0005541914.localdomain sshd[125983]: Connection closed by invalid user ubuntu 45.148.10.240 port 47218 [preauth]
Dec 02 09:17:38 np0005541914.localdomain sudo[126060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-klgwcntskwrwkivwgoenxoukqxohqvni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667058.3150957-664-54616276373928/AnsiballZ_dnf.py
Dec 02 09:17:38 np0005541914.localdomain sudo[126060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:38 np0005541914.localdomain python3.9[126062]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 02 09:17:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23655 DF PROTO=TCP SPT=46322 DPT=9100 SEQ=702705207 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537BF220000000001030307) 
Dec 02 09:17:40 np0005541914.localdomain sudo[126065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:17:40 np0005541914.localdomain sudo[126065]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:17:40 np0005541914.localdomain sudo[126065]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:40 np0005541914.localdomain sudo[126080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:17:40 np0005541914.localdomain sudo[126080]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:17:41 np0005541914.localdomain sudo[126080]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:41 np0005541914.localdomain sudo[126128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:17:41 np0005541914.localdomain sudo[126128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:17:41 np0005541914.localdomain sudo[126128]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:41 np0005541914.localdomain sudo[126143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 -- inventory --format=json-pretty --filter-for-batch
Dec 02 09:17:41 np0005541914.localdomain sudo[126143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:17:42 np0005541914.localdomain podman[126202]: 
Dec 02 09:17:42 np0005541914.localdomain podman[126202]: 2025-12-02 09:17:42.527118089 +0000 UTC m=+0.078106974 container create 84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public)
Dec 02 09:17:42 np0005541914.localdomain systemd[1]: Started libpod-conmon-84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54.scope.
Dec 02 09:17:42 np0005541914.localdomain podman[126202]: 2025-12-02 09:17:42.483942005 +0000 UTC m=+0.034930940 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:17:42 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:17:42 np0005541914.localdomain podman[126202]: 2025-12-02 09:17:42.599744174 +0000 UTC m=+0.150733039 container init 84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., version=7, architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Dec 02 09:17:42 np0005541914.localdomain podman[126202]: 2025-12-02 09:17:42.611920277 +0000 UTC m=+0.162909132 container start 84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, io.buildah.version=1.41.4, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, RELEASE=main, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, vcs-type=git)
Dec 02 09:17:42 np0005541914.localdomain podman[126202]: 2025-12-02 09:17:42.61204096 +0000 UTC m=+0.163029815 container attach 84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, release=1763362218, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_BRANCH=main, RELEASE=main, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True)
Dec 02 09:17:42 np0005541914.localdomain wizardly_curie[126218]: 167 167
Dec 02 09:17:42 np0005541914.localdomain systemd[1]: libpod-84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54.scope: Deactivated successfully.
Dec 02 09:17:42 np0005541914.localdomain podman[126202]: 2025-12-02 09:17:42.615484226 +0000 UTC m=+0.166473101 container died 84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, distribution-scope=public, name=rhceph, version=7, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_CLEAN=True, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph)
Dec 02 09:17:42 np0005541914.localdomain podman[126223]: 2025-12-02 09:17:42.710077734 +0000 UTC m=+0.086530252 container remove 84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wizardly_curie, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.expose-services=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.41.4, name=rhceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7)
Dec 02 09:17:42 np0005541914.localdomain systemd[1]: libpod-conmon-84316233a7d54e45e1efcba5acc3baa522a5434554e38aeb13d4d42d2bc0ad54.scope: Deactivated successfully.
Dec 02 09:17:42 np0005541914.localdomain podman[126244]: 
Dec 02 09:17:42 np0005541914.localdomain podman[126244]: 2025-12-02 09:17:42.881747963 +0000 UTC m=+0.054905472 container create b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_swirles, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, distribution-scope=public, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.expose-services=)
Dec 02 09:17:42 np0005541914.localdomain systemd[1]: Started libpod-conmon-b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a.scope.
Dec 02 09:17:42 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:17:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21697b82c9ebe365447263db4453f93ccea58d8bfbc09a3e546e4839bd0993c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 09:17:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21697b82c9ebe365447263db4453f93ccea58d8bfbc09a3e546e4839bd0993c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 09:17:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21697b82c9ebe365447263db4453f93ccea58d8bfbc09a3e546e4839bd0993c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 09:17:42 np0005541914.localdomain podman[126244]: 2025-12-02 09:17:42.942268828 +0000 UTC m=+0.115426337 container init b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_swirles, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7)
Dec 02 09:17:42 np0005541914.localdomain podman[126244]: 2025-12-02 09:17:42.953074579 +0000 UTC m=+0.126232088 container start b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_swirles, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, release=1763362218, vendor=Red Hat, Inc., distribution-scope=public, version=7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container)
Dec 02 09:17:42 np0005541914.localdomain podman[126244]: 2025-12-02 09:17:42.953503262 +0000 UTC m=+0.126660811 container attach b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_swirles, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, build-date=2025-11-26T19:44:28Z, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, ceph=True, release=1763362218, version=7, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, RELEASE=main)
Dec 02 09:17:42 np0005541914.localdomain podman[126244]: 2025-12-02 09:17:42.857726788 +0000 UTC m=+0.030884287 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:17:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44300 DF PROTO=TCP SPT=41902 DPT=9105 SEQ=642120871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537CA230000000001030307) 
Dec 02 09:17:43 np0005541914.localdomain systemd[1]: tmp-crun.wMEOTZ.mount: Deactivated successfully.
Dec 02 09:17:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-dc91164455d8c3f143df14bb7242840af61b8f13dcda1adc54792e6afe87cac6-merged.mount: Deactivated successfully.
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]: [
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:     {
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:         "available": false,
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:         "ceph_device": false,
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:         "lsm_data": {},
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:         "lvs": [],
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:         "path": "/dev/sr0",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:         "rejected_reasons": [
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "Has a FileSystem",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "Insufficient space (<5GB)"
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:         ],
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:         "sys_api": {
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "actuators": null,
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "device_nodes": "sr0",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "human_readable_size": "482.00 KB",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "id_bus": "ata",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "model": "QEMU DVD-ROM",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "nr_requests": "2",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "partitions": {},
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "path": "/dev/sr0",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "removable": "1",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "rev": "2.5+",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "ro": "0",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "rotational": "1",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "sas_address": "",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "sas_device_handle": "",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "scheduler_mode": "mq-deadline",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "sectors": 0,
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "sectorsize": "2048",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "size": 493568.0,
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "support_discard": "0",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "type": "disk",
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:             "vendor": "QEMU"
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:         }
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]:     }
Dec 02 09:17:43 np0005541914.localdomain hardcore_swirles[126259]: ]
Dec 02 09:17:43 np0005541914.localdomain systemd[1]: libpod-b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a.scope: Deactivated successfully.
Dec 02 09:17:43 np0005541914.localdomain podman[127620]: 2025-12-02 09:17:43.860392778 +0000 UTC m=+0.040236555 container died b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_swirles, architecture=x86_64, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:17:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-21697b82c9ebe365447263db4453f93ccea58d8bfbc09a3e546e4839bd0993c8-merged.mount: Deactivated successfully.
Dec 02 09:17:43 np0005541914.localdomain podman[127620]: 2025-12-02 09:17:43.922101348 +0000 UTC m=+0.101945065 container remove b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_swirles, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, RELEASE=main, ceph=True)
Dec 02 09:17:43 np0005541914.localdomain systemd[1]: libpod-conmon-b6a06aebef59ef1d5aaebe1ee208969e7075459c742f5efa234878123282178a.scope: Deactivated successfully.
Dec 02 09:17:43 np0005541914.localdomain sudo[126143]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:44 np0005541914.localdomain sudo[127635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:17:44 np0005541914.localdomain sudo[127635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:17:44 np0005541914.localdomain sudo[127635]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:45 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38803 DF PROTO=TCP SPT=51132 DPT=9105 SEQ=2584331989 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537D5230000000001030307) 
Dec 02 09:17:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54350 DF PROTO=TCP SPT=54104 DPT=9882 SEQ=1804301071 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537E1230000000001030307) 
Dec 02 09:17:49 np0005541914.localdomain sudo[126060]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:50 np0005541914.localdomain sudo[127806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dzgpwxtpmyrafbqonayuopcjlkapntov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667070.1204638-700-265955849658234/AnsiballZ_file.py
Dec 02 09:17:50 np0005541914.localdomain sudo[127806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:50 np0005541914.localdomain python3.9[127808]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:17:50 np0005541914.localdomain sudo[127806]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:51 np0005541914.localdomain sudo[127911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-unvfvacopvfhdwgajrehfdhpmgrduvpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667070.8170419-724-182753056520520/AnsiballZ_stat.py
Dec 02 09:17:51 np0005541914.localdomain sudo[127911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:51 np0005541914.localdomain python3.9[127913]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:17:51 np0005541914.localdomain sudo[127911]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:51 np0005541914.localdomain sudo[127984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xjnulkhcnbcbowhgluhybvrbmwnacsex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667070.8170419-724-182753056520520/AnsiballZ_copy.py
Dec 02 09:17:51 np0005541914.localdomain sudo[127984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:51 np0005541914.localdomain python3.9[127986]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764667070.8170419-724-182753056520520/.source.json _original_basename=.kifw4mzc follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:17:51 np0005541914.localdomain sudo[127984]: pam_unix(sudo:session): session closed for user root
Dec 02 09:17:52 np0005541914.localdomain sudo[128076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-czemcgkachnxhblzupsziychuuylpooo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667072.4630742-778-125854801504822/AnsiballZ_podman_image.py
Dec 02 09:17:52 np0005541914.localdomain sudo[128076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:17:53 np0005541914.localdomain python3.9[128078]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 02 09:17:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31696 DF PROTO=TCP SPT=34768 DPT=9102 SEQ=2977458884 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD537F1220000000001030307) 
Dec 02 09:17:53 np0005541914.localdomain systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 77.5 (258 of 333 items), suggesting rotation.
Dec 02 09:17:53 np0005541914.localdomain systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 02 09:17:53 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:17:53 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:17:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26091 DF PROTO=TCP SPT=44254 DPT=9101 SEQ=638889751 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53800E20000000001030307) 
Dec 02 09:17:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44303 DF PROTO=TCP SPT=41902 DPT=9105 SEQ=642120871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53803220000000001030307) 
Dec 02 09:17:59 np0005541914.localdomain podman[128092]: 2025-12-02 09:17:53.188212481 +0000 UTC m=+0.052800728 image pull  quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 02 09:18:00 np0005541914.localdomain sudo[128076]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31697 DF PROTO=TCP SPT=34768 DPT=9102 SEQ=2977458884 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53811220000000001030307) 
Dec 02 09:18:02 np0005541914.localdomain sudo[128291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qbgsleygvgczzmjuirngfeqpwrflhfbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667082.235192-811-140311132801114/AnsiballZ_podman_image.py
Dec 02 09:18:02 np0005541914.localdomain sudo[128291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:18:02 np0005541914.localdomain python3.9[128293]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 02 09:18:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7702 DF PROTO=TCP SPT=60074 DPT=9882 SEQ=1240822841 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5381E220000000001030307) 
Dec 02 09:18:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7703 DF PROTO=TCP SPT=60074 DPT=9882 SEQ=1240822841 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53826230000000001030307) 
Dec 02 09:18:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19422 DF PROTO=TCP SPT=45502 DPT=9100 SEQ=1113676127 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53833220000000001030307) 
Dec 02 09:18:10 np0005541914.localdomain podman[128306]: 2025-12-02 09:18:02.838128654 +0000 UTC m=+0.043448772 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 02 09:18:11 np0005541914.localdomain sudo[128291]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:13 np0005541914.localdomain sudo[128503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dabvxscescokjxsuvemvcuzhjshjuxch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667092.8314686-847-70683887910598/AnsiballZ_podman_image.py
Dec 02 09:18:13 np0005541914.localdomain sudo[128503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:18:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18411 DF PROTO=TCP SPT=44942 DPT=9105 SEQ=67867514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5383F630000000001030307) 
Dec 02 09:18:13 np0005541914.localdomain python3.9[128505]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 02 09:18:14 np0005541914.localdomain podman[128518]: 2025-12-02 09:18:13.354654668 +0000 UTC m=+0.042979387 image pull  quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Dec 02 09:18:15 np0005541914.localdomain sudo[128503]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54606 DF PROTO=TCP SPT=56516 DPT=9102 SEQ=1703738146 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5384A8E0000000001030307) 
Dec 02 09:18:16 np0005541914.localdomain sudo[128682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wseyuryakykbdgquqetsxbxzwyjadxqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667095.8781505-874-14516217939524/AnsiballZ_podman_image.py
Dec 02 09:18:16 np0005541914.localdomain sudo[128682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:18:16 np0005541914.localdomain python3.9[128684]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 02 09:18:18 np0005541914.localdomain podman[128696]: 2025-12-02 09:18:16.437990565 +0000 UTC m=+0.040791851 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 09:18:18 np0005541914.localdomain sudo[128682]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:18 np0005541914.localdomain sudo[128861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vepsuxtkhfabwtdwtdqtqrlgsslehmah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667098.7217011-901-182705567102283/AnsiballZ_podman_image.py
Dec 02 09:18:18 np0005541914.localdomain sudo[128861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:18:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54608 DF PROTO=TCP SPT=56516 DPT=9102 SEQ=1703738146 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53856A20000000001030307) 
Dec 02 09:18:19 np0005541914.localdomain python3.9[128863]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 02 09:18:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26093 DF PROTO=TCP SPT=44254 DPT=9101 SEQ=638889751 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53861220000000001030307) 
Dec 02 09:18:23 np0005541914.localdomain podman[128875]: 2025-12-02 09:18:19.299358651 +0000 UTC m=+0.042434991 image pull  quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Dec 02 09:18:23 np0005541914.localdomain sudo[128861]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:23 np0005541914.localdomain sudo[129051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-coiyhnoietsmqvvzftdnvmohziccgcyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667103.5630522-901-176278232547584/AnsiballZ_podman_image.py
Dec 02 09:18:23 np0005541914.localdomain sudo[129051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:18:24 np0005541914.localdomain python3.9[129053]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec 02 09:18:26 np0005541914.localdomain podman[129067]: 2025-12-02 09:18:24.195961922 +0000 UTC m=+0.051867420 image pull  quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Dec 02 09:18:26 np0005541914.localdomain sudo[129051]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63788 DF PROTO=TCP SPT=60670 DPT=9101 SEQ=959441373 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53876220000000001030307) 
Dec 02 09:18:27 np0005541914.localdomain sshd[123765]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:18:27 np0005541914.localdomain systemd[1]: session-39.scope: Deactivated successfully.
Dec 02 09:18:27 np0005541914.localdomain systemd[1]: session-39.scope: Consumed 1min 28.632s CPU time.
Dec 02 09:18:27 np0005541914.localdomain systemd-logind[760]: Session 39 logged out. Waiting for processes to exit.
Dec 02 09:18:27 np0005541914.localdomain systemd-logind[760]: Removed session 39.
Dec 02 09:18:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54610 DF PROTO=TCP SPT=56516 DPT=9102 SEQ=1703738146 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53887220000000001030307) 
Dec 02 09:18:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43813 DF PROTO=TCP SPT=55314 DPT=9882 SEQ=2744117416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5388F610000000001030307) 
Dec 02 09:18:33 np0005541914.localdomain sshd[129206]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:18:33 np0005541914.localdomain sshd[129206]: Accepted publickey for zuul from 192.168.122.30 port 42960 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:18:33 np0005541914.localdomain systemd-logind[760]: New session 40 of user zuul.
Dec 02 09:18:33 np0005541914.localdomain systemd[1]: Started Session 40 of User zuul.
Dec 02 09:18:33 np0005541914.localdomain sshd[129206]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:18:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43814 DF PROTO=TCP SPT=55314 DPT=9882 SEQ=2744117416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53893620000000001030307) 
Dec 02 09:18:34 np0005541914.localdomain python3.9[129299]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:18:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43815 DF PROTO=TCP SPT=55314 DPT=9882 SEQ=2744117416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5389B620000000001030307) 
Dec 02 09:18:37 np0005541914.localdomain sudo[129495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tqlbqtkplihlvsihsuhzkycsehyonyqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667116.9057465-71-159341390311049/AnsiballZ_getent.py
Dec 02 09:18:37 np0005541914.localdomain sudo[129495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:18:37 np0005541914.localdomain python3.9[129497]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 02 09:18:37 np0005541914.localdomain sudo[129495]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:38 np0005541914.localdomain sudo[129610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhyibsidnzkjjybzojbexnqcfwhoziat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667118.6151667-106-182760544432426/AnsiballZ_setup.py
Dec 02 09:18:38 np0005541914.localdomain sudo[129610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:18:39 np0005541914.localdomain python3.9[129612]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:18:39 np0005541914.localdomain sudo[129610]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:39 np0005541914.localdomain sudo[129664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjiwharwtdfhhkatrfhlqpjafpwgkldc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667118.6151667-106-182760544432426/AnsiballZ_dnf.py
Dec 02 09:18:39 np0005541914.localdomain sudo[129664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:18:40 np0005541914.localdomain python3.9[129666]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 02 09:18:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30891 DF PROTO=TCP SPT=48796 DPT=9100 SEQ=711203781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538A9220000000001030307) 
Dec 02 09:18:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61966 DF PROTO=TCP SPT=55490 DPT=9105 SEQ=2702850273 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538B4620000000001030307) 
Dec 02 09:18:44 np0005541914.localdomain sudo[129664]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:45 np0005541914.localdomain sudo[129939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:18:45 np0005541914.localdomain sudo[129939]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:18:45 np0005541914.localdomain sudo[129939]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:45 np0005541914.localdomain sudo[129959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:18:45 np0005541914.localdomain sudo[129959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:18:45 np0005541914.localdomain sudo[130044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-abueoarcharqpglahgrblaajtwvjwkfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667125.133065-148-18781278030962/AnsiballZ_dnf.py
Dec 02 09:18:45 np0005541914.localdomain sudo[130044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:18:45 np0005541914.localdomain python3.9[130046]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:18:45 np0005541914.localdomain sudo[129959]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45430 DF PROTO=TCP SPT=42016 DPT=9102 SEQ=432651410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538BFBE0000000001030307) 
Dec 02 09:18:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43817 DF PROTO=TCP SPT=55314 DPT=9882 SEQ=2744117416 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538CB220000000001030307) 
Dec 02 09:18:48 np0005541914.localdomain sudo[130044]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:48 np0005541914.localdomain sudo[130081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:18:48 np0005541914.localdomain sudo[130081]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:18:48 np0005541914.localdomain sudo[130081]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:49 np0005541914.localdomain sudo[130185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pkmrnawqbgsfuuyggzhjvjvjeklssicq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667129.0921278-173-97592171025557/AnsiballZ_systemd.py
Dec 02 09:18:49 np0005541914.localdomain sudo[130185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:18:49 np0005541914.localdomain python3.9[130187]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 02 09:18:49 np0005541914.localdomain sudo[130185]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:51 np0005541914.localdomain python3.9[130280]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:18:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63790 DF PROTO=TCP SPT=60670 DPT=9101 SEQ=959441373 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538D7220000000001030307) 
Dec 02 09:18:52 np0005541914.localdomain sudo[130370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pbwsmfaoosmtftuenqjuodsblzqruixr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667132.1599698-226-36103118698828/AnsiballZ_sefcontext.py
Dec 02 09:18:52 np0005541914.localdomain sudo[130370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:18:52 np0005541914.localdomain python3.9[130372]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 02 09:18:54 np0005541914.localdomain kernel: SELinux:  Converting 2743 SID table entries...
Dec 02 09:18:54 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 09:18:54 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 09:18:54 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 09:18:54 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 09:18:54 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 09:18:54 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 09:18:54 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 09:18:54 np0005541914.localdomain sudo[130370]: pam_unix(sudo:session): session closed for user root
Dec 02 09:18:55 np0005541914.localdomain python3.9[130501]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:18:56 np0005541914.localdomain sudo[130597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-niuvnggvbhibpksvqkspoifrhqkmdmki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667135.8704326-280-189600919844956/AnsiballZ_dnf.py
Dec 02 09:18:56 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=18 res=1
Dec 02 09:18:56 np0005541914.localdomain sudo[130597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:18:56 np0005541914.localdomain python3.9[130599]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:18:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23292 DF PROTO=TCP SPT=59818 DPT=9101 SEQ=1199525092 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538EB620000000001030307) 
Dec 02 09:18:59 np0005541914.localdomain sudo[130597]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45434 DF PROTO=TCP SPT=42016 DPT=9102 SEQ=432651410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD538FB220000000001030307) 
Dec 02 09:19:01 np0005541914.localdomain sudo[130691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttkrbceuaaiybniyevdcixwjfxovlsgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667141.598117-304-58982401414086/AnsiballZ_command.py
Dec 02 09:19:01 np0005541914.localdomain sudo[130691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:02 np0005541914.localdomain python3.9[130693]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:19:02 np0005541914.localdomain sudo[130691]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:03 np0005541914.localdomain sudo[130936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gclvkzbetalupkxwnkxvibxfmtepgghp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667143.1558933-328-142546731071535/AnsiballZ_file.py
Dec 02 09:19:03 np0005541914.localdomain sudo[130936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7094 DF PROTO=TCP SPT=40110 DPT=9882 SEQ=1422231829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53904900000000001030307) 
Dec 02 09:19:03 np0005541914.localdomain python3.9[130938]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 09:19:03 np0005541914.localdomain sudo[130936]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:04 np0005541914.localdomain python3.9[131028]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:19:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7095 DF PROTO=TCP SPT=40110 DPT=9882 SEQ=1422231829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53908A20000000001030307) 
Dec 02 09:19:05 np0005541914.localdomain sudo[131120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fjqvpjfnfykfkobybdyeojurshhplqwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667144.8214304-382-237430366897711/AnsiballZ_dnf.py
Dec 02 09:19:05 np0005541914.localdomain sudo[131120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:05 np0005541914.localdomain python3.9[131122]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:19:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7096 DF PROTO=TCP SPT=40110 DPT=9882 SEQ=1422231829 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53910A20000000001030307) 
Dec 02 09:19:08 np0005541914.localdomain sudo[131120]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:09 np0005541914.localdomain sudo[131214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfulpjyhbjujbvbvpypbnlsrcnafhhrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667149.1838582-406-181531534245377/AnsiballZ_dnf.py
Dec 02 09:19:09 np0005541914.localdomain sudo[131214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:09 np0005541914.localdomain python3.9[131216]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:19:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21631 DF PROTO=TCP SPT=50206 DPT=9100 SEQ=2930293193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5391F220000000001030307) 
Dec 02 09:19:12 np0005541914.localdomain sudo[131214]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23692 DF PROTO=TCP SPT=45836 DPT=9105 SEQ=3907984531 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53929A20000000001030307) 
Dec 02 09:19:13 np0005541914.localdomain sudo[131308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zljharmhhzigskpmlgihbxgqtlbqdxgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667153.1422176-430-91535765662093/AnsiballZ_systemd.py
Dec 02 09:19:13 np0005541914.localdomain sudo[131308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:13 np0005541914.localdomain python3.9[131310]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 02 09:19:13 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:19:13 np0005541914.localdomain systemd-rc-local-generator[131342]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:19:13 np0005541914.localdomain systemd-sysv-generator[131345]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:19:13 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:19:14 np0005541914.localdomain sudo[131308]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:14 np0005541914.localdomain sudo[131440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pjytcxjvhhqcwuufmdtfmlboxnfoxrfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667154.401022-461-123370124685481/AnsiballZ_stat.py
Dec 02 09:19:14 np0005541914.localdomain sudo[131440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:14 np0005541914.localdomain python3.9[131442]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:19:14 np0005541914.localdomain sudo[131440]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:15 np0005541914.localdomain sudo[131532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gwvxoulcppxckqdssvaqholitdujbzqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667155.0891042-487-55632989734861/AnsiballZ_ini_file.py
Dec 02 09:19:15 np0005541914.localdomain sudo[131532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:15 np0005541914.localdomain python3.9[131534]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:19:15 np0005541914.localdomain sudo[131532]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41448 DF PROTO=TCP SPT=48142 DPT=9102 SEQ=4222163239 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53934EE0000000001030307) 
Dec 02 09:19:16 np0005541914.localdomain sudo[131626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-goeozfuzfzlvvxrjmvykfhvhptkzdphk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667155.9421666-511-1136361271975/AnsiballZ_ini_file.py
Dec 02 09:19:16 np0005541914.localdomain sudo[131626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:16 np0005541914.localdomain python3.9[131628]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:19:16 np0005541914.localdomain sudo[131626]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:16 np0005541914.localdomain sudo[131718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ukahjahletwwjgnnjnjhbdnmwbkbhyyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667156.591106-535-37468016436274/AnsiballZ_ini_file.py
Dec 02 09:19:16 np0005541914.localdomain sudo[131718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:17 np0005541914.localdomain python3.9[131720]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:19:17 np0005541914.localdomain sudo[131718]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:17 np0005541914.localdomain sudo[131810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kljddjrpazmipxnxbepecswblgjfkubp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667157.4495926-565-21763257614740/AnsiballZ_stat.py
Dec 02 09:19:17 np0005541914.localdomain sudo[131810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:17 np0005541914.localdomain python3.9[131812]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:19:17 np0005541914.localdomain sudo[131810]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:18 np0005541914.localdomain sudo[131883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kxnjtuypsjhdcnzodouvtrciudgbtikz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667157.4495926-565-21763257614740/AnsiballZ_copy.py
Dec 02 09:19:18 np0005541914.localdomain sudo[131883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:18 np0005541914.localdomain python3.9[131885]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667157.4495926-565-21763257614740/.source _original_basename=.78yhok8x follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:19:18 np0005541914.localdomain sudo[131883]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:19 np0005541914.localdomain sudo[131975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-irrpikoqjtdxhygztkpodrubfdqzpfxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667158.7550693-610-114960353479884/AnsiballZ_file.py
Dec 02 09:19:19 np0005541914.localdomain sudo[131975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41450 DF PROTO=TCP SPT=48142 DPT=9102 SEQ=4222163239 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53940E20000000001030307) 
Dec 02 09:19:19 np0005541914.localdomain python3.9[131977]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:19:19 np0005541914.localdomain sudo[131975]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:19 np0005541914.localdomain sudo[132067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zlwnuvfntdhaewuuaojazcotdezvptoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667159.3897154-634-175995036524377/AnsiballZ_edpm_os_net_config_mappings.py
Dec 02 09:19:19 np0005541914.localdomain sudo[132067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:19 np0005541914.localdomain python3.9[132069]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 02 09:19:19 np0005541914.localdomain sudo[132067]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:20 np0005541914.localdomain sudo[132159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dpbcecdcvrxbeszzsyypayfzjmnbmngj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667160.251203-662-219994722559360/AnsiballZ_file.py
Dec 02 09:19:20 np0005541914.localdomain sudo[132159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:20 np0005541914.localdomain python3.9[132161]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:19:20 np0005541914.localdomain sudo[132159]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:21 np0005541914.localdomain sudo[132251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oyelzhixoaezywcpwywrcvpuwmuqdwyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667161.0989594-691-238459937524891/AnsiballZ_stat.py
Dec 02 09:19:21 np0005541914.localdomain sudo[132251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:21 np0005541914.localdomain python3.9[132253]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:19:21 np0005541914.localdomain sudo[132251]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23294 DF PROTO=TCP SPT=59818 DPT=9101 SEQ=1199525092 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5394B220000000001030307) 
Dec 02 09:19:22 np0005541914.localdomain sudo[132324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zbpqsmglxxckjveylyfykgcdtuqpbafa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667161.0989594-691-238459937524891/AnsiballZ_copy.py
Dec 02 09:19:22 np0005541914.localdomain sudo[132324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:22 np0005541914.localdomain python3.9[132326]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667161.0989594-691-238459937524891/.source.yaml _original_basename=.563189n1 follow=False checksum=4c28d1662755c608a6ffaa942e27a2488c0a78a3 force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:19:22 np0005541914.localdomain sudo[132324]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:23 np0005541914.localdomain sudo[132416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zbtklusikyoxpxkzihrsyvbptljtavrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667162.9781594-736-280042456916024/AnsiballZ_slurp.py
Dec 02 09:19:23 np0005541914.localdomain sudo[132416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:23 np0005541914.localdomain python3.9[132418]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 02 09:19:23 np0005541914.localdomain sudo[132416]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:24 np0005541914.localdomain sshd[132433]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:19:24 np0005541914.localdomain sshd[132447]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:19:24 np0005541914.localdomain sshd[132447]: error: kex_exchange_identification: read: Connection reset by peer
Dec 02 09:19:24 np0005541914.localdomain sshd[132447]: Connection reset by 45.140.17.97 port 32458
Dec 02 09:19:25 np0005541914.localdomain sudo[132523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-drpeyyqcuohtuxgkzprzrvqwnieyqzpe ; ANSIBLE_ASYNC_DIR='~/.ansible_async' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667164.8049212-763-49185667276614/async_wrapper.py j890941955757 300 /home/zuul/.ansible/tmp/ansible-tmp-1764667164.8049212-763-49185667276614/AnsiballZ_edpm_os_net_config.py _
Dec 02 09:19:25 np0005541914.localdomain sudo[132523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:25 np0005541914.localdomain ansible-async_wrapper.py[132525]: Invoked with j890941955757 300 /home/zuul/.ansible/tmp/ansible-tmp-1764667164.8049212-763-49185667276614/AnsiballZ_edpm_os_net_config.py _
Dec 02 09:19:25 np0005541914.localdomain ansible-async_wrapper.py[132528]: Starting module and watcher
Dec 02 09:19:25 np0005541914.localdomain ansible-async_wrapper.py[132528]: Start watching 132529 (300)
Dec 02 09:19:25 np0005541914.localdomain ansible-async_wrapper.py[132529]: Start module (132529)
Dec 02 09:19:25 np0005541914.localdomain ansible-async_wrapper.py[132525]: Return async_wrapper task started.
Dec 02 09:19:25 np0005541914.localdomain sudo[132523]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:25 np0005541914.localdomain python3.9[132530]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=False
Dec 02 09:19:26 np0005541914.localdomain ansible-async_wrapper.py[132529]: Module complete (132529)
Dec 02 09:19:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58160 DF PROTO=TCP SPT=45708 DPT=9101 SEQ=1738078137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53960620000000001030307) 
Dec 02 09:19:29 np0005541914.localdomain sudo[132620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hehyxtocnprntvqiewxfmzjvvexzlzpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667168.7053454-763-97267684131902/AnsiballZ_async_status.py
Dec 02 09:19:29 np0005541914.localdomain sudo[132620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:29 np0005541914.localdomain python3.9[132622]: ansible-ansible.legacy.async_status Invoked with jid=j890941955757.132525 mode=status _async_dir=/root/.ansible_async
Dec 02 09:19:29 np0005541914.localdomain sudo[132620]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:29 np0005541914.localdomain sudo[132679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-frjukkrfnnfhoqgzudppmdbavzjnupbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667168.7053454-763-97267684131902/AnsiballZ_async_status.py
Dec 02 09:19:29 np0005541914.localdomain sudo[132679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:30 np0005541914.localdomain python3.9[132681]: ansible-ansible.legacy.async_status Invoked with jid=j890941955757.132525 mode=cleanup _async_dir=/root/.ansible_async
Dec 02 09:19:30 np0005541914.localdomain sudo[132679]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:30 np0005541914.localdomain sudo[132771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cqbzdoknnipopkatgrectiwgzzfpewgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667170.204541-829-68851442392888/AnsiballZ_stat.py
Dec 02 09:19:30 np0005541914.localdomain sudo[132771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:30 np0005541914.localdomain ansible-async_wrapper.py[132528]: Done in kid B.
Dec 02 09:19:30 np0005541914.localdomain python3.9[132773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:19:30 np0005541914.localdomain sudo[132771]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:30 np0005541914.localdomain sudo[132844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dffqhjrtswmywnehvehbdosovozbqorm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667170.204541-829-68851442392888/AnsiballZ_copy.py
Dec 02 09:19:30 np0005541914.localdomain sudo[132844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:31 np0005541914.localdomain python3.9[132846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667170.204541-829-68851442392888/.source.returncode _original_basename=.i6lpc53s follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:19:31 np0005541914.localdomain sudo[132844]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41452 DF PROTO=TCP SPT=48142 DPT=9102 SEQ=4222163239 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53971230000000001030307) 
Dec 02 09:19:31 np0005541914.localdomain sudo[132936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vpkhrruogkifcmpcbaildfhkxlpbzcly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667171.4468324-877-155512735449835/AnsiballZ_stat.py
Dec 02 09:19:31 np0005541914.localdomain sudo[132936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:31 np0005541914.localdomain python3.9[132938]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:19:31 np0005541914.localdomain sudo[132936]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:32 np0005541914.localdomain sudo[133009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dqkwxmqsomahyladcpqtuikbdkfeyygs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667171.4468324-877-155512735449835/AnsiballZ_copy.py
Dec 02 09:19:32 np0005541914.localdomain sudo[133009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:33 np0005541914.localdomain python3.9[133011]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667171.4468324-877-155512735449835/.source.cfg _original_basename=.k33vu6i6 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:19:33 np0005541914.localdomain sudo[133009]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:33 np0005541914.localdomain sudo[133101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qlziubbhstqshptyoxcwxzpcqftqnaze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667173.2122874-923-266224094418471/AnsiballZ_systemd.py
Dec 02 09:19:33 np0005541914.localdomain sudo[133101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59823 DF PROTO=TCP SPT=39654 DPT=9882 SEQ=2844155250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53979C10000000001030307) 
Dec 02 09:19:33 np0005541914.localdomain python3.9[133103]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:19:33 np0005541914.localdomain systemd[1]: Reloading Network Manager...
Dec 02 09:19:33 np0005541914.localdomain NetworkManager[5967]: <info>  [1764667173.8114] audit: op="reload" arg="0" pid=133107 uid=0 result="success"
Dec 02 09:19:33 np0005541914.localdomain NetworkManager[5967]: <info>  [1764667173.8121] config: signal: SIGHUP (no changes from disk)
Dec 02 09:19:33 np0005541914.localdomain systemd[1]: Reloaded Network Manager.
Dec 02 09:19:33 np0005541914.localdomain sudo[133101]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:34 np0005541914.localdomain sshd[129206]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:19:34 np0005541914.localdomain systemd[1]: session-40.scope: Deactivated successfully.
Dec 02 09:19:34 np0005541914.localdomain systemd[1]: session-40.scope: Consumed 35.416s CPU time.
Dec 02 09:19:34 np0005541914.localdomain systemd-logind[760]: Session 40 logged out. Waiting for processes to exit.
Dec 02 09:19:34 np0005541914.localdomain systemd-logind[760]: Removed session 40.
Dec 02 09:19:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59824 DF PROTO=TCP SPT=39654 DPT=9882 SEQ=2844155250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5397DE20000000001030307) 
Dec 02 09:19:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59825 DF PROTO=TCP SPT=39654 DPT=9882 SEQ=2844155250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53985E20000000001030307) 
Dec 02 09:19:39 np0005541914.localdomain sshd[133122]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:19:39 np0005541914.localdomain sshd[133122]: Accepted publickey for zuul from 192.168.122.30 port 33438 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:19:39 np0005541914.localdomain systemd-logind[760]: New session 41 of user zuul.
Dec 02 09:19:39 np0005541914.localdomain systemd[1]: Started Session 41 of User zuul.
Dec 02 09:19:39 np0005541914.localdomain sshd[133122]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:19:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61272 DF PROTO=TCP SPT=37962 DPT=9100 SEQ=1499661919 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53993220000000001030307) 
Dec 02 09:19:40 np0005541914.localdomain python3.9[133215]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:19:41 np0005541914.localdomain python3.9[133309]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:19:42 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21632 DF PROTO=TCP SPT=50206 DPT=9100 SEQ=2930293193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5399D220000000001030307) 
Dec 02 09:19:44 np0005541914.localdomain python3.9[133454]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:19:45 np0005541914.localdomain sshd[133122]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:19:45 np0005541914.localdomain systemd[1]: session-41.scope: Deactivated successfully.
Dec 02 09:19:45 np0005541914.localdomain systemd[1]: session-41.scope: Consumed 2.100s CPU time.
Dec 02 09:19:45 np0005541914.localdomain systemd-logind[760]: Session 41 logged out. Waiting for processes to exit.
Dec 02 09:19:45 np0005541914.localdomain systemd-logind[760]: Removed session 41.
Dec 02 09:19:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38640 DF PROTO=TCP SPT=33550 DPT=9102 SEQ=1544027499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539AA1E0000000001030307) 
Dec 02 09:19:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59827 DF PROTO=TCP SPT=39654 DPT=9882 SEQ=2844155250 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539B5220000000001030307) 
Dec 02 09:19:49 np0005541914.localdomain sudo[133470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:19:49 np0005541914.localdomain sudo[133470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:19:49 np0005541914.localdomain sudo[133470]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:49 np0005541914.localdomain sudo[133485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 09:19:49 np0005541914.localdomain sudo[133485]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:19:49 np0005541914.localdomain sudo[133485]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:49 np0005541914.localdomain sudo[133520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:19:49 np0005541914.localdomain sudo[133520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:19:49 np0005541914.localdomain sudo[133520]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:49 np0005541914.localdomain sudo[133535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:19:49 np0005541914.localdomain sudo[133535]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:19:50 np0005541914.localdomain sudo[133535]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:50 np0005541914.localdomain sshd[133582]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:19:50 np0005541914.localdomain sshd[133582]: Invalid user ubuntu from 45.148.10.240 port 39642
Dec 02 09:19:51 np0005541914.localdomain sshd[133582]: Connection closed by invalid user ubuntu 45.148.10.240 port 39642 [preauth]
Dec 02 09:19:51 np0005541914.localdomain sshd[133597]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:19:51 np0005541914.localdomain sudo[133584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:19:51 np0005541914.localdomain sudo[133584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:19:51 np0005541914.localdomain sudo[133584]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:51 np0005541914.localdomain sshd[133597]: Accepted publickey for zuul from 192.168.122.30 port 49074 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:19:51 np0005541914.localdomain systemd-logind[760]: New session 42 of user zuul.
Dec 02 09:19:51 np0005541914.localdomain systemd[1]: Started Session 42 of User zuul.
Dec 02 09:19:51 np0005541914.localdomain sshd[133597]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:19:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58162 DF PROTO=TCP SPT=45708 DPT=9101 SEQ=1738078137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539C1220000000001030307) 
Dec 02 09:19:52 np0005541914.localdomain python3.9[133692]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:19:53 np0005541914.localdomain python3.9[133786]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:19:54 np0005541914.localdomain sudo[133880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lebxjilhvqjwkqemftnmqivnyaockgha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667194.3355749-82-275222819321031/AnsiballZ_setup.py
Dec 02 09:19:54 np0005541914.localdomain sudo[133880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:54 np0005541914.localdomain python3.9[133882]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:19:55 np0005541914.localdomain sudo[133880]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:55 np0005541914.localdomain sudo[133934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ezypvspgnablwjxjssligbnrzevbuuow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667194.3355749-82-275222819321031/AnsiballZ_dnf.py
Dec 02 09:19:55 np0005541914.localdomain sudo[133934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:55 np0005541914.localdomain python3.9[133936]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:19:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30392 DF PROTO=TCP SPT=33936 DPT=9101 SEQ=2207220434 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539D5A20000000001030307) 
Dec 02 09:19:59 np0005541914.localdomain sudo[133934]: pam_unix(sudo:session): session closed for user root
Dec 02 09:19:59 np0005541914.localdomain sudo[134028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kyhurhdsfnleawdufrzmnksdyutepmtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667199.415451-118-171668023721384/AnsiballZ_setup.py
Dec 02 09:19:59 np0005541914.localdomain sudo[134028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:19:59 np0005541914.localdomain python3.9[134030]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:20:00 np0005541914.localdomain sudo[134028]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:01 np0005541914.localdomain sudo[134175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wccqdpsgfttkevewljtolaulqldbwxud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667200.6050143-151-72922330423504/AnsiballZ_file.py
Dec 02 09:20:01 np0005541914.localdomain sudo[134175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:01 np0005541914.localdomain python3.9[134177]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:20:01 np0005541914.localdomain sudo[134175]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38644 DF PROTO=TCP SPT=33550 DPT=9102 SEQ=1544027499 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539E7220000000001030307) 
Dec 02 09:20:01 np0005541914.localdomain sudo[134267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ikkfjbzcpandtqesittiyioykaujmvlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667201.5880208-175-14571502881509/AnsiballZ_command.py
Dec 02 09:20:01 np0005541914.localdomain sudo[134267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:02 np0005541914.localdomain python3.9[134269]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:20:02 np0005541914.localdomain sudo[134267]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:02 np0005541914.localdomain sudo[134371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tgkuficaacpqjmkzsjsmdyjujwaprpcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667202.438108-200-174486410240314/AnsiballZ_stat.py
Dec 02 09:20:02 np0005541914.localdomain sudo[134371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:03 np0005541914.localdomain python3.9[134373]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:20:03 np0005541914.localdomain sudo[134371]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:03 np0005541914.localdomain sudo[134419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btulexahxljbhqmeoroqtaeklstlskfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667202.438108-200-174486410240314/AnsiballZ_file.py
Dec 02 09:20:03 np0005541914.localdomain sudo[134419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:03 np0005541914.localdomain python3.9[134421]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:20:03 np0005541914.localdomain sudo[134419]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41966 DF PROTO=TCP SPT=43494 DPT=9882 SEQ=3585536734 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539EEF00000000001030307) 
Dec 02 09:20:04 np0005541914.localdomain sudo[134511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yteyjptligadkhpsgouqramcferubrns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667204.1129265-235-171402020403778/AnsiballZ_stat.py
Dec 02 09:20:04 np0005541914.localdomain sudo[134511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:04 np0005541914.localdomain python3.9[134513]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:20:04 np0005541914.localdomain sudo[134511]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41967 DF PROTO=TCP SPT=43494 DPT=9882 SEQ=3585536734 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539F2E20000000001030307) 
Dec 02 09:20:04 np0005541914.localdomain sudo[134559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nydsuqawsrhacomgaybyymirhxchskuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667204.1129265-235-171402020403778/AnsiballZ_file.py
Dec 02 09:20:04 np0005541914.localdomain sudo[134559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:05 np0005541914.localdomain python3.9[134561]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:20:05 np0005541914.localdomain sudo[134559]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:06 np0005541914.localdomain sudo[134651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-irgbhfdbofvbqwhrsaoritopryrtxqrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667205.9095485-274-119480074644777/AnsiballZ_ini_file.py
Dec 02 09:20:06 np0005541914.localdomain sudo[134651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:06 np0005541914.localdomain python3.9[134653]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:20:06 np0005541914.localdomain sudo[134651]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41968 DF PROTO=TCP SPT=43494 DPT=9882 SEQ=3585536734 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD539FAE30000000001030307) 
Dec 02 09:20:06 np0005541914.localdomain sudo[134743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twnuomydimlruagafhpuvsxabdyprkui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667206.5727549-274-65782083189845/AnsiballZ_ini_file.py
Dec 02 09:20:06 np0005541914.localdomain sudo[134743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:06 np0005541914.localdomain python3.9[134745]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:20:07 np0005541914.localdomain sudo[134743]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:07 np0005541914.localdomain auditd[726]: Audit daemon rotating log files
Dec 02 09:20:07 np0005541914.localdomain sudo[134835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-stugasbztxibcmjbjgstvhlutbnjawpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667207.1017113-274-240001047497293/AnsiballZ_ini_file.py
Dec 02 09:20:07 np0005541914.localdomain sudo[134835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:07 np0005541914.localdomain python3.9[134837]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:20:07 np0005541914.localdomain sudo[134835]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:08 np0005541914.localdomain sudo[134927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iplxamblpedrprztutybjgrdcblcqopj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667207.8088126-274-123405052223423/AnsiballZ_ini_file.py
Dec 02 09:20:08 np0005541914.localdomain sudo[134927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:08 np0005541914.localdomain python3.9[134929]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:20:08 np0005541914.localdomain sudo[134927]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:09 np0005541914.localdomain sudo[135019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdzqfakpwsxcvhjezdhlrszxrtuwlyum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667208.7976024-367-238270296383054/AnsiballZ_dnf.py
Dec 02 09:20:09 np0005541914.localdomain sudo[135019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:09 np0005541914.localdomain python3.9[135021]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:20:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55657 DF PROTO=TCP SPT=60176 DPT=9100 SEQ=184674842 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A09220000000001030307) 
Dec 02 09:20:12 np0005541914.localdomain sudo[135019]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28752 DF PROTO=TCP SPT=52848 DPT=9105 SEQ=2716686921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A14220000000001030307) 
Dec 02 09:20:14 np0005541914.localdomain sudo[135113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nroyvottgoeakiosjjtqjjlenncoylbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667213.9245598-400-66576565810360/AnsiballZ_setup.py
Dec 02 09:20:14 np0005541914.localdomain sudo[135113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:14 np0005541914.localdomain python3.9[135115]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:20:14 np0005541914.localdomain sudo[135113]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:14 np0005541914.localdomain sudo[135207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ghmupmedalwarlsvawweuvkmygrnbcmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667214.6817005-424-262660652323530/AnsiballZ_stat.py
Dec 02 09:20:14 np0005541914.localdomain sudo[135207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:15 np0005541914.localdomain python3.9[135209]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:20:15 np0005541914.localdomain sudo[135207]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:15 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23697 DF PROTO=TCP SPT=45836 DPT=9105 SEQ=3907984531 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A1F220000000001030307) 
Dec 02 09:20:16 np0005541914.localdomain sudo[135299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ydkjojgoxxwyjqjnelhxwyvopgywdinm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667215.917927-452-269278343717990/AnsiballZ_stat.py
Dec 02 09:20:16 np0005541914.localdomain sudo[135299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:16 np0005541914.localdomain python3.9[135301]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:20:16 np0005541914.localdomain sudo[135299]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:16 np0005541914.localdomain sudo[135391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qarvrikkbddzfzuzcrraqrgqdxpucwuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667216.6741076-481-261550864575335/AnsiballZ_command.py
Dec 02 09:20:16 np0005541914.localdomain sudo[135391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:17 np0005541914.localdomain python3.9[135393]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:20:17 np0005541914.localdomain sudo[135391]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:17 np0005541914.localdomain sudo[135484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tkoebtlaaslpaqkowzgkszhzhvqhwxqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667217.4020352-512-225695305174942/AnsiballZ_service_facts.py
Dec 02 09:20:17 np0005541914.localdomain sudo[135484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:17 np0005541914.localdomain python3.9[135486]: ansible-service_facts Invoked
Dec 02 09:20:18 np0005541914.localdomain network[135503]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 09:20:18 np0005541914.localdomain network[135504]: 'network-scripts' will be removed from distribution in near future.
Dec 02 09:20:18 np0005541914.localdomain network[135505]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 09:20:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41970 DF PROTO=TCP SPT=43494 DPT=9882 SEQ=3585536734 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A2B220000000001030307) 
Dec 02 09:20:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:20:21 np0005541914.localdomain sudo[135484]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:23 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48355 DF PROTO=TCP SPT=46226 DPT=9102 SEQ=3232583679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A3B220000000001030307) 
Dec 02 09:20:23 np0005541914.localdomain sudo[135718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-edzbydmnuhyktmtpnjyqcjvdvicmqcpa ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1764667223.4896817-557-146018508379157/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1764667223.4896817-557-146018508379157/args
Dec 02 09:20:23 np0005541914.localdomain sudo[135718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:23 np0005541914.localdomain sudo[135718]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:25 np0005541914.localdomain sudo[135825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lbvmmnknkuysrelcnkzvyummtngicrao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667224.4189036-590-167613727855142/AnsiballZ_dnf.py
Dec 02 09:20:25 np0005541914.localdomain sudo[135825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:25 np0005541914.localdomain python3.9[135827]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:20:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38516 DF PROTO=TCP SPT=48996 DPT=9101 SEQ=1971808841 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A4AE20000000001030307) 
Dec 02 09:20:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28755 DF PROTO=TCP SPT=52848 DPT=9105 SEQ=2716686921 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A4D220000000001030307) 
Dec 02 09:20:29 np0005541914.localdomain sudo[135825]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:30 np0005541914.localdomain sudo[135919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-msnuxelilzedvsepiiiybligeroefyly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667229.7739348-629-166324746300300/AnsiballZ_package_facts.py
Dec 02 09:20:30 np0005541914.localdomain sudo[135919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:30 np0005541914.localdomain python3.9[135921]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 02 09:20:31 np0005541914.localdomain sudo[135919]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48356 DF PROTO=TCP SPT=46226 DPT=9102 SEQ=3232583679 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A5B220000000001030307) 
Dec 02 09:20:32 np0005541914.localdomain sudo[136011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hjnevxdaorlwofrlejrqilfgjqkizunc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667231.9347064-659-129508954755631/AnsiballZ_stat.py
Dec 02 09:20:32 np0005541914.localdomain sudo[136011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:32 np0005541914.localdomain python3.9[136013]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:20:32 np0005541914.localdomain sudo[136011]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:32 np0005541914.localdomain sudo[136086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-urhzgjmyyrrzynvibzbnhlktlbguzzgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667231.9347064-659-129508954755631/AnsiballZ_copy.py
Dec 02 09:20:32 np0005541914.localdomain sudo[136086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:33 np0005541914.localdomain python3.9[136088]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667231.9347064-659-129508954755631/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:20:33 np0005541914.localdomain sudo[136086]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:34 np0005541914.localdomain sudo[136180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ytmwvscjgrvcazsvgncpmnkvkdgdfinh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667233.4481237-704-223874985687674/AnsiballZ_stat.py
Dec 02 09:20:34 np0005541914.localdomain sudo[136180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:34 np0005541914.localdomain python3.9[136182]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:20:34 np0005541914.localdomain sudo[136180]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42979 DF PROTO=TCP SPT=38882 DPT=9882 SEQ=748661140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A68220000000001030307) 
Dec 02 09:20:34 np0005541914.localdomain sudo[136255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pebgufaxgyveatuysoemghbxytszxyfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667233.4481237-704-223874985687674/AnsiballZ_copy.py
Dec 02 09:20:34 np0005541914.localdomain sudo[136255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:35 np0005541914.localdomain python3.9[136257]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667233.4481237-704-223874985687674/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:20:35 np0005541914.localdomain sudo[136255]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:36 np0005541914.localdomain sudo[136349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vbceceaahhelzwbycsiyiwvewvnkdbxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667236.1298113-768-190429840212956/AnsiballZ_lineinfile.py
Dec 02 09:20:36 np0005541914.localdomain sudo[136349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:36 np0005541914.localdomain python3.9[136351]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:20:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42980 DF PROTO=TCP SPT=38882 DPT=9882 SEQ=748661140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A70220000000001030307) 
Dec 02 09:20:36 np0005541914.localdomain sudo[136349]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:38 np0005541914.localdomain sudo[136443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbpjyxtmvhostiyligxjriplpedtejcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667237.860828-814-164937733959217/AnsiballZ_setup.py
Dec 02 09:20:38 np0005541914.localdomain sudo[136443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:38 np0005541914.localdomain python3.9[136445]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:20:38 np0005541914.localdomain sudo[136443]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:39 np0005541914.localdomain sudo[136497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbarsrswfpqwyqdlpvhewywnbzqsnyac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667237.860828-814-164937733959217/AnsiballZ_systemd.py
Dec 02 09:20:39 np0005541914.localdomain sudo[136497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:39 np0005541914.localdomain python3.9[136499]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:20:39 np0005541914.localdomain sudo[136497]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9335 DF PROTO=TCP SPT=57912 DPT=9100 SEQ=2118578433 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A7D230000000001030307) 
Dec 02 09:20:40 np0005541914.localdomain sudo[136591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fgqhpicgufovmdelznhqoeuehxkojfqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667240.6298516-861-169157137523726/AnsiballZ_setup.py
Dec 02 09:20:40 np0005541914.localdomain sudo[136591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:41 np0005541914.localdomain python3.9[136593]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:20:41 np0005541914.localdomain sudo[136591]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:41 np0005541914.localdomain sudo[136645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-clngrsrawzjgvicahyldmemdnjswnbit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667240.6298516-861-169157137523726/AnsiballZ_systemd.py
Dec 02 09:20:41 np0005541914.localdomain sudo[136645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:41 np0005541914.localdomain python3.9[136647]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:20:41 np0005541914.localdomain systemd[1]: Stopping NTP client/server...
Dec 02 09:20:41 np0005541914.localdomain chronyd[26062]: chronyd exiting
Dec 02 09:20:41 np0005541914.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Dec 02 09:20:41 np0005541914.localdomain systemd[1]: Stopped NTP client/server.
Dec 02 09:20:41 np0005541914.localdomain systemd[1]: Starting NTP client/server...
Dec 02 09:20:42 np0005541914.localdomain chronyd[136655]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Dec 02 09:20:42 np0005541914.localdomain chronyd[136655]: Frequency -30.246 +/- 0.280 ppm read from /var/lib/chrony/drift
Dec 02 09:20:42 np0005541914.localdomain chronyd[136655]: Loaded seccomp filter (level 2)
Dec 02 09:20:42 np0005541914.localdomain systemd[1]: Started NTP client/server.
Dec 02 09:20:42 np0005541914.localdomain sudo[136645]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:42 np0005541914.localdomain sshd[133597]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:20:42 np0005541914.localdomain systemd-logind[760]: Session 42 logged out. Waiting for processes to exit.
Dec 02 09:20:42 np0005541914.localdomain systemd[1]: session-42.scope: Deactivated successfully.
Dec 02 09:20:42 np0005541914.localdomain systemd[1]: session-42.scope: Consumed 28.933s CPU time.
Dec 02 09:20:42 np0005541914.localdomain systemd-logind[760]: Removed session 42.
Dec 02 09:20:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46652 DF PROTO=TCP SPT=44054 DPT=9105 SEQ=3194187341 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A89230000000001030307) 
Dec 02 09:20:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36120 DF PROTO=TCP SPT=54018 DPT=9102 SEQ=140166374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53A947E0000000001030307) 
Dec 02 09:20:47 np0005541914.localdomain sshd[136671]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:20:48 np0005541914.localdomain sshd[136671]: Accepted publickey for zuul from 192.168.122.30 port 38662 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:20:48 np0005541914.localdomain systemd-logind[760]: New session 43 of user zuul.
Dec 02 09:20:48 np0005541914.localdomain systemd[1]: Started Session 43 of User zuul.
Dec 02 09:20:48 np0005541914.localdomain sshd[136671]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:20:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36122 DF PROTO=TCP SPT=54018 DPT=9102 SEQ=140166374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AA0A30000000001030307) 
Dec 02 09:20:49 np0005541914.localdomain python3.9[136764]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:20:50 np0005541914.localdomain sudo[136858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jrqzqlmvarzyuyfybiazkvkbtzgzocci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667249.6731312-61-239578845817589/AnsiballZ_file.py
Dec 02 09:20:50 np0005541914.localdomain sudo[136858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:50 np0005541914.localdomain python3.9[136860]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:20:50 np0005541914.localdomain sudo[136858]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:50 np0005541914.localdomain sudo[136963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iuyuxyimgiuyvcxgncgbczyivdkxajhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667250.5082185-86-3190063217705/AnsiballZ_stat.py
Dec 02 09:20:50 np0005541914.localdomain sudo[136963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:51 np0005541914.localdomain python3.9[136965]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:20:51 np0005541914.localdomain sudo[136963]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:51 np0005541914.localdomain sudo[136967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:20:51 np0005541914.localdomain sudo[136967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:20:51 np0005541914.localdomain sudo[136967]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:51 np0005541914.localdomain sudo[136996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:20:51 np0005541914.localdomain sudo[136996]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:20:51 np0005541914.localdomain sudo[137041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xgpimayhiuqobaypuicgjtxgojnpkmfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667250.5082185-86-3190063217705/AnsiballZ_file.py
Dec 02 09:20:51 np0005541914.localdomain sudo[137041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:51 np0005541914.localdomain python3.9[137043]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.kigequg_ recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:20:51 np0005541914.localdomain sudo[137041]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38518 DF PROTO=TCP SPT=48996 DPT=9101 SEQ=1971808841 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AAB220000000001030307) 
Dec 02 09:20:51 np0005541914.localdomain sudo[136996]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:52 np0005541914.localdomain sudo[137166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjwhcffinooiwmoxzynqsbnatnxnhhkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667252.036441-145-144920253995012/AnsiballZ_stat.py
Dec 02 09:20:52 np0005541914.localdomain sudo[137166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:52 np0005541914.localdomain python3.9[137168]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:20:52 np0005541914.localdomain sudo[137166]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:52 np0005541914.localdomain sudo[137171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:20:52 np0005541914.localdomain sudo[137171]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:20:52 np0005541914.localdomain sudo[137171]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:52 np0005541914.localdomain sudo[137256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ardeolqhrvwtqepfvegcbbwnsinukcgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667252.036441-145-144920253995012/AnsiballZ_copy.py
Dec 02 09:20:52 np0005541914.localdomain sudo[137256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:53 np0005541914.localdomain python3.9[137258]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667252.036441-145-144920253995012/.source _original_basename=.mryekves follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:20:53 np0005541914.localdomain sudo[137256]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:53 np0005541914.localdomain sudo[137348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zgowkqmxhobmbelhkmeqhmdwrtobcqxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667253.426774-193-20994186690969/AnsiballZ_file.py
Dec 02 09:20:53 np0005541914.localdomain sudo[137348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:53 np0005541914.localdomain python3.9[137350]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:20:53 np0005541914.localdomain sudo[137348]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:54 np0005541914.localdomain sudo[137440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qroppxggcsiupgrnyexgwajmoewgqsvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667254.6229198-217-242864558704562/AnsiballZ_stat.py
Dec 02 09:20:54 np0005541914.localdomain sudo[137440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:55 np0005541914.localdomain python3.9[137442]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:20:55 np0005541914.localdomain sudo[137440]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:55 np0005541914.localdomain sudo[137513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tumamfiuztqyusgthssbgyppbwvbtexd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667254.6229198-217-242864558704562/AnsiballZ_copy.py
Dec 02 09:20:55 np0005541914.localdomain sudo[137513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:55 np0005541914.localdomain python3.9[137515]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667254.6229198-217-242864558704562/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:20:55 np0005541914.localdomain sudo[137513]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:56 np0005541914.localdomain sudo[137605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjycvooqgydmakfrxwjwmrnhtzxfussy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667256.4510932-217-207855007429213/AnsiballZ_stat.py
Dec 02 09:20:56 np0005541914.localdomain sudo[137605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:56 np0005541914.localdomain python3.9[137607]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:20:56 np0005541914.localdomain sudo[137605]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=442 DF PROTO=TCP SPT=41590 DPT=9101 SEQ=101522747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AC0220000000001030307) 
Dec 02 09:20:57 np0005541914.localdomain sudo[137678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltycswhpfpapbpmupmbpptogxfhspztt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667256.4510932-217-207855007429213/AnsiballZ_copy.py
Dec 02 09:20:57 np0005541914.localdomain sudo[137678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:57 np0005541914.localdomain python3.9[137680]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667256.4510932-217-207855007429213/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:20:57 np0005541914.localdomain sudo[137678]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:57 np0005541914.localdomain sudo[137770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-evjorzrajapoftnskstflvalbmvmpxte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667257.5699925-305-238888466014973/AnsiballZ_file.py
Dec 02 09:20:57 np0005541914.localdomain sudo[137770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:58 np0005541914.localdomain python3.9[137772]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:20:58 np0005541914.localdomain sudo[137770]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:58 np0005541914.localdomain sudo[137862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vqearjlqhjixgqmrdjnmkurovpdwddiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667258.2181497-329-70111803024865/AnsiballZ_stat.py
Dec 02 09:20:58 np0005541914.localdomain sudo[137862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:58 np0005541914.localdomain python3.9[137864]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:20:58 np0005541914.localdomain sudo[137862]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:58 np0005541914.localdomain sudo[137935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tiwpfaejonnabdnxywrypsquksctaxcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667258.2181497-329-70111803024865/AnsiballZ_copy.py
Dec 02 09:20:58 np0005541914.localdomain sudo[137935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:59 np0005541914.localdomain python3.9[137937]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667258.2181497-329-70111803024865/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:20:59 np0005541914.localdomain sudo[137935]: pam_unix(sudo:session): session closed for user root
Dec 02 09:20:59 np0005541914.localdomain sudo[138027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ivyfcphixkpjdmszyuictbvugdiizujl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667259.342999-374-145756951399975/AnsiballZ_stat.py
Dec 02 09:20:59 np0005541914.localdomain sudo[138027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:20:59 np0005541914.localdomain python3.9[138029]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:20:59 np0005541914.localdomain sudo[138027]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:00 np0005541914.localdomain sudo[138100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lhzjtvxokwbgiwfztcobbbhotodbwuhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667259.342999-374-145756951399975/AnsiballZ_copy.py
Dec 02 09:21:00 np0005541914.localdomain sudo[138100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:00 np0005541914.localdomain python3.9[138102]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667259.342999-374-145756951399975/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:00 np0005541914.localdomain sudo[138100]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:01 np0005541914.localdomain sudo[138192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gbwpnpfhiukkkdmqobebzdmoyasteyev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667260.5891354-419-277164453336821/AnsiballZ_systemd.py
Dec 02 09:21:01 np0005541914.localdomain sudo[138192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:01 np0005541914.localdomain python3.9[138194]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:21:01 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:21:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36124 DF PROTO=TCP SPT=54018 DPT=9102 SEQ=140166374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AD1220000000001030307) 
Dec 02 09:21:01 np0005541914.localdomain systemd-rc-local-generator[138218]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:21:01 np0005541914.localdomain systemd-sysv-generator[138222]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:21:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:21:01 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:21:01 np0005541914.localdomain systemd-rc-local-generator[138260]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:21:01 np0005541914.localdomain systemd-sysv-generator[138263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:21:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:21:02 np0005541914.localdomain systemd[1]: Starting EDPM Container Shutdown...
Dec 02 09:21:02 np0005541914.localdomain systemd[1]: Finished EDPM Container Shutdown.
Dec 02 09:21:02 np0005541914.localdomain sudo[138192]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:02 np0005541914.localdomain sudo[138361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blupoehvpqnkfmtpfemakmfaojxgfkub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667262.297252-443-13874881280350/AnsiballZ_stat.py
Dec 02 09:21:02 np0005541914.localdomain sudo[138361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:02 np0005541914.localdomain python3.9[138363]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:21:02 np0005541914.localdomain sudo[138361]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:03 np0005541914.localdomain sudo[138434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bpowdecfrpjyipvssenevywsxaubacny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667262.297252-443-13874881280350/AnsiballZ_copy.py
Dec 02 09:21:03 np0005541914.localdomain sudo[138434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:03 np0005541914.localdomain python3.9[138436]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667262.297252-443-13874881280350/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:03 np0005541914.localdomain sudo[138434]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2566 DF PROTO=TCP SPT=50354 DPT=9882 SEQ=920202697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AD9500000000001030307) 
Dec 02 09:21:03 np0005541914.localdomain sudo[138526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxlyjoggvluflvduasjdtnjluopkufar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667263.4632423-487-60601841819601/AnsiballZ_stat.py
Dec 02 09:21:03 np0005541914.localdomain sudo[138526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:03 np0005541914.localdomain python3.9[138528]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:21:03 np0005541914.localdomain sudo[138526]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2567 DF PROTO=TCP SPT=50354 DPT=9882 SEQ=920202697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53ADD620000000001030307) 
Dec 02 09:21:05 np0005541914.localdomain sudo[138599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-teofjbysvbjvcdmnoflxprbnmgsozbxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667263.4632423-487-60601841819601/AnsiballZ_copy.py
Dec 02 09:21:05 np0005541914.localdomain sudo[138599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:05 np0005541914.localdomain python3.9[138601]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667263.4632423-487-60601841819601/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:05 np0005541914.localdomain sudo[138599]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:05 np0005541914.localdomain sudo[138691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pizglrhrvucczchmywqnaqhducylwwfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667265.5656252-532-258832405000246/AnsiballZ_systemd.py
Dec 02 09:21:05 np0005541914.localdomain sudo[138691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:06 np0005541914.localdomain python3.9[138693]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:21:06 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:21:06 np0005541914.localdomain systemd-rc-local-generator[138719]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:21:06 np0005541914.localdomain systemd-sysv-generator[138725]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:21:06 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:21:06 np0005541914.localdomain systemd[1]: Starting Create netns directory...
Dec 02 09:21:06 np0005541914.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 02 09:21:06 np0005541914.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 02 09:21:06 np0005541914.localdomain systemd[1]: Finished Create netns directory.
Dec 02 09:21:06 np0005541914.localdomain sudo[138691]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2568 DF PROTO=TCP SPT=50354 DPT=9882 SEQ=920202697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AE5630000000001030307) 
Dec 02 09:21:08 np0005541914.localdomain python3.9[138827]: ansible-ansible.builtin.service_facts Invoked
Dec 02 09:21:08 np0005541914.localdomain network[138844]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 09:21:08 np0005541914.localdomain network[138845]: 'network-scripts' will be removed from distribution in near future.
Dec 02 09:21:08 np0005541914.localdomain network[138846]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 09:21:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39302 DF PROTO=TCP SPT=40848 DPT=9100 SEQ=2009350041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AF3230000000001030307) 
Dec 02 09:21:10 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:21:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18553 DF PROTO=TCP SPT=45392 DPT=9105 SEQ=2193856773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53AFE630000000001030307) 
Dec 02 09:21:13 np0005541914.localdomain sudo[139046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yrwlvfphuojjyaertaagrnkcwwzcpnag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667273.5136456-611-84129631988213/AnsiballZ_stat.py
Dec 02 09:21:13 np0005541914.localdomain sudo[139046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:13 np0005541914.localdomain python3.9[139048]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:21:13 np0005541914.localdomain sudo[139046]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:14 np0005541914.localdomain sudo[139121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwpcbpcbaxdivqgjcledxlbspxzppiyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667273.5136456-611-84129631988213/AnsiballZ_copy.py
Dec 02 09:21:14 np0005541914.localdomain sudo[139121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:14 np0005541914.localdomain python3.9[139123]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667273.5136456-611-84129631988213/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:14 np0005541914.localdomain sudo[139121]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:15 np0005541914.localdomain sudo[139214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhidzvzcydhegbmopneardpxxyeolxzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667275.3432317-656-237655656393250/AnsiballZ_systemd.py
Dec 02 09:21:15 np0005541914.localdomain sudo[139214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:15 np0005541914.localdomain python3.9[139216]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:21:15 np0005541914.localdomain systemd[1]: Reloading OpenSSH server daemon...
Dec 02 09:21:15 np0005541914.localdomain sshd[119204]: Received SIGHUP; restarting.
Dec 02 09:21:15 np0005541914.localdomain systemd[1]: Reloaded OpenSSH server daemon.
Dec 02 09:21:15 np0005541914.localdomain sshd[119204]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:21:15 np0005541914.localdomain sshd[119204]: Server listening on 0.0.0.0 port 22.
Dec 02 09:21:15 np0005541914.localdomain sshd[119204]: Server listening on :: port 22.
Dec 02 09:21:15 np0005541914.localdomain sudo[139214]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42345 DF PROTO=TCP SPT=52450 DPT=9102 SEQ=3839931986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B09AF0000000001030307) 
Dec 02 09:21:17 np0005541914.localdomain sudo[139310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-robiyqsyqpqvijetomfbtjjdwtobvgys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667277.4533877-680-38121299005460/AnsiballZ_file.py
Dec 02 09:21:17 np0005541914.localdomain sudo[139310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:17 np0005541914.localdomain python3.9[139312]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:17 np0005541914.localdomain sudo[139310]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:18 np0005541914.localdomain sudo[139402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cbyyqoqfusbxdgvuinywxryjvxbotrsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667278.148781-704-85892581619172/AnsiballZ_stat.py
Dec 02 09:21:18 np0005541914.localdomain sudo[139402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:18 np0005541914.localdomain python3.9[139404]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:21:18 np0005541914.localdomain sudo[139402]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2570 DF PROTO=TCP SPT=50354 DPT=9882 SEQ=920202697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B15220000000001030307) 
Dec 02 09:21:18 np0005541914.localdomain sudo[139475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwgcahuxlidynodpichevvfjtkgtjptk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667278.148781-704-85892581619172/AnsiballZ_copy.py
Dec 02 09:21:18 np0005541914.localdomain sudo[139475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:19 np0005541914.localdomain python3.9[139477]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667278.148781-704-85892581619172/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:19 np0005541914.localdomain sudo[139475]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:19 np0005541914.localdomain sudo[139567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlnhlwiyagepnjuvjmyzzwzuxtotaaxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667279.5280216-758-13965064578041/AnsiballZ_timezone.py
Dec 02 09:21:19 np0005541914.localdomain sudo[139567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:20 np0005541914.localdomain python3.9[139569]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 02 09:21:20 np0005541914.localdomain systemd[1]: Starting Time & Date Service...
Dec 02 09:21:20 np0005541914.localdomain systemd[1]: Started Time & Date Service.
Dec 02 09:21:20 np0005541914.localdomain sudo[139567]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:20 np0005541914.localdomain sudo[139663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mqngkahxclzbeynzwmvpovbfasldfmwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667280.5340936-785-38048097125955/AnsiballZ_file.py
Dec 02 09:21:20 np0005541914.localdomain sudo[139663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:21 np0005541914.localdomain python3.9[139665]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:21 np0005541914.localdomain sudo[139663]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:21 np0005541914.localdomain sudo[139755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-szelgnacjwtvuktkdfpzldybzrbjdjdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667281.1986651-809-88046430470728/AnsiballZ_stat.py
Dec 02 09:21:21 np0005541914.localdomain sudo[139755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:21 np0005541914.localdomain python3.9[139757]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:21:21 np0005541914.localdomain sudo[139755]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:21 np0005541914.localdomain sudo[139828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kztgpzpsuyvaxerztkaztmttsasypuhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667281.1986651-809-88046430470728/AnsiballZ_copy.py
Dec 02 09:21:21 np0005541914.localdomain sudo[139828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:22 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=444 DF PROTO=TCP SPT=41590 DPT=9101 SEQ=101522747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B21220000000001030307) 
Dec 02 09:21:22 np0005541914.localdomain python3.9[139830]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667281.1986651-809-88046430470728/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:22 np0005541914.localdomain sudo[139828]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:22 np0005541914.localdomain sudo[139920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-faocqdhiwseresvhvqkowkctvktqgscd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667282.3685393-854-118311154061231/AnsiballZ_stat.py
Dec 02 09:21:22 np0005541914.localdomain sudo[139920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:22 np0005541914.localdomain python3.9[139922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:21:22 np0005541914.localdomain sudo[139920]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:23 np0005541914.localdomain sudo[139993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pupiijnsswpxyodvurjzinybgigmwvnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667282.3685393-854-118311154061231/AnsiballZ_copy.py
Dec 02 09:21:23 np0005541914.localdomain sudo[139993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:23 np0005541914.localdomain python3.9[139995]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667282.3685393-854-118311154061231/.source.yaml _original_basename=.cbtobiq8 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:23 np0005541914.localdomain sudo[139993]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:23 np0005541914.localdomain sudo[140085]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdfbmoexhviqvdsvdyugiepgmzebszhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667283.5438411-899-234846099999234/AnsiballZ_stat.py
Dec 02 09:21:23 np0005541914.localdomain sudo[140085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:24 np0005541914.localdomain python3.9[140087]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:21:24 np0005541914.localdomain sudo[140085]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:24 np0005541914.localdomain sudo[140160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hysvootqaovsbgbrohjcpyfoktrtobse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667283.5438411-899-234846099999234/AnsiballZ_copy.py
Dec 02 09:21:24 np0005541914.localdomain sudo[140160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:24 np0005541914.localdomain python3.9[140162]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667283.5438411-899-234846099999234/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:24 np0005541914.localdomain sudo[140160]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:25 np0005541914.localdomain sudo[140252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tijdthjxjfkznjtchkfcasjxvuqvxymd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667284.803592-944-212220920149593/AnsiballZ_command.py
Dec 02 09:21:25 np0005541914.localdomain sudo[140252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:25 np0005541914.localdomain python3.9[140254]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:21:25 np0005541914.localdomain sudo[140252]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:25 np0005541914.localdomain sudo[140345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zngdwhnwlsjrnguspjndyfqqeeduefmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667285.6204119-968-270889530294471/AnsiballZ_command.py
Dec 02 09:21:25 np0005541914.localdomain sudo[140345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:26 np0005541914.localdomain python3.9[140347]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:21:26 np0005541914.localdomain sudo[140345]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27852 DF PROTO=TCP SPT=49166 DPT=9101 SEQ=2935783225 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B35220000000001030307) 
Dec 02 09:21:27 np0005541914.localdomain sudo[140438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fkvdtomtchjzkgrcnypldwjkmtygcmus ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764667286.2996774-991-133608380280780/AnsiballZ_edpm_nftables_from_files.py
Dec 02 09:21:27 np0005541914.localdomain sudo[140438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:27 np0005541914.localdomain python3[140440]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 02 09:21:27 np0005541914.localdomain sudo[140438]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18556 DF PROTO=TCP SPT=45392 DPT=9105 SEQ=2193856773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B37220000000001030307) 
Dec 02 09:21:27 np0005541914.localdomain sudo[140530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lnyjhqufyaniqbpynfbziifnycrrmgsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667287.6533875-1016-184812338118586/AnsiballZ_stat.py
Dec 02 09:21:27 np0005541914.localdomain sudo[140530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:28 np0005541914.localdomain python3.9[140532]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:21:28 np0005541914.localdomain sudo[140530]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:28 np0005541914.localdomain sudo[140603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-akqfennedikbudgpgrmzcmlzluidrynh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667287.6533875-1016-184812338118586/AnsiballZ_copy.py
Dec 02 09:21:28 np0005541914.localdomain sudo[140603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:28 np0005541914.localdomain python3.9[140605]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667287.6533875-1016-184812338118586/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:28 np0005541914.localdomain sudo[140603]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:29 np0005541914.localdomain sudo[140695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbivzpmpcurvytfybbtrwtwceacdtsni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667289.0495946-1061-227284860251009/AnsiballZ_stat.py
Dec 02 09:21:29 np0005541914.localdomain sudo[140695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:29 np0005541914.localdomain python3.9[140697]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:21:29 np0005541914.localdomain sudo[140695]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:29 np0005541914.localdomain sudo[140768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-brrovzweiadyxusxdojgtcczvwnboigr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667289.0495946-1061-227284860251009/AnsiballZ_copy.py
Dec 02 09:21:29 np0005541914.localdomain sudo[140768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:30 np0005541914.localdomain python3.9[140770]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667289.0495946-1061-227284860251009/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:30 np0005541914.localdomain sudo[140768]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:30 np0005541914.localdomain sudo[140860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qamiacuyqnbdxtqxcwigrmulgtvzvhcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667290.3169687-1106-18509187840266/AnsiballZ_stat.py
Dec 02 09:21:30 np0005541914.localdomain sudo[140860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:30 np0005541914.localdomain python3.9[140862]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:21:30 np0005541914.localdomain sudo[140860]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:31 np0005541914.localdomain sudo[140934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-orgwjghiiokpunogxgojxdgjftmywfxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667290.3169687-1106-18509187840266/AnsiballZ_copy.py
Dec 02 09:21:31 np0005541914.localdomain sudo[140934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42349 DF PROTO=TCP SPT=52450 DPT=9102 SEQ=3839931986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B45220000000001030307) 
Dec 02 09:21:31 np0005541914.localdomain python3.9[140936]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667290.3169687-1106-18509187840266/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:31 np0005541914.localdomain sudo[140934]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:31 np0005541914.localdomain sudo[141026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtpuuqejnoquejrlzbwhmucsbstiieze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667291.5028868-1152-170243589156562/AnsiballZ_stat.py
Dec 02 09:21:31 np0005541914.localdomain sudo[141026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:31 np0005541914.localdomain python3.9[141028]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:21:31 np0005541914.localdomain sudo[141026]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:32 np0005541914.localdomain sudo[141099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dcdthxvnivhlswrtwgfegwqijtijyosc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667291.5028868-1152-170243589156562/AnsiballZ_copy.py
Dec 02 09:21:32 np0005541914.localdomain sudo[141099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:32 np0005541914.localdomain python3.9[141101]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667291.5028868-1152-170243589156562/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:32 np0005541914.localdomain sudo[141099]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:32 np0005541914.localdomain sudo[141191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tyvuevmtvbqfmchhpiucbsffttmreatt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667292.6886299-1196-270305456228576/AnsiballZ_stat.py
Dec 02 09:21:33 np0005541914.localdomain sudo[141191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:33 np0005541914.localdomain python3.9[141193]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:21:33 np0005541914.localdomain sudo[141191]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:33 np0005541914.localdomain sudo[141264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cnczknoimxbkccqdhwyihekeikxvbchn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667292.6886299-1196-270305456228576/AnsiballZ_copy.py
Dec 02 09:21:33 np0005541914.localdomain sudo[141264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:33 np0005541914.localdomain python3.9[141266]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667292.6886299-1196-270305456228576/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:33 np0005541914.localdomain sudo[141264]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:34 np0005541914.localdomain sudo[141356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvzxsetytzxjhlluzcthsgwfqwqvtczl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667294.141509-1240-53054082525487/AnsiballZ_file.py
Dec 02 09:21:34 np0005541914.localdomain sudo[141356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:34 np0005541914.localdomain python3.9[141358]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:34 np0005541914.localdomain sudo[141356]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:35 np0005541914.localdomain sudo[141448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylpfseaqkdyqvtjmncdqkozwoksxtdug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667294.7744563-1265-213632230750262/AnsiballZ_command.py
Dec 02 09:21:35 np0005541914.localdomain sudo[141448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:35 np0005541914.localdomain python3.9[141450]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:21:35 np0005541914.localdomain sudo[141448]: pam_unix(sudo:session): session closed for user root
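[annotation] The check-only validation above concatenates the five rule fragments staged under /etc/nftables/ earlier in this run and feeds them to nft in dry-run mode. A minimal shell equivalent of what the logged ansible.legacy.command task executes (paths and flags taken verbatim from the log):

    # Validate the assembled EDPM ruleset without loading it (nft -c = check only)
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -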
Dec 02 09:21:35 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2571 DF PROTO=TCP SPT=50354 DPT=9882 SEQ=920202697 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B55220000000001030307) 
Dec 02 09:21:35 np0005541914.localdomain sudo[141543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vfwivgdtoqdqliiistjtopclryqznfag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667295.4605312-1289-255162109349564/AnsiballZ_blockinfile.py
Dec 02 09:21:35 np0005541914.localdomain sudo[141543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:36 np0005541914.localdomain python3.9[141545]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:36 np0005541914.localdomain sudo[141543]: pam_unix(sudo:session): session closed for user root
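[annotation] Based on the blockinfile parameters logged above (marker "# {mark} ANSIBLE MANAGED BLOCK", validated with "nft -c -f %s"), the persistent include block written to /etc/sysconfig/nftables.conf would look roughly like the sketch below; this is a reconstruction from the logged arguments, not captured file content:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK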
Dec 02 09:21:36 np0005541914.localdomain sudo[141636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bujmunugohzxnhrdhoxhxtabtoknbwkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667296.322704-1316-153040613291132/AnsiballZ_file.py
Dec 02 09:21:36 np0005541914.localdomain sudo[141636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:36 np0005541914.localdomain python3.9[141638]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:36 np0005541914.localdomain sudo[141636]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:37 np0005541914.localdomain sudo[141728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjidtvulbatfdnzauipetatnzxqrdpwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667296.8350718-1316-179866433219643/AnsiballZ_file.py
Dec 02 09:21:37 np0005541914.localdomain sudo[141728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:37 np0005541914.localdomain python3.9[141730]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:37 np0005541914.localdomain sudo[141728]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:37 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42984 DF PROTO=TCP SPT=38882 DPT=9882 SEQ=748661140 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B5F220000000001030307) 
Dec 02 09:21:38 np0005541914.localdomain sudo[141820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-spupxblswvzihjdfffxqereswcwyjizh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667297.9872143-1361-197129084274938/AnsiballZ_mount.py
Dec 02 09:21:38 np0005541914.localdomain sudo[141820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:38 np0005541914.localdomain python3.9[141822]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 02 09:21:38 np0005541914.localdomain sudo[141820]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:39 np0005541914.localdomain sudo[141913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wouneodlieicytqjfdhuetvoeaizgvuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667298.8585792-1361-182770929022987/AnsiballZ_mount.py
Dec 02 09:21:39 np0005541914.localdomain sudo[141913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:39 np0005541914.localdomain python3.9[141915]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 02 09:21:39 np0005541914.localdomain sudo[141913]: pam_unix(sudo:session): session closed for user root
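[annotation] The two ansible.posix.mount invocations above (state=mounted, boot=True) mount hugetlbfs with explicit page sizes and persist the mounts. A rough shell/fstab sketch of the equivalent, assuming the module's usual behaviour of mounting immediately and adding an fstab entry:

    # Mount the 1G and 2M hugepage filesystems created just before
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # Persisted roughly as fstab entries such as:
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0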
Dec 02 09:21:40 np0005541914.localdomain sshd[136671]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:21:40 np0005541914.localdomain systemd-logind[760]: Session 43 logged out. Waiting for processes to exit.
Dec 02 09:21:40 np0005541914.localdomain systemd[1]: session-43.scope: Deactivated successfully.
Dec 02 09:21:40 np0005541914.localdomain systemd[1]: session-43.scope: Consumed 28.259s CPU time.
Dec 02 09:21:40 np0005541914.localdomain systemd-logind[760]: Removed session 43.
Dec 02 09:21:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20774 DF PROTO=TCP SPT=53124 DPT=9100 SEQ=695088761 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B69230000000001030307) 
Dec 02 09:21:45 np0005541914.localdomain sshd[141931]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:21:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7468 DF PROTO=TCP SPT=54218 DPT=9102 SEQ=4218921566 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B7EDE0000000001030307) 
Dec 02 09:21:46 np0005541914.localdomain sshd[141931]: Accepted publickey for zuul from 192.168.122.30 port 45606 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:21:46 np0005541914.localdomain systemd-logind[760]: New session 44 of user zuul.
Dec 02 09:21:46 np0005541914.localdomain systemd[1]: Started Session 44 of User zuul.
Dec 02 09:21:46 np0005541914.localdomain sshd[141931]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:21:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46657 DF PROTO=TCP SPT=44054 DPT=9105 SEQ=3194187341 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B7F220000000001030307) 
Dec 02 09:21:46 np0005541914.localdomain sudo[142024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zduptykrohfhqkwnbivqhanhtvrpgqnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667306.149128-23-208009450807319/AnsiballZ_tempfile.py
Dec 02 09:21:46 np0005541914.localdomain sudo[142024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:46 np0005541914.localdomain python3.9[142026]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 02 09:21:46 np0005541914.localdomain sudo[142024]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:48 np0005541914.localdomain sudo[142116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tqtkbwvdunhuoojhuhxferzpuvxtskeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667308.1790917-95-173790090249964/AnsiballZ_stat.py
Dec 02 09:21:48 np0005541914.localdomain sudo[142116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:48 np0005541914.localdomain python3.9[142118]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:21:48 np0005541914.localdomain sudo[142116]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:50 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56992 DF PROTO=TCP SPT=44614 DPT=9101 SEQ=3857717863 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B8EA10000000001030307) 
Dec 02 09:21:50 np0005541914.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 02 09:21:50 np0005541914.localdomain sudo[142212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yulehpqdpsdzsaggbqehzkrrbrxybbyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667310.2097564-143-251835896964312/AnsiballZ_slurp.py
Dec 02 09:21:50 np0005541914.localdomain sudo[142212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:50 np0005541914.localdomain python3.9[142214]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 02 09:21:50 np0005541914.localdomain sudo[142212]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:51 np0005541914.localdomain sudo[142304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mifhhgajsfntzjexjnehcvnygshgqzly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667311.5976827-191-157810556419856/AnsiballZ_stat.py
Dec 02 09:21:51 np0005541914.localdomain sudo[142304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:52 np0005541914.localdomain python3.9[142306]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.2rgthf84 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:21:52 np0005541914.localdomain sudo[142304]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:52 np0005541914.localdomain sudo[142379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ykmnzvvopqtlqogmgfhxsrnjwagmzjjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667311.5976827-191-157810556419856/AnsiballZ_copy.py
Dec 02 09:21:52 np0005541914.localdomain sudo[142379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:52 np0005541914.localdomain sudo[142382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:21:52 np0005541914.localdomain sudo[142382]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:21:52 np0005541914.localdomain sudo[142382]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:52 np0005541914.localdomain sudo[142397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:21:52 np0005541914.localdomain sudo[142397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:21:52 np0005541914.localdomain python3.9[142381]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.2rgthf84 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667311.5976827-191-157810556419856/.source.2rgthf84 _original_basename=._7njjbyl follow=False checksum=9674ae9a797ab88dd38896b99c4666372998fea7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:52 np0005541914.localdomain sudo[142379]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:53 np0005541914.localdomain systemd[1]: tmp-crun.rALYLK.mount: Deactivated successfully.
Dec 02 09:21:53 np0005541914.localdomain podman[142499]: 2025-12-02 09:21:53.492390835 +0000 UTC m=+0.087003534 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.buildah.version=1.41.4, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, version=7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main)
Dec 02 09:21:53 np0005541914.localdomain podman[142499]: 2025-12-02 09:21:53.604310679 +0000 UTC m=+0.198923478 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vcs-type=git)
Dec 02 09:21:53 np0005541914.localdomain sudo[142397]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:53 np0005541914.localdomain sudo[142567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:21:54 np0005541914.localdomain sudo[142567]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:21:54 np0005541914.localdomain sudo[142567]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:54 np0005541914.localdomain sudo[142582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:21:54 np0005541914.localdomain sudo[142582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:21:54 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=445 DF PROTO=TCP SPT=41590 DPT=9101 SEQ=101522747 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53B9F220000000001030307) 
Dec 02 09:21:54 np0005541914.localdomain sudo[142692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtlpbnxrxurlqxgaprpdhorqocmvxblc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667314.0719829-282-17344771678129/AnsiballZ_setup.py
Dec 02 09:21:54 np0005541914.localdomain sudo[142692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:54 np0005541914.localdomain sudo[142582]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:54 np0005541914.localdomain python3.9[142699]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:21:54 np0005541914.localdomain sudo[142692]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:55 np0005541914.localdomain sudo[142721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:21:55 np0005541914.localdomain sudo[142721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:21:55 np0005541914.localdomain sudo[142721]: pam_unix(sudo:session): session closed for user root
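[annotation] In parallel with the zuul-driven tasks, the ceph-admin user drives the host's copy of cephadm twice: once to list deployed daemons against the pinned RHCS 7 image, and once to gather host facts for the orchestrator. The commands as logged (the long /var/lib/ceph/... path is the cluster FSID directory):

    # List Ceph daemons on this host
    python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 \
        --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
    # Collect host facts
    python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 \
        --timeout 895 gather-facts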
Dec 02 09:21:56 np0005541914.localdomain sudo[142811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gftfcyqftrmiclqslljfnahtnndxkqqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667315.885089-330-39283674172675/AnsiballZ_blockinfile.py
Dec 02 09:21:56 np0005541914.localdomain sudo[142811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:56 np0005541914.localdomain python3.9[142813]: ansible-ansible.builtin.blockinfile Invoked with block=np0005541914.localdomain,192.168.122.108,np0005541914* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCHh7115UF/t7QzqWY1fk2wHPOuHuMPRhaYTC/yfMWr+nqJ5/TNZTuFxq0aW/1gHanB2usmC0wpWf4c1KsPZ71Ehs/j5nV1wfGtNVEq5Zj7uhs0ea/SQToF2RS406RoIzJW6ogv4Kl3nxGEK6c44WCu8+Ki98dCQ4wesh5kSBkqgiSq2IZkL2gjoAKeXdracGRJ596gTB0yfsMl/qdJDneVHMq/rptlFhabLeiEN+7C0o0gsZwYsxCd2oSB+DD9KfXhWIBeXRr1B7mFcMZpGNG7pG0d1IjYOUmqjvVpECHrLvjiitS3800ZEFwygU4sbM/DWHelobjtJB/fxxPTtGNlbH4MK/OGFh2mm5jB1LMqWSsifA/ZAHASAAffWDwKtF+xJ06OHRDT6gjzOd7VJpc8kR9Jn9pT7UnjypnrM12GtrO0CH8Lf3rin71kf9iZRIphqWXhiLN3G/mdJC2XPIxJp7NQ1Mqc5IhHciCv80bvsGrzLCtAr16/b+cPYo7vIGU=
                                                            np0005541914.localdomain,192.168.122.108,np0005541914* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGGWCSLJV2aPwMTfOaIZ+xjv1QFJPyldmo6H+V71SAll
                                                            np0005541914.localdomain,192.168.122.108,np0005541914* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFoWDrioobP7nWM6onZB+AZBuk/AQQ7zXxT58XHHnNVCXAZxKDdYUpn8CqfQBodfVNr1sWDyzBr0D5lMGYZypzo=
                                                            np0005541909.localdomain,192.168.122.103,np0005541909* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0b4xecJ9cZa0s7FCPYSs6kLrfHyBh8YL/KS+tj3DrfUU03KCcmbHQesHBBcRxB6PDYjueAsvx5rGXzjMojO5Jz2DlZoSPaBM9tm/HAKWhaiL+seTfrRsNLFvxfWyxU/x0FUSOTf01ZThrT/IJ5WkfJD4UgZQSzUPucffImwFt4y2oERfa96sAwSwE4o5RuLzRdKuWB3npxcApj2/3+pyWR59yubokMiU506MI37Hbg8xCaC5qn4ISKB8WBJObICoNQoatrbcqSOrrUEFv/vcWANDYUEw6XzTTwkuIu6dJPJiJh8j5TzDnnvKSK+f3eEG7OCiz814F+o82tDo7U6k5ERO0xmElXdOlPYsiuM5+CTQmmm6xmFN2L3HIvZlyPn3oF26oV+INAd3XsF5MIFcfpGUXH5b04gE7LhpdVLVfLGGYSVWjZhzxl/Wa0OiHoMaDUYoN2bPG0h5SPUDIyDv2jW3FDxhOWANR/9ITUCQpz3gSwl/1AVN3HCWf+RUeLuE=
                                                            np0005541909.localdomain,192.168.122.103,np0005541909* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA7RcuDge6wF/g+qZxY6m8WG6IEuMAvvdJQnnCjLs+Z1
                                                            np0005541909.localdomain,192.168.122.103,np0005541909* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP5sNXub2DBEGdchrrXonnWitouBamsCHQlfu1Eq48/u/VA5EJmoCHsMI/KSOMxMnSS+uUeGceHpl9AyeHtY2NU=
                                                            np0005541910.localdomain,192.168.122.104,np0005541910* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOmh2HMG9Y5+9VA8Ap3pHIOQhG/GfAsIqnmfJJuGwKb8N2T9r1Yd+kmoP7Xs41cto4h6Fw1f4Pa6Tw050y3LmwpXvDN+2Qq1qYI0rT4pqOiYBkyMbOQhqLF5tA+MNYGdibQj/fWkG+gKa8wwzkTgCEAn6PgEZiqR9LFJrqr4RfQDxaWCLmXM96+AVGG5/SXWx5u6T3lanUnpcfISvB2yx4HifsINAHPgLR4weEzra/b7e0QNyxItxvlDseasPyeYHD3Hdi2PNuUmoZC+zWEoWoU3BMAQeXR7lmEcdtyK5wr0pIBmf0CKFdvGrdVWrzAUbDc8ZHXmWyKlWHHZvHch1V2r/S4J2983UsG3sJwM8954Tj325LgS1nldIYBSjwMGfhZFYzmy9obAN7ZSV5qwD0h+rxt/I9RNdXS3SRu9tOZI+AN59De44cF23OJS5MfrfnB7JUnBOv4ScVML4rPjPx9L4/omOlfbBVJx42b1RlboXEk52J7Aa3xRseA4Elvuk=
                                                            np0005541910.localdomain,192.168.122.104,np0005541910* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIx+QMGsIWmPvyCeFcRzy+Z3KrW6oIHjAujq2mTiluKE
                                                            np0005541910.localdomain,192.168.122.104,np0005541910* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPiujdvwsNBrUjQMVBj6TBCEcpbfZIgHcCBzjuRUWPac2ltR7NNO2aF0KEDTH4F4qoWK7fw0fn0UFKuTrY4INV8=
                                                            np0005541913.localdomain,192.168.122.107,np0005541913* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDYXeXWwxJkeR9i2V9hYiVGqEGSbkwFIKUbTm3m8em9m5o380jUORSYXOITLm0CAl/waSYEc4fiPu2sAYDISig1zqAItfAODEdayFoKK63ui7vq92ZPKayhmjahj2jNo3KMAZ5aFzNBcowsRooRqLNJ7R9BAQ4H8kdqL9xdRjy5bvfWJHGrm8PvWcUaRYebCQ35j+7nHq4RFRYsd964NKjrq+FxkjyOSs2AxE+SHYOVgAAd8Jp2uyr3dR56IzWy8WqQzPj6tlsER8+/Kt1lASATcuMFeteA0M7tbjZxEIAPyfktPVQOq9mgeFOFmTf8oTbt94Rk2QmyNI4oE7sQHFWo9UWrvZd9LpDDartUls5uHunn4SzvgvtRimO3e1hNXn0VQLGNfSUwGij0R3iOYJpACHgly3J7sbX3tROvwRpawZlGIGZY46vaYRMXGClXz+lUCa6ZZO+f6BX6bEt0VfYWX8IVmnH2oJXEJBYJPVXZML+OcczJc8zEfHxBylpZn4k=
                                                            np0005541913.localdomain,192.168.122.107,np0005541913* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEGKyrd1x8JIpNEVeXNPog2z4+Z1Gyh32lFLn9uh2H3I
                                                            np0005541913.localdomain,192.168.122.107,np0005541913* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAGOHjEHyYQ71qgLjQWD4LGL0rAKniN6cBK/Yx+b+dGqDveVXKGlkaXQOOfCp4GEX5fDI6bqBjCB02Ool/6wTT8=
                                                            np0005541911.localdomain,192.168.122.105,np0005541911* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzI5YTDMvj8zBlKqeNplIMBQQJ43gcDfB5cRE7DwwpHBRcqOuhSoIm7r0C3h5ABQJYkTXEGRY0i5HC5eMErD7SKRJJ3q9aZ+uv4VvUGagr7M9S/JGUjZej2+ACXZ7L+d9MLt389xVtIuuNh5Cy3U8muIBEAS1b4mXOJ95eiW3M5b2hxmol0DTjUMX/bLtJU/MQ09wE72pj6Uqz/CCFsUwDBZlQ3jcVK74fYwgItCNkLJ+D2E4wTl4Ei8XOlEY9cV8B1E+aK6iUKesiya0Vfi/Ant77ONQDeCsI21AJDbi5wtUXg4qXBu3Z/zObZiEmedzqWj7K46Nv8lDlQoeoKuxzTCwxgn0PaorQgkUvUdAyk5Qo4BaUOv8ojICiZvRy9QZ3jblr1dCM/Jy3g4Sz6Hz4QHxtV21nUw//sBN2X6jCHQVGTJeZrbVvgGNcGiqcCzQTW/4NoiOB0ho7RVNtD+oYb5UE+Lh+Ibua3bv7zfnLjsw1GiyclsCgrQTKBl8Netc=
                                                            np0005541911.localdomain,192.168.122.105,np0005541911* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILT7VjxC/vKVj4DmZTIjCQwrK+UN5wih4A5ddEFb5wLX
                                                            np0005541911.localdomain,192.168.122.105,np0005541911* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEJ5o8j1+/xDc8zMV2yChXY+U6nf1GT6sS3GGAkd+aR/6mUWuiQzjkFESsidYGPHaqz55q4REeXXQtW6T8mmqzU=
                                                            np0005541912.localdomain,192.168.122.106,np0005541912* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKgyHtHHKWFdaOqx5AsvOJPmNsbjVxvzh05A7Hy02rgbdg4zBUd/E0mqG+tYVGg12fIdbRNgjUfM+PEGJznZdEQnZCtLgMhbpRC33IbCXMw7Ev/tRfkffpP+H8VdyGL83zCFFnMIMD2IDWU+MjTf/ais63Zv/UiBL24pkZ18u3nypjN3uN2FdeDF4JNtnSVK6i1a+wE6wLmdSAfX8ovFbLhZMgAAPU3I3Fu5D/pSa6OjKshEcNy0m6KCKwQoT6cbDGsnMjd2sdE1Vc+KgkrBN3fMmrChdgi2Ig7CpkdGvQF0G/t53cwNatjp78FrNCHjpLcIAFw3QgfepiTiXQbXQ/jC5xkdM+5wIcSmB3rf3GKaUgaxnjk55GAXxrHwAFwOi+ltxSNPszH9vfIBLluThUdmQmvtCOCvEFZ5uuVuu94A5frS9BzOIzz7ylrqau3nHGaPjbT80XubnqZsHlOahsovbk1mu3ewvoitAVb0E+BBroNWeHT9BbA8Igh+sxwGM=
                                                            np0005541912.localdomain,192.168.122.106,np0005541912* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJZZ0KsiMflqlnr0GTYoucjExbwZ18yPSOiSsfRMt90v
                                                            np0005541912.localdomain,192.168.122.106,np0005541912* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGm4CXNWO0ZHMO4eJHc4n6NO7LQlY2+Ctp7F81Y3AEXQl3GIl2c/UCuL0O5ZJj6nEB654FSLAuOOifViFW8rlDc=
                                                             create=True mode=0644 path=/tmp/ansible.2rgthf84 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:21:56 np0005541914.localdomain sudo[142811]: pam_unix(sudo:session): session closed for user root
Dec 02 09:21:58 np0005541914.localdomain sudo[142903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xasowiwwqnvdklgtzvhazkhubhlgwopd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667317.9252636-379-35776353407636/AnsiballZ_command.py
Dec 02 09:21:58 np0005541914.localdomain sudo[142903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:21:58 np0005541914.localdomain python3.9[142905]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.2rgthf84' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:21:58 np0005541914.localdomain sudo[142903]: pam_unix(sudo:session): session closed for user root
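[annotation] The sequence above rebuilds /etc/ssh/ssh_known_hosts out-of-place: a temp file is created, the host keys for all six nodes are written into it as a managed block, and the result is installed over the live file before the staging file is removed. A condensed shell sketch of the same flow (file name and the cat command taken from the log; the blockinfile step shown as a placeholder):

    # 1. Stage a temporary file (ansible.builtin.tempfile)
    TMP=/tmp/ansible.2rgthf84
    # 2. Write the collected host keys into it (ansible.builtin.blockinfile; content elided)
    # 3. Install it as the system-wide known_hosts file, exactly as logged:
    cat "$TMP" > /etc/ssh/ssh_known_hosts
    # 4. Remove the staging file (ansible.builtin.file state=absent)
    rm -f "$TMP"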
Dec 02 09:22:00 np0005541914.localdomain sudo[142997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wgegqhqhkcikliynimqlazentztjuslc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667319.1739938-426-69071870239097/AnsiballZ_file.py
Dec 02 09:22:00 np0005541914.localdomain sudo[142997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:00 np0005541914.localdomain python3.9[142999]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.2rgthf84 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:22:00 np0005541914.localdomain sudo[142997]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:01 np0005541914.localdomain sshd[141931]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:22:01 np0005541914.localdomain systemd[1]: session-44.scope: Deactivated successfully.
Dec 02 09:22:01 np0005541914.localdomain systemd[1]: session-44.scope: Consumed 4.185s CPU time.
Dec 02 09:22:01 np0005541914.localdomain systemd-logind[760]: Session 44 logged out. Waiting for processes to exit.
Dec 02 09:22:01 np0005541914.localdomain systemd-logind[760]: Removed session 44.
Dec 02 09:22:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21489 DF PROTO=TCP SPT=55132 DPT=9882 SEQ=2040504519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53BC3B10000000001030307) 
Dec 02 09:22:07 np0005541914.localdomain sshd[143014]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:22:07 np0005541914.localdomain sshd[143014]: Accepted publickey for zuul from 192.168.122.30 port 53798 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:22:07 np0005541914.localdomain systemd-logind[760]: New session 45 of user zuul.
Dec 02 09:22:07 np0005541914.localdomain systemd[1]: Started Session 45 of User zuul.
Dec 02 09:22:07 np0005541914.localdomain sshd[143014]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:22:08 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31649 DF PROTO=TCP SPT=46550 DPT=9100 SEQ=1511025377 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53BD6870000000001030307) 
Dec 02 09:22:08 np0005541914.localdomain sshd[143063]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:22:09 np0005541914.localdomain sshd[143063]: Invalid user ubuntu from 45.148.10.240 port 44710
Dec 02 09:22:09 np0005541914.localdomain sshd[143063]: Connection closed by invalid user ubuntu 45.148.10.240 port 44710 [preauth]
Dec 02 09:22:09 np0005541914.localdomain python3.9[143109]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:22:10 np0005541914.localdomain sudo[143203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nqzwtwyhnwqeslvrlkbfmrcykjurjypz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667330.1285334-60-189141695878212/AnsiballZ_systemd.py
Dec 02 09:22:10 np0005541914.localdomain sudo[143203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:10 np0005541914.localdomain python3.9[143205]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 02 09:22:11 np0005541914.localdomain sudo[143203]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:11 np0005541914.localdomain sudo[143297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lpypehyeriaidkdhspfbxbsessgyjjxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667331.210125-83-104357889449666/AnsiballZ_systemd.py
Dec 02 09:22:11 np0005541914.localdomain sudo[143297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:11 np0005541914.localdomain python3.9[143299]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:22:11 np0005541914.localdomain sudo[143297]: pam_unix(sudo:session): session closed for user root
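[annotation] The two ansible.builtin.systemd calls above ensure the SSH daemon is both enabled at boot and currently running. A one-line-per-task shell equivalent, assuming systemctl:

    # Equivalent of the logged systemd module calls (enabled=True, then state=started)
    systemctl enable sshd
    systemctl start sshd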
Dec 02 09:22:12 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23805 DF PROTO=TCP SPT=52734 DPT=9105 SEQ=2099678045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53BE4C70000000001030307) 
Dec 02 09:22:12 np0005541914.localdomain sudo[143390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oavzudcednzpqxrnhlkpeffrculudapn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667332.0011234-109-208289431233869/AnsiballZ_command.py
Dec 02 09:22:12 np0005541914.localdomain sudo[143390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:12 np0005541914.localdomain python3.9[143392]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:22:12 np0005541914.localdomain sudo[143390]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:13 np0005541914.localdomain sudo[143483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymurjkjqyzzsvkqutbmozjolyyeqqton ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667332.7460268-133-97425854787223/AnsiballZ_stat.py
Dec 02 09:22:13 np0005541914.localdomain sudo[143483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:13 np0005541914.localdomain python3.9[143485]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:22:13 np0005541914.localdomain sudo[143483]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:13 np0005541914.localdomain sudo[143577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjikqkhfwlzbzzsyqeyxlmwqdleyuqvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667333.5041845-157-121694788322163/AnsiballZ_command.py
Dec 02 09:22:13 np0005541914.localdomain sudo[143577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:13 np0005541914.localdomain python3.9[143579]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:22:13 np0005541914.localdomain sudo[143577]: pam_unix(sudo:session): session closed for user root
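[annotation] Having validated the ruleset earlier, the run now loads it in two steps, gated on the edpm-rules.nft.changed marker checked just above: the chain definitions are applied first, then the flush/rules/update-jump fragments are piped into nft. The commands as logged:

    # Step 1: (re)create the EDPM chains
    nft -f /etc/nftables/edpm-chains.nft
    # Step 2: flush and reload the rules and update-jumps from stdin
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f -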
Dec 02 09:22:14 np0005541914.localdomain sudo[143672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mwffzffpqdfdmlsnofhvfvhdvpbnmhvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667334.163164-181-169654102925253/AnsiballZ_file.py
Dec 02 09:22:14 np0005541914.localdomain sudo[143672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:14 np0005541914.localdomain python3.9[143674]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:22:14 np0005541914.localdomain sudo[143672]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:15 np0005541914.localdomain sshd[143014]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:22:15 np0005541914.localdomain systemd[1]: session-45.scope: Deactivated successfully.
Dec 02 09:22:15 np0005541914.localdomain systemd[1]: session-45.scope: Consumed 3.893s CPU time.
Dec 02 09:22:15 np0005541914.localdomain systemd-logind[760]: Session 45 logged out. Waiting for processes to exit.
Dec 02 09:22:15 np0005541914.localdomain systemd-logind[760]: Removed session 45.
Dec 02 09:22:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2060 DF PROTO=TCP SPT=60334 DPT=9102 SEQ=4163297219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53BF40E0000000001030307) 
Dec 02 09:22:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2061 DF PROTO=TCP SPT=60334 DPT=9102 SEQ=4163297219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53BF8220000000001030307) 
Dec 02 09:22:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2062 DF PROTO=TCP SPT=60334 DPT=9102 SEQ=4163297219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C00230000000001030307) 
Dec 02 09:22:20 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36972 DF PROTO=TCP SPT=59186 DPT=9101 SEQ=2155726346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C03D00000000001030307) 
Dec 02 09:22:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36973 DF PROTO=TCP SPT=59186 DPT=9101 SEQ=2155726346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C07E20000000001030307) 
Dec 02 09:22:21 np0005541914.localdomain sshd[143689]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:22:21 np0005541914.localdomain sshd[143689]: Accepted publickey for zuul from 192.168.122.30 port 53108 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:22:21 np0005541914.localdomain systemd-logind[760]: New session 46 of user zuul.
Dec 02 09:22:21 np0005541914.localdomain systemd[1]: Started Session 46 of User zuul.
Dec 02 09:22:21 np0005541914.localdomain sshd[143689]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:22:22 np0005541914.localdomain python3.9[143782]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:22:23 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36974 DF PROTO=TCP SPT=59186 DPT=9101 SEQ=2155726346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C0FE30000000001030307) 
Dec 02 09:22:23 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2063 DF PROTO=TCP SPT=60334 DPT=9102 SEQ=4163297219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C0FE30000000001030307) 
Dec 02 09:22:23 np0005541914.localdomain sudo[143876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-myepzqpumhsqoaepfyzsfrircpjigjdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667343.100668-66-196764938670989/AnsiballZ_setup.py
Dec 02 09:22:23 np0005541914.localdomain sudo[143876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:23 np0005541914.localdomain python3.9[143878]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:22:23 np0005541914.localdomain sudo[143876]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:24 np0005541914.localdomain sudo[143930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vwuqabwkmlqciovutoioffrmthvphxqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667343.100668-66-196764938670989/AnsiballZ_dnf.py
Dec 02 09:22:24 np0005541914.localdomain sudo[143930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:24 np0005541914.localdomain python3.9[143932]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 02 09:22:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36975 DF PROTO=TCP SPT=59186 DPT=9101 SEQ=2155726346 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C1FA20000000001030307) 
Dec 02 09:22:27 np0005541914.localdomain sudo[143930]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:28 np0005541914.localdomain python3.9[144024]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:22:30 np0005541914.localdomain sudo[144115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ckewvjlrnrtcslpkwwrsiexygvccjdac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667350.1920044-127-47643356989129/AnsiballZ_file.py
Dec 02 09:22:30 np0005541914.localdomain sudo[144115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:30 np0005541914.localdomain python3.9[144117]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:22:30 np0005541914.localdomain sudo[144115]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:31 np0005541914.localdomain sudo[144207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-acmdjigzpcpsdiiappctfyqhejgrltxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667350.974897-151-236638610206903/AnsiballZ_file.py
Dec 02 09:22:31 np0005541914.localdomain sudo[144207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:31 np0005541914.localdomain python3.9[144209]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:22:31 np0005541914.localdomain sudo[144207]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2064 DF PROTO=TCP SPT=60334 DPT=9102 SEQ=4163297219 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C31220000000001030307) 
Dec 02 09:22:32 np0005541914.localdomain sudo[144299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wimwljlcxxbvndcsdkhvioxbsrvbceqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667351.9423342-175-159256793690097/AnsiballZ_lineinfile.py
Dec 02 09:22:32 np0005541914.localdomain sudo[144299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:32 np0005541914.localdomain python3.9[144301]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated
                                                            Core libraries or services have been updated since boot-up:
                                                              * systemd
                                                            
                                                            Reboot is required to fully utilize these updates.
                                                            More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:22:32 np0005541914.localdomain sudo[144299]: pam_unix(sudo:session): session closed for user root
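[annotation] The needs-restarting check above (from yum-utils, installed just before) feeds the reboot-required bookkeeping under /var/lib/openstack/reboot_required/: the directory and a marker file are created, and the command's output, which here reports that systemd has been updated since boot, is written into the marker. A rough shell equivalent of the logged tasks (an approximation of the file/lineinfile modules, not the exact mechanism):

    # Record whether the host needs a reboot after package updates
    mkdir -p /var/lib/openstack/reboot_required/
    touch /var/lib/openstack/reboot_required/needs_restarting
    chmod 0600 /var/lib/openstack/reboot_required/needs_restarting
    needs-restarting -r >> /var/lib/openstack/reboot_required/needs_restarting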
Dec 02 09:22:33 np0005541914.localdomain python3.9[144391]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 02 09:22:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25881 DF PROTO=TCP SPT=36800 DPT=9882 SEQ=2158992548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C38E10000000001030307) 
Dec 02 09:22:34 np0005541914.localdomain python3.9[144481]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:22:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25882 DF PROTO=TCP SPT=36800 DPT=9882 SEQ=2158992548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C3CE20000000001030307) 
Dec 02 09:22:34 np0005541914.localdomain python3.9[144573]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:22:35 np0005541914.localdomain sshd[143689]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:22:35 np0005541914.localdomain systemd[1]: session-46.scope: Deactivated successfully.
Dec 02 09:22:35 np0005541914.localdomain systemd[1]: session-46.scope: Consumed 9.210s CPU time.
Dec 02 09:22:35 np0005541914.localdomain systemd-logind[760]: Session 46 logged out. Waiting for processes to exit.
Dec 02 09:22:35 np0005541914.localdomain systemd-logind[760]: Removed session 46.
Dec 02 09:22:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25883 DF PROTO=TCP SPT=36800 DPT=9882 SEQ=2158992548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C44E20000000001030307) 
Dec 02 09:22:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25884 DF PROTO=TCP SPT=36800 DPT=9882 SEQ=2158992548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C54A20000000001030307) 
Dec 02 09:22:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53566 DF PROTO=TCP SPT=41358 DPT=9105 SEQ=1471419076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C5DE20000000001030307) 
Dec 02 09:22:43 np0005541914.localdomain sshd[144588]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:22:43 np0005541914.localdomain sshd[144588]: Accepted publickey for zuul from 192.168.122.30 port 40256 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:22:43 np0005541914.localdomain systemd-logind[760]: New session 47 of user zuul.
Dec 02 09:22:43 np0005541914.localdomain systemd[1]: Started Session 47 of User zuul.
Dec 02 09:22:43 np0005541914.localdomain sshd[144588]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:22:44 np0005541914.localdomain python3.9[144681]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:22:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44548 DF PROTO=TCP SPT=54310 DPT=9102 SEQ=2725746045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C693E0000000001030307) 
Dec 02 09:22:46 np0005541914.localdomain sudo[144775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qqlvcklwouilmnsdmwzooqkutlrlnxkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667366.0200946-158-34200001191988/AnsiballZ_file.py
Dec 02 09:22:46 np0005541914.localdomain sudo[144775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:46 np0005541914.localdomain python3.9[144777]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:22:46 np0005541914.localdomain sudo[144775]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:47 np0005541914.localdomain sudo[144867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lcrxrzdxvinzdmbokocbbimvbktknmxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667366.8549404-186-222855213792009/AnsiballZ_stat.py
Dec 02 09:22:47 np0005541914.localdomain sudo[144867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:47 np0005541914.localdomain python3.9[144869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:22:47 np0005541914.localdomain sudo[144867]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:48 np0005541914.localdomain sudo[144940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ckfovcfvabfabivhbkupkuszsobsgrff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667366.8549404-186-222855213792009/AnsiballZ_copy.py
Dec 02 09:22:48 np0005541914.localdomain sudo[144940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:48 np0005541914.localdomain python3.9[144942]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667366.8549404-186-222855213792009/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:22:48 np0005541914.localdomain sudo[144940]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:48 np0005541914.localdomain sudo[145032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utfopjtrbncuxlhtgvkwnoxubiygrsod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667368.545942-233-148129876324842/AnsiballZ_file.py
Dec 02 09:22:48 np0005541914.localdomain sudo[145032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:49 np0005541914.localdomain python3.9[145034]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:22:49 np0005541914.localdomain sudo[145032]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25885 DF PROTO=TCP SPT=36800 DPT=9882 SEQ=2158992548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C75230000000001030307) 
Dec 02 09:22:49 np0005541914.localdomain sudo[145124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vczttwudxdjhquhocsuwjswblwocsklo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667369.1697154-259-139233647535715/AnsiballZ_stat.py
Dec 02 09:22:49 np0005541914.localdomain sudo[145124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:49 np0005541914.localdomain python3.9[145126]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:22:49 np0005541914.localdomain sudo[145124]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:50 np0005541914.localdomain sudo[145197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hcjzsttazxaijabhouhrbyaouyofnses ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667369.1697154-259-139233647535715/AnsiballZ_copy.py
Dec 02 09:22:50 np0005541914.localdomain sudo[145197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:50 np0005541914.localdomain python3.9[145199]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667369.1697154-259-139233647535715/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:22:50 np0005541914.localdomain sudo[145197]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:51 np0005541914.localdomain sudo[145289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nekyjfcogqastadnnpbwcjozynqikerq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667370.8287783-310-9956103665439/AnsiballZ_file.py
Dec 02 09:22:51 np0005541914.localdomain sudo[145289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:51 np0005541914.localdomain python3.9[145291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:22:51 np0005541914.localdomain sudo[145289]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:51 np0005541914.localdomain chronyd[136655]: Selected source 162.159.200.1 (pool.ntp.org)
Dec 02 09:22:52 np0005541914.localdomain sudo[145381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mtybvfipcxfsbvvcehtqqivjzioxzqpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667371.4708521-335-200761063864217/AnsiballZ_stat.py
Dec 02 09:22:52 np0005541914.localdomain sudo[145381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:52 np0005541914.localdomain python3.9[145383]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:22:52 np0005541914.localdomain sudo[145381]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:52 np0005541914.localdomain sudo[145454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdfgylzoqgfkzlrjotgajbknzqjnjcfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667371.4708521-335-200761063864217/AnsiballZ_copy.py
Dec 02 09:22:52 np0005541914.localdomain sudo[145454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:52 np0005541914.localdomain python3.9[145456]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667371.4708521-335-200761063864217/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:22:52 np0005541914.localdomain sudo[145454]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44551 DF PROTO=TCP SPT=54310 DPT=9102 SEQ=2725746045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C85220000000001030307) 
Dec 02 09:22:53 np0005541914.localdomain sudo[145546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-awbsyoirnmppdrzqlutittaoolncdyki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667373.1548905-387-48655253605930/AnsiballZ_file.py
Dec 02 09:22:53 np0005541914.localdomain sudo[145546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:53 np0005541914.localdomain python3.9[145548]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:22:53 np0005541914.localdomain sudo[145546]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:54 np0005541914.localdomain sudo[145638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-svruwkmyhazszhoteiqzgrpaaxwnkqan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667373.923136-411-22533227368450/AnsiballZ_stat.py
Dec 02 09:22:54 np0005541914.localdomain sudo[145638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:54 np0005541914.localdomain python3.9[145640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:22:54 np0005541914.localdomain sudo[145638]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:54 np0005541914.localdomain sudo[145711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hfltqsoawyvnkjvkiqegqhjrjgmibidz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667373.923136-411-22533227368450/AnsiballZ_copy.py
Dec 02 09:22:54 np0005541914.localdomain sudo[145711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:54 np0005541914.localdomain python3.9[145713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667373.923136-411-22533227368450/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:22:54 np0005541914.localdomain sudo[145711]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:55 np0005541914.localdomain sudo[145803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwsxzztijsgwcnzpautqbccmuehsgrpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667375.148849-460-266166350809777/AnsiballZ_file.py
Dec 02 09:22:55 np0005541914.localdomain sudo[145803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:55 np0005541914.localdomain sudo[145806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:22:55 np0005541914.localdomain sudo[145806]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:22:55 np0005541914.localdomain sudo[145806]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:55 np0005541914.localdomain python3.9[145805]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:22:55 np0005541914.localdomain sudo[145803]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:55 np0005541914.localdomain sudo[145821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:22:55 np0005541914.localdomain sudo[145821]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:22:56 np0005541914.localdomain sudo[145938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-flxyfmlgikwubjgfslvzezmjvhdjxmev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667375.8353608-486-20674791776152/AnsiballZ_stat.py
Dec 02 09:22:56 np0005541914.localdomain sudo[145938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:56 np0005541914.localdomain python3.9[145942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:22:56 np0005541914.localdomain sudo[145938]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:56 np0005541914.localdomain sudo[145821]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:56 np0005541914.localdomain sudo[146030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vtkzdrrxusdqxedibjejmljuvzdathdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667375.8353608-486-20674791776152/AnsiballZ_copy.py
Dec 02 09:22:56 np0005541914.localdomain sudo[146030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:56 np0005541914.localdomain python3.9[146032]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667375.8353608-486-20674791776152/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:22:56 np0005541914.localdomain sudo[146030]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:57 np0005541914.localdomain sudo[146060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:22:57 np0005541914.localdomain sudo[146060]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:22:57 np0005541914.localdomain sudo[146060]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54398 DF PROTO=TCP SPT=46714 DPT=9101 SEQ=325028808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53C94E30000000001030307) 
Dec 02 09:22:57 np0005541914.localdomain sudo[146137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efkgeddsjlbupziagxtjqzrvmolrbkss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667377.0909166-533-78713901450093/AnsiballZ_file.py
Dec 02 09:22:57 np0005541914.localdomain sudo[146137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:57 np0005541914.localdomain python3.9[146139]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:22:57 np0005541914.localdomain sudo[146137]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:58 np0005541914.localdomain sudo[146229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wpyoyujfgijergydlftmyqigjgcrnula ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667377.7392528-559-167024099175096/AnsiballZ_stat.py
Dec 02 09:22:58 np0005541914.localdomain sudo[146229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:58 np0005541914.localdomain python3.9[146231]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:22:58 np0005541914.localdomain sudo[146229]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:58 np0005541914.localdomain sudo[146302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yrwknncwadrfcnjpmyyzeyjpkfpnfbhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667377.7392528-559-167024099175096/AnsiballZ_copy.py
Dec 02 09:22:58 np0005541914.localdomain sudo[146302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:58 np0005541914.localdomain python3.9[146304]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667377.7392528-559-167024099175096/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:22:58 np0005541914.localdomain sudo[146302]: pam_unix(sudo:session): session closed for user root
Dec 02 09:22:59 np0005541914.localdomain sudo[146394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sswxkuhcyuokvcbvqqjecozeiyeyuevs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667379.024367-607-136701241065360/AnsiballZ_file.py
Dec 02 09:22:59 np0005541914.localdomain sudo[146394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:22:59 np0005541914.localdomain python3.9[146396]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:22:59 np0005541914.localdomain sudo[146394]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:00 np0005541914.localdomain sudo[146486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqrczqjuihwdllqzxteepdvltfkyiojk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667379.6445596-631-4177287125721/AnsiballZ_stat.py
Dec 02 09:23:00 np0005541914.localdomain sudo[146486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:00 np0005541914.localdomain python3.9[146488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:23:00 np0005541914.localdomain sudo[146486]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:00 np0005541914.localdomain sudo[146559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxlyoqjifkuzjugjltnyypbredpwoxxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667379.6445596-631-4177287125721/AnsiballZ_copy.py
Dec 02 09:23:00 np0005541914.localdomain sudo[146559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:01 np0005541914.localdomain python3.9[146561]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667379.6445596-631-4177287125721/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:23:01 np0005541914.localdomain sudo[146559]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44552 DF PROTO=TCP SPT=54310 DPT=9102 SEQ=2725746045 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CA5230000000001030307) 
Dec 02 09:23:01 np0005541914.localdomain sudo[146651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhfbhhatztdcivftqrqdpseafnmjhdud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667381.3726516-684-152701376460238/AnsiballZ_file.py
Dec 02 09:23:01 np0005541914.localdomain sudo[146651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:01 np0005541914.localdomain python3.9[146653]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:23:01 np0005541914.localdomain sudo[146651]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:02 np0005541914.localdomain sudo[146743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-awrbkzxvcybnucafimuxspqvnhyyndta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667382.597127-702-113803606209010/AnsiballZ_stat.py
Dec 02 09:23:02 np0005541914.localdomain sudo[146743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:03 np0005541914.localdomain python3.9[146745]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:23:03 np0005541914.localdomain sudo[146743]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:03 np0005541914.localdomain sudo[146816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lrewnhcwcfvgmiraqgbccywwqoiarrgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667382.597127-702-113803606209010/AnsiballZ_copy.py
Dec 02 09:23:03 np0005541914.localdomain sudo[146816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:03 np0005541914.localdomain python3.9[146818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667382.597127-702-113803606209010/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=73226dd0fbcefd6bca2e777d65fae037e6bf10fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:23:03 np0005541914.localdomain sudo[146816]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60192 DF PROTO=TCP SPT=49492 DPT=9882 SEQ=96069256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CAE110000000001030307) 
Dec 02 09:23:03 np0005541914.localdomain sshd[144588]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:23:03 np0005541914.localdomain systemd-logind[760]: Session 47 logged out. Waiting for processes to exit.
Dec 02 09:23:03 np0005541914.localdomain systemd[1]: session-47.scope: Deactivated successfully.
Dec 02 09:23:03 np0005541914.localdomain systemd[1]: session-47.scope: Consumed 12.454s CPU time.
Dec 02 09:23:03 np0005541914.localdomain systemd-logind[760]: Removed session 47.
Dec 02 09:23:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60193 DF PROTO=TCP SPT=49492 DPT=9882 SEQ=96069256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CB2230000000001030307) 
Dec 02 09:23:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60194 DF PROTO=TCP SPT=49492 DPT=9882 SEQ=96069256 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CBA220000000001030307) 
Dec 02 09:23:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13741 DF PROTO=TCP SPT=60270 DPT=9100 SEQ=2993189913 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CC7220000000001030307) 
Dec 02 09:23:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10988 DF PROTO=TCP SPT=50888 DPT=9105 SEQ=2037304202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CD3220000000001030307) 
Dec 02 09:23:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1105 DF PROTO=TCP SPT=59208 DPT=9102 SEQ=3413441333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CDE6E0000000001030307) 
Dec 02 09:23:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1107 DF PROTO=TCP SPT=59208 DPT=9102 SEQ=3413441333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CEA620000000001030307) 
Dec 02 09:23:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54400 DF PROTO=TCP SPT=46714 DPT=9101 SEQ=325028808 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53CF5220000000001030307) 
Dec 02 09:23:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40391 DF PROTO=TCP SPT=57954 DPT=9101 SEQ=1057897481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D09E20000000001030307) 
Dec 02 09:23:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1109 DF PROTO=TCP SPT=59208 DPT=9102 SEQ=3413441333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D1B230000000001030307) 
Dec 02 09:23:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50080 DF PROTO=TCP SPT=59806 DPT=9882 SEQ=629235110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D23410000000001030307) 
Dec 02 09:23:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50081 DF PROTO=TCP SPT=59806 DPT=9882 SEQ=629235110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D27620000000001030307) 
Dec 02 09:23:35 np0005541914.localdomain sshd[146833]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:23:35 np0005541914.localdomain sshd[146833]: Accepted publickey for zuul from 192.168.122.30 port 53576 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:23:35 np0005541914.localdomain systemd-logind[760]: New session 48 of user zuul.
Dec 02 09:23:35 np0005541914.localdomain systemd[1]: Started Session 48 of User zuul.
Dec 02 09:23:35 np0005541914.localdomain sshd[146833]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:23:36 np0005541914.localdomain sudo[146926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dcruopzutjsfbixyhvjnvrboavxbwgae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667415.9983127-28-121327324453230/AnsiballZ_file.py
Dec 02 09:23:36 np0005541914.localdomain sudo[146926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:36 np0005541914.localdomain python3.9[146928]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:23:36 np0005541914.localdomain sudo[146926]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50082 DF PROTO=TCP SPT=59806 DPT=9882 SEQ=629235110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D2F620000000001030307) 
Dec 02 09:23:37 np0005541914.localdomain sudo[147018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qlafpcywbovbpuycfdkwgklvkngvhxdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667416.847182-64-197282402515176/AnsiballZ_stat.py
Dec 02 09:23:37 np0005541914.localdomain sudo[147018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:37 np0005541914.localdomain python3.9[147020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:23:37 np0005541914.localdomain sudo[147018]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:37 np0005541914.localdomain sudo[147091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qnjcnfdovbknzcbandxrsbdhzstimbws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667416.847182-64-197282402515176/AnsiballZ_copy.py
Dec 02 09:23:37 np0005541914.localdomain sudo[147091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:38 np0005541914.localdomain python3.9[147093]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667416.847182-64-197282402515176/.source.conf _original_basename=ceph.conf follow=False checksum=bb050c8012c4b6ce73dbd1d555a91a361a703a4d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:23:38 np0005541914.localdomain sudo[147091]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:38 np0005541914.localdomain sudo[147183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nbjopbshfpchoarbdtftzwekzwyzvtgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667418.173121-64-251895035122086/AnsiballZ_stat.py
Dec 02 09:23:38 np0005541914.localdomain sudo[147183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:38 np0005541914.localdomain python3.9[147185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:23:38 np0005541914.localdomain sudo[147183]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:38 np0005541914.localdomain sudo[147256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-egmciibjrzzxxamflrpboolvplegutrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667418.173121-64-251895035122086/AnsiballZ_copy.py
Dec 02 09:23:38 np0005541914.localdomain sudo[147256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:39 np0005541914.localdomain python3.9[147258]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667418.173121-64-251895035122086/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=55e6802793866e8195bd7dc6c06395cc4184e741 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:23:39 np0005541914.localdomain sudo[147256]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:39 np0005541914.localdomain sshd[146833]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:23:39 np0005541914.localdomain systemd[1]: session-48.scope: Deactivated successfully.
Dec 02 09:23:39 np0005541914.localdomain systemd[1]: session-48.scope: Consumed 2.154s CPU time.
Dec 02 09:23:39 np0005541914.localdomain systemd-logind[760]: Session 48 logged out. Waiting for processes to exit.
Dec 02 09:23:39 np0005541914.localdomain systemd-logind[760]: Removed session 48.
Dec 02 09:23:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9155 DF PROTO=TCP SPT=56848 DPT=9100 SEQ=3703839231 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D3D230000000001030307) 
Dec 02 09:23:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57403 DF PROTO=TCP SPT=56976 DPT=9105 SEQ=617448496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D48620000000001030307) 
Dec 02 09:23:45 np0005541914.localdomain sshd[147273]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:23:45 np0005541914.localdomain sshd[147273]: Accepted publickey for zuul from 192.168.122.30 port 36004 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:23:45 np0005541914.localdomain systemd-logind[760]: New session 49 of user zuul.
Dec 02 09:23:45 np0005541914.localdomain systemd[1]: Started Session 49 of User zuul.
Dec 02 09:23:45 np0005541914.localdomain sshd[147273]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:23:45 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53571 DF PROTO=TCP SPT=41358 DPT=9105 SEQ=1471419076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D53220000000001030307) 
Dec 02 09:23:46 np0005541914.localdomain python3.9[147366]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:23:47 np0005541914.localdomain sudo[147460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-spqkiahjnvqplbwhpivdcbfaveizxnii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667427.038376-64-19722511580999/AnsiballZ_file.py
Dec 02 09:23:47 np0005541914.localdomain sudo[147460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:47 np0005541914.localdomain python3.9[147462]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:23:47 np0005541914.localdomain sudo[147460]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:48 np0005541914.localdomain sudo[147552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-omxlyvjjylouprrnfxxrssyorwtmsmuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667427.7660217-64-88412240535918/AnsiballZ_file.py
Dec 02 09:23:48 np0005541914.localdomain sudo[147552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:48 np0005541914.localdomain python3.9[147554]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:23:48 np0005541914.localdomain sudo[147552]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:48 np0005541914.localdomain python3.9[147644]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:23:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50084 DF PROTO=TCP SPT=59806 DPT=9882 SEQ=629235110 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D5F220000000001030307) 
Dec 02 09:23:50 np0005541914.localdomain sudo[147734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mejvltoecnmtzblkdcgamqjqemybmzmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667429.8396466-133-204392865252984/AnsiballZ_seboolean.py
Dec 02 09:23:50 np0005541914.localdomain sudo[147734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:50 np0005541914.localdomain python3.9[147736]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 02 09:23:50 np0005541914.localdomain sudo[147734]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:51 np0005541914.localdomain sudo[147826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxpisimkrfqtvphxdzjhubmiwlbeezbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667431.6294074-163-104417413005520/AnsiballZ_setup.py
Dec 02 09:23:51 np0005541914.localdomain sudo[147826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:52 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40393 DF PROTO=TCP SPT=57954 DPT=9101 SEQ=1057897481 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D6B220000000001030307) 
Dec 02 09:23:52 np0005541914.localdomain python3.9[147828]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:23:52 np0005541914.localdomain sudo[147826]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:52 np0005541914.localdomain sudo[147880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fzitmvurqkbkphinavcnsgeoaghixitp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667431.6294074-163-104417413005520/AnsiballZ_dnf.py
Dec 02 09:23:52 np0005541914.localdomain sudo[147880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:53 np0005541914.localdomain python3.9[147882]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:23:56 np0005541914.localdomain sudo[147880]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:57 np0005541914.localdomain sudo[147974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ckxaoaldbdbkoclacarcdzopwsvbhfsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667436.5739813-199-154586758957159/AnsiballZ_systemd.py
Dec 02 09:23:57 np0005541914.localdomain sudo[147974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44036 DF PROTO=TCP SPT=57400 DPT=9101 SEQ=3631023866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D7F230000000001030307) 
Dec 02 09:23:57 np0005541914.localdomain sudo[147977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:23:57 np0005541914.localdomain sudo[147977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:23:57 np0005541914.localdomain sudo[147977]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:57 np0005541914.localdomain sudo[147992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:23:57 np0005541914.localdomain sudo[147992]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:23:57 np0005541914.localdomain python3.9[147976]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 02 09:23:57 np0005541914.localdomain sudo[147974]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57406 DF PROTO=TCP SPT=56976 DPT=9105 SEQ=617448496 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D81220000000001030307) 
Dec 02 09:23:58 np0005541914.localdomain sudo[148128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xjmhphfjbalcncaemiqxqdwyaaefzrni ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764667437.6799688-224-141783426829523/AnsiballZ_edpm_nftables_snippet.py
Dec 02 09:23:58 np0005541914.localdomain sudo[148128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:58 np0005541914.localdomain sudo[147992]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:58 np0005541914.localdomain python3[148134]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                                            rule:
                                                              proto: udp
                                                              dport: 4789
                                                          - rule_name: 119 neutron geneve networks
                                                            rule:
                                                              proto: udp
                                                              dport: 6081
                                                              state: ["UNTRACKED"]
                                                          - rule_name: 120 neutron geneve networks no conntrack
                                                            rule:
                                                              proto: udp
                                                              dport: 6081
                                                              table: raw
                                                              chain: OUTPUT
                                                              jump: NOTRACK
                                                              action: append
                                                              state: []
                                                          - rule_name: 121 neutron geneve networks no conntrack
                                                            rule:
                                                              proto: udp
                                                              dport: 6081
                                                              table: raw
                                                              chain: PREROUTING
                                                              jump: NOTRACK
                                                              action: append
                                                              state: []
                                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 02 09:23:58 np0005541914.localdomain sudo[148128]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:58 np0005541914.localdomain sudo[148194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:23:58 np0005541914.localdomain sudo[148194]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:23:58 np0005541914.localdomain sudo[148194]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:58 np0005541914.localdomain sudo[148239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqzwgyhhtuwvjmeqzdhmtteftugnlrtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667438.5421028-250-48129603181731/AnsiballZ_file.py
Dec 02 09:23:58 np0005541914.localdomain sudo[148239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:59 np0005541914.localdomain python3.9[148241]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:23:59 np0005541914.localdomain sudo[148239]: pam_unix(sudo:session): session closed for user root
Dec 02 09:23:59 np0005541914.localdomain sudo[148331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sywamjpposldbftsuabvkjpeuaqwycwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667439.1920433-275-251445230093643/AnsiballZ_stat.py
Dec 02 09:23:59 np0005541914.localdomain sudo[148331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:23:59 np0005541914.localdomain python3.9[148333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:23:59 np0005541914.localdomain sudo[148331]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:00 np0005541914.localdomain sudo[148379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tgecqwaengrtikhmfwrumcaostkdaads ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667439.1920433-275-251445230093643/AnsiballZ_file.py
Dec 02 09:24:00 np0005541914.localdomain sudo[148379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:00 np0005541914.localdomain python3.9[148381]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:00 np0005541914.localdomain sudo[148379]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:01 np0005541914.localdomain sudo[148471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lesplomvatxdpyssfqkclcxbksugrblr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667440.8558831-311-244850521855316/AnsiballZ_stat.py
Dec 02 09:24:01 np0005541914.localdomain sudo[148471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25563 DF PROTO=TCP SPT=60764 DPT=9102 SEQ=2472583394 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D8F230000000001030307) 
Dec 02 09:24:01 np0005541914.localdomain python3.9[148473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:01 np0005541914.localdomain sudo[148471]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:01 np0005541914.localdomain sudo[148519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xljhqmelyegnbntypfroapkuypngazhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667440.8558831-311-244850521855316/AnsiballZ_file.py
Dec 02 09:24:01 np0005541914.localdomain sudo[148519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:01 np0005541914.localdomain python3.9[148521]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.h3scoodm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:01 np0005541914.localdomain sudo[148519]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:02 np0005541914.localdomain sudo[148611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wrromksmheqbhqqugjusdfgycwjqzbmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667442.2200677-347-190530188510274/AnsiballZ_stat.py
Dec 02 09:24:02 np0005541914.localdomain sudo[148611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:02 np0005541914.localdomain python3.9[148613]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:02 np0005541914.localdomain sudo[148611]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:02 np0005541914.localdomain sudo[148659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-crvuotxobtfnmqgsmbihibnxrfgpnjor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667442.2200677-347-190530188510274/AnsiballZ_file.py
Dec 02 09:24:02 np0005541914.localdomain sudo[148659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:03 np0005541914.localdomain python3.9[148661]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:03 np0005541914.localdomain sudo[148659]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:03 np0005541914.localdomain sudo[148751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eladbbyjarpuabpdrzjfkkhfbcbjgsmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667443.3673415-386-177334632289509/AnsiballZ_command.py
Dec 02 09:24:03 np0005541914.localdomain sudo[148751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:03 np0005541914.localdomain python3.9[148753]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:24:03 np0005541914.localdomain sudo[148751]: pam_unix(sudo:session): session closed for user root
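[annotation] The ruleset dump invoked above can be reproduced by hand; pretty-printing with jq is an addition for readability, not something the playbook does:
    # dump the live ruleset as JSON, as the task above does
    nft -j list ruleset | jq .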
Dec 02 09:24:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39208 DF PROTO=TCP SPT=42224 DPT=9882 SEQ=141486871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53D9C620000000001030307) 
Dec 02 09:24:05 np0005541914.localdomain sudo[148844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xuvnogyrdcbrmsgmrraaiwervzcbxrwy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764667445.0864801-410-152323016742829/AnsiballZ_edpm_nftables_from_files.py
Dec 02 09:24:05 np0005541914.localdomain sudo[148844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:05 np0005541914.localdomain python3[148846]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 02 09:24:05 np0005541914.localdomain sudo[148844]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:06 np0005541914.localdomain sudo[148936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-segyckvmiealekgrioacyjnqusiuzvtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667445.8260493-434-174312310178595/AnsiballZ_stat.py
Dec 02 09:24:06 np0005541914.localdomain sudo[148936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:06 np0005541914.localdomain python3.9[148938]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:06 np0005541914.localdomain sudo[148936]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39209 DF PROTO=TCP SPT=42224 DPT=9882 SEQ=141486871 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DA4620000000001030307) 
Dec 02 09:24:06 np0005541914.localdomain sudo[149011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ojxtcunucosonrgdqquegjbvelacemnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667445.8260493-434-174312310178595/AnsiballZ_copy.py
Dec 02 09:24:06 np0005541914.localdomain sudo[149011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:06 np0005541914.localdomain python3.9[149013]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667445.8260493-434-174312310178595/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:07 np0005541914.localdomain sudo[149011]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:07 np0005541914.localdomain sudo[149103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rvvgouakjvxkfcnljkpaejukskgmqkoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667447.1363156-479-122724017332538/AnsiballZ_stat.py
Dec 02 09:24:07 np0005541914.localdomain sudo[149103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:07 np0005541914.localdomain python3.9[149105]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:07 np0005541914.localdomain sudo[149103]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:07 np0005541914.localdomain sudo[149178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ertcshxbqfxcjdtcwyfqjucbgssctbvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667447.1363156-479-122724017332538/AnsiballZ_copy.py
Dec 02 09:24:07 np0005541914.localdomain sudo[149178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:08 np0005541914.localdomain python3.9[149180]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667447.1363156-479-122724017332538/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:08 np0005541914.localdomain sudo[149178]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:08 np0005541914.localdomain sudo[149270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmdcjkytkkgwhlhexmygvkyptyxyypua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667448.3327696-523-261271181287278/AnsiballZ_stat.py
Dec 02 09:24:08 np0005541914.localdomain sudo[149270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:08 np0005541914.localdomain python3.9[149272]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:08 np0005541914.localdomain sudo[149270]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:09 np0005541914.localdomain sudo[149345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bhszxbfgrdpknbkteikxabkuyaeqxrgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667448.3327696-523-261271181287278/AnsiballZ_copy.py
Dec 02 09:24:09 np0005541914.localdomain sudo[149345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:09 np0005541914.localdomain python3.9[149347]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667448.3327696-523-261271181287278/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:09 np0005541914.localdomain sudo[149345]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61248 DF PROTO=TCP SPT=58590 DPT=9100 SEQ=3499258261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DB3220000000001030307) 
Dec 02 09:24:10 np0005541914.localdomain sudo[149437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dmiatqdvdtdiemsjomjeeaatfplewrqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667449.9358726-569-55813683967494/AnsiballZ_stat.py
Dec 02 09:24:10 np0005541914.localdomain sudo[149437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:11 np0005541914.localdomain python3.9[149439]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:11 np0005541914.localdomain sudo[149437]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:11 np0005541914.localdomain sudo[149512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jckvovxhkufuqjllpeyfvbpypzzvxjgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667449.9358726-569-55813683967494/AnsiballZ_copy.py
Dec 02 09:24:11 np0005541914.localdomain sudo[149512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:11 np0005541914.localdomain python3.9[149514]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667449.9358726-569-55813683967494/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:11 np0005541914.localdomain sudo[149512]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:12 np0005541914.localdomain sudo[149604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cjvxnsfhiwuhnmholkawekknprwwzlrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667451.8546398-614-227539375211042/AnsiballZ_stat.py
Dec 02 09:24:12 np0005541914.localdomain sudo[149604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:12 np0005541914.localdomain python3.9[149606]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:12 np0005541914.localdomain sudo[149604]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16634 DF PROTO=TCP SPT=51906 DPT=9105 SEQ=863709002 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DBDA20000000001030307) 
Dec 02 09:24:13 np0005541914.localdomain sudo[149679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ppsuvotwqrfzovivdodpewybcwjyshut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667451.8546398-614-227539375211042/AnsiballZ_copy.py
Dec 02 09:24:13 np0005541914.localdomain sudo[149679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:13 np0005541914.localdomain python3.9[149681]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667451.8546398-614-227539375211042/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:13 np0005541914.localdomain sudo[149679]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:13 np0005541914.localdomain sudo[149771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-igxappyajtzjrvpxvmlgrqtcmtvddpqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667453.6744642-659-262154519993346/AnsiballZ_file.py
Dec 02 09:24:13 np0005541914.localdomain sudo[149771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:14 np0005541914.localdomain python3.9[149773]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:14 np0005541914.localdomain sudo[149771]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:14 np0005541914.localdomain sudo[149863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhofpkowkpnsqezpdxpvjcnohzfwhxmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667454.2963011-682-209713756981883/AnsiballZ_command.py
Dec 02 09:24:14 np0005541914.localdomain sudo[149863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:14 np0005541914.localdomain python3.9[149865]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:24:14 np0005541914.localdomain sudo[149863]: pam_unix(sudo:session): session closed for user root
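[annotation] The validation step above concatenates the generated fragments and feeds them to nft in check-only mode; run manually it looks like this (paths taken from the logged command):
    # -c parses and validates the combined ruleset without applying it
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -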
Dec 02 09:24:15 np0005541914.localdomain sudo[149958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cazzmbenvwjbmkmghhqwwhoiscxeyhsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667454.9377792-707-113069574808920/AnsiballZ_blockinfile.py
Dec 02 09:24:15 np0005541914.localdomain sudo[149958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:15 np0005541914.localdomain python3.9[149960]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:15 np0005541914.localdomain sudo[149958]: pam_unix(sudo:session): session closed for user root
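[annotation] Based on the block, marker and marker_begin/marker_end parameters logged above, the managed block that blockinfile maintains in /etc/sysconfig/nftables.conf should look roughly like this (surrounding file content not shown; the file is validated with nft -c -f before being written):
    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK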
Dec 02 09:24:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1115 DF PROTO=TCP SPT=36154 DPT=9102 SEQ=2809519453 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DC8CE0000000001030307) 
Dec 02 09:24:16 np0005541914.localdomain sudo[150050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-okrevxiktjqrzquulelrlllgtyxkmjfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667455.8173199-734-226361155098658/AnsiballZ_command.py
Dec 02 09:24:16 np0005541914.localdomain sudo[150050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:16 np0005541914.localdomain python3.9[150052]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:24:16 np0005541914.localdomain sudo[150050]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:16 np0005541914.localdomain sudo[150143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kjesuncmybqbxpczqsfnigkijaclpnmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667456.4611795-758-35431366685998/AnsiballZ_stat.py
Dec 02 09:24:16 np0005541914.localdomain sudo[150143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:16 np0005541914.localdomain python3.9[150145]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:24:16 np0005541914.localdomain sudo[150143]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:17 np0005541914.localdomain sudo[150237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxdyjeizylvjawoftlvhmsfdhubmgshe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667457.102406-781-278581445293956/AnsiballZ_command.py
Dec 02 09:24:17 np0005541914.localdomain sudo[150237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:17 np0005541914.localdomain python3.9[150239]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:24:17 np0005541914.localdomain sudo[150237]: pam_unix(sudo:session): session closed for user root
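[annotation] Only after the chains file has been loaded (the nft -f /etc/nftables/edpm-chains.nft step above) are the flush, rule and jump-update fragments applied from stdin in one transaction; by hand:
    # flush and repopulate the edpm chains in a single nft transaction
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -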
Dec 02 09:24:17 np0005541914.localdomain sudo[150333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qsihlnycyfjloozonfghrarcwavsxqsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667457.7055273-805-219366056649197/AnsiballZ_file.py
Dec 02 09:24:17 np0005541914.localdomain sudo[150333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:18 np0005541914.localdomain python3.9[150335]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:18 np0005541914.localdomain sudo[150333]: pam_unix(sudo:session): session closed for user root
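[annotation] The edpm-rules.nft.changed file acts as a change marker: it was touched when the rules file was rewritten, its presence was checked before the reload above, and it is removed once the new rules are live. A rough shell equivalent of that gate (the playbook does this through separate stat/command/file tasks, not this script):
    # apply the staged rules only when the change marker exists, then clear it
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi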
Dec 02 09:24:18 np0005541914.localdomain sshd[150418]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:24:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1117 DF PROTO=TCP SPT=36154 DPT=9102 SEQ=2809519453 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DD4E20000000001030307) 
Dec 02 09:24:19 np0005541914.localdomain python3.9[150426]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:24:19 np0005541914.localdomain sshd[150418]: Invalid user ubuntu from 45.148.10.240 port 42466
Dec 02 09:24:19 np0005541914.localdomain sshd[150418]: Connection closed by invalid user ubuntu 45.148.10.240 port 42466 [preauth]
Dec 02 09:24:20 np0005541914.localdomain sudo[150518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbjxgdferygjdoyhwvzxjzeatokqqxet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667459.9417858-925-271463972894889/AnsiballZ_command.py
Dec 02 09:24:20 np0005541914.localdomain sudo[150518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:20 np0005541914.localdomain python3.9[150520]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005541914.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:80:ac:27:10" external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:24:20 np0005541914.localdomain ovs-vsctl[150521]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=np0005541914.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:80:ac:27:10 external_ids:ovn-encap-ip=172.19.0.108 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 02 09:24:20 np0005541914.localdomain sudo[150518]: pam_unix(sudo:session): session closed for user root
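[annotation] The ovs-vsctl call above records the OVN chassis configuration (bridge mappings, encapsulation, southbound DB endpoint) in the external_ids column of the Open_vSwitch table; it can be read back afterwards with a command like this (a verification step, not part of the playbook):
    # show the external_ids written by the task above
    ovs-vsctl get Open_vSwitch . external_ids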
Dec 02 09:24:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44038 DF PROTO=TCP SPT=57400 DPT=9101 SEQ=3631023866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DDF220000000001030307) 
Dec 02 09:24:21 np0005541914.localdomain sudo[150611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cuduqezpmwrszeolmqtzewusbknixqko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667460.6373389-953-259008700497156/AnsiballZ_command.py
Dec 02 09:24:21 np0005541914.localdomain sudo[150611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:21 np0005541914.localdomain python3.9[150613]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                                            ovs-vsctl show | grep -q "Manager"
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:24:22 np0005541914.localdomain sudo[150611]: pam_unix(sudo:session): session closed for user root
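[annotation] The follow-up check above only tests whether a Manager record is configured, relying on grep -q's exit status; an equivalent one-liner with an explicit message (the message text is an illustration):
    ovs-vsctl show | grep -q "Manager" && echo "manager configured" || echo "no manager configured"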
Dec 02 09:24:22 np0005541914.localdomain python3.9[150706]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:24:23 np0005541914.localdomain sudo[150798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rshdmjtropcffcaexdgbrgkwtdcragpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667463.718627-1007-96327105693699/AnsiballZ_file.py
Dec 02 09:24:23 np0005541914.localdomain sudo[150798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:24 np0005541914.localdomain python3.9[150801]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:24:24 np0005541914.localdomain sudo[150798]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:24 np0005541914.localdomain sudo[150891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xedcmocwwjiveqvpanvnxjxdsxwlnwrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667464.4490626-1030-110830580714759/AnsiballZ_stat.py
Dec 02 09:24:24 np0005541914.localdomain sudo[150891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:24 np0005541914.localdomain python3.9[150893]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:24 np0005541914.localdomain sudo[150891]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:25 np0005541914.localdomain sudo[150939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rrfigzfotunypcabjqcwbajxycyymxmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667464.4490626-1030-110830580714759/AnsiballZ_file.py
Dec 02 09:24:25 np0005541914.localdomain sudo[150939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:25 np0005541914.localdomain python3.9[150941]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:24:25 np0005541914.localdomain sudo[150939]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:25 np0005541914.localdomain sudo[151031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qpbdkyfeqcnaydqvpiczudiohmwvmmmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667465.4817815-1030-251192565396988/AnsiballZ_stat.py
Dec 02 09:24:25 np0005541914.localdomain sudo[151031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:25 np0005541914.localdomain python3.9[151033]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:25 np0005541914.localdomain sudo[151031]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:26 np0005541914.localdomain sudo[151079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pcmjnuxgmxuaoqdaclkjccfpegbhmkoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667465.4817815-1030-251192565396988/AnsiballZ_file.py
Dec 02 09:24:26 np0005541914.localdomain sudo[151079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:26 np0005541914.localdomain python3.9[151081]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:24:26 np0005541914.localdomain sudo[151079]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:26 np0005541914.localdomain sudo[151171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jculfkgspbdkycsezdvdsmchlyftqxzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667466.643715-1100-257203840983713/AnsiballZ_file.py
Dec 02 09:24:26 np0005541914.localdomain sudo[151171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:27 np0005541914.localdomain python3.9[151173]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:27 np0005541914.localdomain sudo[151171]: pam_unix(sudo:session): session closed for user root
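[annotation] The directory task above logs mode=420, which is just the decimal rendering of octal 0644 (an unquoted leading-zero mode in YAML is parsed as an octal integer). Quick check:
    # 420 decimal == 0644 octal, i.e. rw-r--r--
    printf '%o\n' 420               # prints 644
    python3 -c 'print(oct(420))'    # prints 0o644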
Dec 02 09:24:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57143 DF PROTO=TCP SPT=35940 DPT=9101 SEQ=3024191987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53DF4620000000001030307) 
Dec 02 09:24:27 np0005541914.localdomain sudo[151263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rxmjujiimjkvxxezunhhsklyzwefttew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667467.288546-1124-120473031065137/AnsiballZ_stat.py
Dec 02 09:24:27 np0005541914.localdomain sudo[151263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:27 np0005541914.localdomain python3.9[151265]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:27 np0005541914.localdomain sudo[151263]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:27 np0005541914.localdomain sudo[151311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hflyfxevwtyvtqoeraffpslwsdtrnspl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667467.288546-1124-120473031065137/AnsiballZ_file.py
Dec 02 09:24:27 np0005541914.localdomain sudo[151311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:28 np0005541914.localdomain python3.9[151313]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:28 np0005541914.localdomain sudo[151311]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:28 np0005541914.localdomain sudo[151403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxkbcjivrotqakcncxgcydsxkogpzukh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667468.349974-1160-132104439223531/AnsiballZ_stat.py
Dec 02 09:24:28 np0005541914.localdomain sudo[151403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:28 np0005541914.localdomain python3.9[151405]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:28 np0005541914.localdomain sudo[151403]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:29 np0005541914.localdomain sudo[151451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-esncitiqjgecrhuxzobmxymduhffcwmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667468.349974-1160-132104439223531/AnsiballZ_file.py
Dec 02 09:24:29 np0005541914.localdomain sudo[151451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:29 np0005541914.localdomain python3.9[151453]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:29 np0005541914.localdomain sudo[151451]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:29 np0005541914.localdomain sudo[151543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pnbwgsxogrpwffwglarnaaicdwdujyzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667469.4508128-1196-125835474722096/AnsiballZ_systemd.py
Dec 02 09:24:29 np0005541914.localdomain sudo[151543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:30 np0005541914.localdomain python3.9[151545]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:24:30 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:24:30 np0005541914.localdomain systemd-sysv-generator[151574]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:24:30 np0005541914.localdomain systemd-rc-local-generator[151569]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:24:30 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:24:30 np0005541914.localdomain sudo[151543]: pam_unix(sudo:session): session closed for user root
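[annotation] The ansible.builtin.systemd task above (daemon_reload=True, enabled=True, state=started) amounts to a daemon reload followed by enabling and starting the unit; the manual equivalent is:
    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service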
Dec 02 09:24:31 np0005541914.localdomain sudo[151672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xcmaxhbajiwptqikbzxejyvaqagpmecc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667471.0836656-1220-249157563474240/AnsiballZ_stat.py
Dec 02 09:24:31 np0005541914.localdomain sudo[151672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1119 DF PROTO=TCP SPT=36154 DPT=9102 SEQ=2809519453 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E05230000000001030307) 
Dec 02 09:24:31 np0005541914.localdomain python3.9[151674]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:31 np0005541914.localdomain sudo[151672]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:32 np0005541914.localdomain sudo[151720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oaxijxcqqpfykylhdwoxmgbcbygmlqjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667471.0836656-1220-249157563474240/AnsiballZ_file.py
Dec 02 09:24:32 np0005541914.localdomain sudo[151720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:32 np0005541914.localdomain python3.9[151722]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:32 np0005541914.localdomain sudo[151720]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:33 np0005541914.localdomain sudo[151812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uljqyluqurbceltxkqmidxnmuynrbfsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667472.8653655-1256-251092243507546/AnsiballZ_stat.py
Dec 02 09:24:33 np0005541914.localdomain sudo[151812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:33 np0005541914.localdomain python3.9[151814]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:33 np0005541914.localdomain sudo[151812]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:33 np0005541914.localdomain sudo[151860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vpmalliessvgynuknazkwxugbjzwddzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667472.8653655-1256-251092243507546/AnsiballZ_file.py
Dec 02 09:24:33 np0005541914.localdomain sudo[151860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63826 DF PROTO=TCP SPT=60950 DPT=9882 SEQ=2058759243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E0DA20000000001030307) 
Dec 02 09:24:33 np0005541914.localdomain python3.9[151862]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:33 np0005541914.localdomain sudo[151860]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:34 np0005541914.localdomain sudo[151952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-awukhuewlbpuhrzxpdglascnqqzpuyum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667474.3159792-1292-251712766577862/AnsiballZ_systemd.py
Dec 02 09:24:34 np0005541914.localdomain sudo[151952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63827 DF PROTO=TCP SPT=60950 DPT=9882 SEQ=2058759243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E11A20000000001030307) 
Dec 02 09:24:34 np0005541914.localdomain python3.9[151954]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:24:34 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:24:34 np0005541914.localdomain systemd-sysv-generator[151986]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:24:34 np0005541914.localdomain systemd-rc-local-generator[151983]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:24:34 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:24:35 np0005541914.localdomain systemd[1]: Starting Create netns directory...
Dec 02 09:24:35 np0005541914.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 02 09:24:35 np0005541914.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 02 09:24:35 np0005541914.localdomain systemd[1]: Finished Create netns directory.
Dec 02 09:24:35 np0005541914.localdomain sudo[151952]: pam_unix(sudo:session): session closed for user root
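[annotation] netns-placeholder is enabled and started the same way; the journal shows it running as a oneshot ("Starting Create netns directory" then "Deactivated successfully"), so checking it afterwards typically reports the unit as inactive (dead) with a successful last run:
    systemctl enable --now netns-placeholder.service
    systemctl status netns-placeholder.service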
Dec 02 09:24:35 np0005541914.localdomain sudo[152089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-biqjgfdpiwpfmneabqngdcittdjsvhbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667475.6177154-1321-108652587083773/AnsiballZ_file.py
Dec 02 09:24:35 np0005541914.localdomain sudo[152089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:36 np0005541914.localdomain python3.9[152091]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:24:36 np0005541914.localdomain sudo[152089]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:36 np0005541914.localdomain sudo[152181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-loayvubaaictdmglqajakaulxcmsnmuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667476.2761338-1345-42670944581187/AnsiballZ_stat.py
Dec 02 09:24:36 np0005541914.localdomain sudo[152181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63828 DF PROTO=TCP SPT=60950 DPT=9882 SEQ=2058759243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E19A20000000001030307) 
Dec 02 09:24:36 np0005541914.localdomain python3.9[152183]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:36 np0005541914.localdomain sudo[152181]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:37 np0005541914.localdomain sudo[152254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blqrgzimyhrczgtpzsamncihhdaqykns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667476.2761338-1345-42670944581187/AnsiballZ_copy.py
Dec 02 09:24:37 np0005541914.localdomain sudo[152254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:37 np0005541914.localdomain python3.9[152256]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667476.2761338-1345-42670944581187/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:24:37 np0005541914.localdomain sudo[152254]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:38 np0005541914.localdomain sudo[152346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ljuacsrofsuwnnamxlprtwflamajwynm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667477.896907-1396-183306172994870/AnsiballZ_file.py
Dec 02 09:24:38 np0005541914.localdomain sudo[152346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:38 np0005541914.localdomain python3.9[152348]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:24:38 np0005541914.localdomain sudo[152346]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:38 np0005541914.localdomain sudo[152438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iyukxwtyohmgqrutkooxabakziaidzjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667478.5716455-1420-82931941760283/AnsiballZ_stat.py
Dec 02 09:24:38 np0005541914.localdomain sudo[152438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:39 np0005541914.localdomain python3.9[152440]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:24:39 np0005541914.localdomain sudo[152438]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:39 np0005541914.localdomain sudo[152513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ozyfbkoptyuvcflrtyvdsoopusnpbrqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667478.5716455-1420-82931941760283/AnsiballZ_copy.py
Dec 02 09:24:39 np0005541914.localdomain sudo[152513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:39 np0005541914.localdomain python3.9[152515]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667478.5716455-1420-82931941760283/.source.json _original_basename=._y7o53nv follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:39 np0005541914.localdomain sudo[152513]: pam_unix(sudo:session): session closed for user root
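[annotation] The stat/copy pairs above give idempotent file deployment: the copy only rewrites the destination when its sha1 differs from the source checksum recorded in the task. Verifying by hand, using the checksum logged for the kolla config above:
    sha1sum /var/lib/kolla/config_files/ovn_controller.json
    # expected: 38f75f59f5c2ef6b5da12297bfd31cd1e97012ac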
Dec 02 09:24:40 np0005541914.localdomain sudo[152605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmcjlgqvnsbdraeuhfbouwgcaopbpvdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667479.7223043-1465-66186777356517/AnsiballZ_file.py
Dec 02 09:24:40 np0005541914.localdomain sudo[152605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32867 DF PROTO=TCP SPT=33294 DPT=9100 SEQ=2871554951 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E27220000000001030307) 
Dec 02 09:24:40 np0005541914.localdomain python3.9[152607]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:40 np0005541914.localdomain sudo[152605]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:40 np0005541914.localdomain sudo[152697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhvqpjihtoiqkynjdmlndlrhgluqojez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667480.466083-1489-86525636123939/AnsiballZ_stat.py
Dec 02 09:24:40 np0005541914.localdomain sudo[152697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:40 np0005541914.localdomain sudo[152697]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:41 np0005541914.localdomain sudo[152770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oazplglkyffmmwonwhjjsvghiztsddbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667480.466083-1489-86525636123939/AnsiballZ_copy.py
Dec 02 09:24:41 np0005541914.localdomain sudo[152770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:41 np0005541914.localdomain sudo[152770]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:42 np0005541914.localdomain sudo[152862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nkaikygqpigqtzcmypepvrhotzsoabmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667481.751021-1541-262169091275148/AnsiballZ_container_config_data.py
Dec 02 09:24:42 np0005541914.localdomain sudo[152862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:42 np0005541914.localdomain python3.9[152864]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 02 09:24:42 np0005541914.localdomain sudo[152862]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:42 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61249 DF PROTO=TCP SPT=58590 DPT=9100 SEQ=3499258261 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E31230000000001030307) 
Dec 02 09:24:42 np0005541914.localdomain sudo[152954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dsoooequkabjnosljnpcqhsatxeijdck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667482.5536115-1567-62078264224236/AnsiballZ_container_config_hash.py
Dec 02 09:24:42 np0005541914.localdomain sudo[152954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:43 np0005541914.localdomain python3.9[152956]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:24:43 np0005541914.localdomain sudo[152954]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:43 np0005541914.localdomain sudo[153046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-evqklpwxokpzhxcovwwgkvbsbxtdauwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667483.4280229-1594-242555658404099/AnsiballZ_podman_container_info.py
Dec 02 09:24:43 np0005541914.localdomain sudo[153046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:44 np0005541914.localdomain python3.9[153048]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 02 09:24:44 np0005541914.localdomain sudo[153046]: pam_unix(sudo:session): session closed for user root
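[annotation] podman_container_info invoked with no name gathers inspect data for every container on the host; roughly the same data can be pulled on the CLI with a pipeline like this (an illustration, not the exact command the module runs):
    # inspect all containers, running or stopped
    podman ps -aq | xargs --no-run-if-empty podman container inspect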
Dec 02 09:24:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63708 DF PROTO=TCP SPT=49276 DPT=9102 SEQ=4061710079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E3DFE0000000001030307) 
Dec 02 09:24:47 np0005541914.localdomain sudo[153166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xmyggklsedspfgooprrygxvzjsptuklq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764667487.4059763-1633-2841995213341/AnsiballZ_edpm_container_manage.py
Dec 02 09:24:47 np0005541914.localdomain sudo[153166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:48 np0005541914.localdomain python3[153168]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:24:48 np0005541914.localdomain python3[153168]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "3a37a52861b2e44ebd2a63ca2589a7c9d8e4119e5feace9d19c6312ed9b8421c",
                                                                    "Digest": "sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:ebeb25c4a4ce978c741d166518070e05f0fd81c143bdc680ee1d8f5985ec8d6c"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:38:47.246477714Z",
                                                                    "Config": {
                                                                         "User": "root",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 345722821,
                                                                    "VirtualSize": 345722821,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:ba9362d2aeb297e34b0679b2fc8168350c70a5b0ec414daf293bf2bc013e9088",
                                                                              "sha256:aae3b8a85314314b9db80a043fdf3f3b1d0b69927faca0303c73969a23dddd0f"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "root",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:22.759131427Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:13:25.258260855Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openvswitch openvswitch-ovn-common python3-netifaces python3-openvswitch tcpdump && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:13:28.025145079Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:38:13.535675197Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:38:47.244104142Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openvswitch-ovn-host && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:38:48.759416475Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 02 09:24:48 np0005541914.localdomain podman[153220]: 2025-12-02 09:24:48.588756756 +0000 UTC m=+0.095602407 container remove b34d6130ee3ae145ef9932d7e00ae2959cee4850e4f541a3b95c6fe20434fa5d (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.12 17.1_20251118.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ae875c168a6ec3400acf0a639b71f4bcc4adf272, architecture=x86_64, name=rhosp17/openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.12, release=1761123044, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, batch=17.1_20251118.1, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ae875c168a6ec3400acf0a639b71f4bcc4adf272, managed_by=tripleo_ansible, build-date=2025-11-18T23:34:05Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 09:24:48 np0005541914.localdomain python3[153168]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller
Dec 02 09:24:48 np0005541914.localdomain podman[153234]: 
Dec 02 09:24:48 np0005541914.localdomain podman[153234]: 2025-12-02 09:24:48.672379934 +0000 UTC m=+0.070011223 container create c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 02 09:24:48 np0005541914.localdomain podman[153234]: 2025-12-02 09:24:48.631878845 +0000 UTC m=+0.029510174 image pull  quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 02 09:24:48 np0005541914.localdomain python3[153168]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 02 09:24:48 np0005541914.localdomain sudo[153166]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63830 DF PROTO=TCP SPT=60950 DPT=9882 SEQ=2058759243 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E49220000000001030307) 
Dec 02 09:24:49 np0005541914.localdomain sudo[153360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lsemuvhoqwrwgtgqsxheqqdhalvtofmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667488.9911861-1657-45863186871379/AnsiballZ_stat.py
Dec 02 09:24:49 np0005541914.localdomain sudo[153360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:49 np0005541914.localdomain python3.9[153362]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:24:49 np0005541914.localdomain sudo[153360]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:49 np0005541914.localdomain sudo[153454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqetubaksrjgxeautijdmrmzrryfgnji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667489.6713078-1684-95419478994278/AnsiballZ_file.py
Dec 02 09:24:49 np0005541914.localdomain sudo[153454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:50 np0005541914.localdomain python3.9[153456]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:50 np0005541914.localdomain sudo[153454]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:50 np0005541914.localdomain sudo[153500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-akhwgvsitmpkxhewovbrahndhedxptdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667489.6713078-1684-95419478994278/AnsiballZ_stat.py
Dec 02 09:24:50 np0005541914.localdomain sudo[153500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:50 np0005541914.localdomain python3.9[153502]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:24:50 np0005541914.localdomain sudo[153500]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:51 np0005541914.localdomain sudo[153591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phohlfutlnbnkijfkzalwesopwoowcrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667490.6350732-1684-171162379760655/AnsiballZ_copy.py
Dec 02 09:24:51 np0005541914.localdomain sudo[153591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:51 np0005541914.localdomain python3.9[153593]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764667490.6350732-1684-171162379760655/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:24:51 np0005541914.localdomain sudo[153591]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:51 np0005541914.localdomain sudo[153637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oliufsdvhjfadciraauwfhtslqnvsvrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667490.6350732-1684-171162379760655/AnsiballZ_systemd.py
Dec 02 09:24:51 np0005541914.localdomain sudo[153637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:51 np0005541914.localdomain python3.9[153639]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:24:51 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:24:51 np0005541914.localdomain systemd-rc-local-generator[153664]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:24:51 np0005541914.localdomain systemd-sysv-generator[153669]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:24:51 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:24:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57145 DF PROTO=TCP SPT=35940 DPT=9101 SEQ=3024191987 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E55220000000001030307) 
Dec 02 09:24:52 np0005541914.localdomain sudo[153637]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:52 np0005541914.localdomain sudo[153719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjahwtcpyoteunnjabrzfrogqtgbubmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667490.6350732-1684-171162379760655/AnsiballZ_systemd.py
Dec 02 09:24:52 np0005541914.localdomain sudo[153719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:52 np0005541914.localdomain python3.9[153721]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:24:52 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:24:52 np0005541914.localdomain systemd-rc-local-generator[153747]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:24:52 np0005541914.localdomain systemd-sysv-generator[153753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:24:52 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: Starting ovn_controller container...
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:24:53 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc4b0753893eeb8df37e9b4452d463ad6765466abfa1ffd2a9924ebcac6d8353/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:24:53 np0005541914.localdomain podman[153763]: 2025-12-02 09:24:53.260574348 +0000 UTC m=+0.168243520 container init c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: + sudo -E kolla_set_configs
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:24:53 np0005541914.localdomain podman[153763]: 2025-12-02 09:24:53.305252585 +0000 UTC m=+0.212921707 container start c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:24:53 np0005541914.localdomain edpm-start-podman-container[153763]: ovn_controller
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: Created slice User Slice of UID 0.
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: Starting User Manager for UID 0...
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Dec 02 09:24:53 np0005541914.localdomain podman[153786]: 2025-12-02 09:24:53.422270656 +0000 UTC m=+0.113309909 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 02 09:24:53 np0005541914.localdomain podman[153786]: 2025-12-02 09:24:53.437697448 +0000 UTC m=+0.128736701 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Dec 02 09:24:53 np0005541914.localdomain podman[153786]: unhealthy
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Failed with result 'exit-code'.
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Queued start job for default target Main User Target.
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Created slice User Application Slice.
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Started Daily Cleanup of User's Temporary Directories.
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Reached target Paths.
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Reached target Timers.
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Starting D-Bus User Message Bus Socket...
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Starting Create User's Volatile Files and Directories...
Dec 02 09:24:53 np0005541914.localdomain edpm-start-podman-container[153762]: Creating additional drop-in dependency for "ovn_controller" (c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf)
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Finished Create User's Volatile Files and Directories.
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Listening on D-Bus User Message Bus Socket.
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Reached target Sockets.
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Reached target Basic System.
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Reached target Main User Target.
Dec 02 09:24:53 np0005541914.localdomain systemd[153805]: Startup finished in 136ms.
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: Started User Manager for UID 0.
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: Started Session c11 of User root.
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: INFO:__main__:Validating config file
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: INFO:__main__:Writing out command to execute
Dec 02 09:24:53 np0005541914.localdomain systemd-sysv-generator[153864]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:24:53 np0005541914.localdomain systemd-rc-local-generator[153860]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: Started ovn_controller container.
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: session-c11.scope: Deactivated successfully.
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: ++ cat /run_command
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: + ARGS=
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: + sudo kolla_copy_cacerts
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: Started Session c12 of User root.
Dec 02 09:24:53 np0005541914.localdomain sudo[153719]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:53 np0005541914.localdomain systemd[1]: session-c12.scope: Deactivated successfully.
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: + [[ ! -n '' ]]
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: + . kolla_extend_start
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\'''
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: + umask 0022
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00013|main|INFO|OVS feature set changed, force recompute.
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00014|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00015|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00017|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00018|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00021|main|INFO|OVS feature set changed, force recompute.
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 02 09:24:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:24:53Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 02 09:24:54 np0005541914.localdomain sudo[153973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tfyxmhlfkhyhsqttfoxtgclvwmnkmdqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667494.1087787-1768-277474954369885/AnsiballZ_command.py
Dec 02 09:24:54 np0005541914.localdomain sudo[153973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:54 np0005541914.localdomain python3.9[153975]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:24:54 np0005541914.localdomain ovs-vsctl[153976]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 02 09:24:54 np0005541914.localdomain sudo[153973]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:55 np0005541914.localdomain sudo[154066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rulrgovzbhbccbzykmraobcdmpixycrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667494.8009155-1793-42975472523950/AnsiballZ_command.py
Dec 02 09:24:55 np0005541914.localdomain sudo[154066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:55 np0005541914.localdomain python3.9[154068]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:24:55 np0005541914.localdomain ovs-vsctl[154070]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 02 09:24:55 np0005541914.localdomain sudo[154066]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:56 np0005541914.localdomain sudo[154161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zdxkvlfyqxcmtyikkatrfqenpuoyigwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667495.8062198-1834-270595941142286/AnsiballZ_command.py
Dec 02 09:24:56 np0005541914.localdomain sudo[154161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:24:57 np0005541914.localdomain python3.9[154163]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:24:57 np0005541914.localdomain ovs-vsctl[154164]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 02 09:24:57 np0005541914.localdomain sudo[154161]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12318 DF PROTO=TCP SPT=53152 DPT=9101 SEQ=1380458474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E69A20000000001030307) 
Dec 02 09:24:57 np0005541914.localdomain sshd[147273]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:24:57 np0005541914.localdomain systemd[1]: session-49.scope: Deactivated successfully.
Dec 02 09:24:57 np0005541914.localdomain systemd[1]: session-49.scope: Consumed 41.334s CPU time.
Dec 02 09:24:57 np0005541914.localdomain systemd-logind[760]: Session 49 logged out. Waiting for processes to exit.
Dec 02 09:24:57 np0005541914.localdomain systemd-logind[760]: Removed session 49.
Dec 02 09:24:58 np0005541914.localdomain sudo[154180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:24:58 np0005541914.localdomain sudo[154180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:24:58 np0005541914.localdomain sudo[154180]: pam_unix(sudo:session): session closed for user root
Dec 02 09:24:58 np0005541914.localdomain sudo[154195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:24:58 np0005541914.localdomain sudo[154195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:24:59 np0005541914.localdomain sudo[154195]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:00 np0005541914.localdomain sudo[154243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:25:00 np0005541914.localdomain sudo[154243]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:25:00 np0005541914.localdomain sudo[154243]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63712 DF PROTO=TCP SPT=49276 DPT=9102 SEQ=4061710079 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E7B220000000001030307) 
Dec 02 09:25:03 np0005541914.localdomain sshd[154258]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:25:03 np0005541914.localdomain sshd[154258]: Accepted publickey for zuul from 192.168.122.30 port 49964 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:25:03 np0005541914.localdomain systemd-logind[760]: New session 51 of user zuul.
Dec 02 09:25:03 np0005541914.localdomain systemd[1]: Started Session 51 of User zuul.
Dec 02 09:25:03 np0005541914.localdomain sshd[154258]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:25:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39635 DF PROTO=TCP SPT=36732 DPT=9882 SEQ=332566116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E82D10000000001030307) 
Dec 02 09:25:03 np0005541914.localdomain systemd[1]: Stopping User Manager for UID 0...
Dec 02 09:25:03 np0005541914.localdomain systemd[153805]: Activating special unit Exit the Session...
Dec 02 09:25:03 np0005541914.localdomain systemd[153805]: Stopped target Main User Target.
Dec 02 09:25:03 np0005541914.localdomain systemd[153805]: Stopped target Basic System.
Dec 02 09:25:03 np0005541914.localdomain systemd[153805]: Stopped target Paths.
Dec 02 09:25:03 np0005541914.localdomain systemd[153805]: Stopped target Sockets.
Dec 02 09:25:03 np0005541914.localdomain systemd[153805]: Stopped target Timers.
Dec 02 09:25:03 np0005541914.localdomain systemd[153805]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 02 09:25:03 np0005541914.localdomain systemd[153805]: Closed D-Bus User Message Bus Socket.
Dec 02 09:25:03 np0005541914.localdomain systemd[153805]: Stopped Create User's Volatile Files and Directories.
Dec 02 09:25:03 np0005541914.localdomain systemd[153805]: Removed slice User Application Slice.
Dec 02 09:25:03 np0005541914.localdomain systemd[153805]: Reached target Shutdown.
Dec 02 09:25:03 np0005541914.localdomain systemd[153805]: Finished Exit the Session.
Dec 02 09:25:03 np0005541914.localdomain systemd[153805]: Reached target Exit the Session.
Dec 02 09:25:03 np0005541914.localdomain systemd[1]: user@0.service: Deactivated successfully.
Dec 02 09:25:03 np0005541914.localdomain systemd[1]: Stopped User Manager for UID 0.
Dec 02 09:25:04 np0005541914.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 02 09:25:04 np0005541914.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 02 09:25:04 np0005541914.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 02 09:25:04 np0005541914.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 02 09:25:04 np0005541914.localdomain systemd[1]: Removed slice User Slice of UID 0.
Dec 02 09:25:04 np0005541914.localdomain python3.9[154353]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:25:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39636 DF PROTO=TCP SPT=36732 DPT=9882 SEQ=332566116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E86E20000000001030307) 
Dec 02 09:25:05 np0005541914.localdomain sudo[154447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhzqglwimiglzbxskkgdaxnyfjoxgggp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667505.1526344-64-49006861556755/AnsiballZ_file.py
Dec 02 09:25:05 np0005541914.localdomain sudo[154447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:05 np0005541914.localdomain python3.9[154449]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:05 np0005541914.localdomain sudo[154447]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:06 np0005541914.localdomain sudo[154539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ojgbvffonjpobzhcoymddwcpcbaqupch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667505.9040904-64-103090940126374/AnsiballZ_file.py
Dec 02 09:25:06 np0005541914.localdomain sudo[154539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:06 np0005541914.localdomain python3.9[154541]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:06 np0005541914.localdomain sudo[154539]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39637 DF PROTO=TCP SPT=36732 DPT=9882 SEQ=332566116 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E8EE30000000001030307) 
Dec 02 09:25:06 np0005541914.localdomain sudo[154631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nzgbfytjgsqfisaelofeuiolsnonsgbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667506.5143704-64-87687235326705/AnsiballZ_file.py
Dec 02 09:25:06 np0005541914.localdomain sudo[154631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:06 np0005541914.localdomain python3.9[154633]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:07 np0005541914.localdomain sudo[154631]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:07 np0005541914.localdomain sudo[154723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttqohwycnadevbnkyipbjcrwfivfjdrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667507.3798573-64-200347079756087/AnsiballZ_file.py
Dec 02 09:25:07 np0005541914.localdomain sudo[154723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:07 np0005541914.localdomain python3.9[154725]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:07 np0005541914.localdomain sudo[154723]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:08 np0005541914.localdomain sudo[154815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdmdieqmkislsgceyacwamvuaryxmrzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667508.038239-64-101755367030291/AnsiballZ_file.py
Dec 02 09:25:08 np0005541914.localdomain sudo[154815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:08 np0005541914.localdomain python3.9[154817]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:08 np0005541914.localdomain sudo[154815]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4197 DF PROTO=TCP SPT=57484 DPT=9100 SEQ=2546861134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53E9D220000000001030307) 
Dec 02 09:25:11 np0005541914.localdomain python3.9[154907]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:25:12 np0005541914.localdomain sudo[154998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jpeceyqbyqcotutludayvzfpubhlwymb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667511.7262714-197-223447882946065/AnsiballZ_seboolean.py
Dec 02 09:25:12 np0005541914.localdomain sudo[154998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:12 np0005541914.localdomain python3.9[155000]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 02 09:25:12 np0005541914.localdomain sudo[154998]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44713 DF PROTO=TCP SPT=55490 DPT=9105 SEQ=206652984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EA7E20000000001030307) 
Dec 02 09:25:13 np0005541914.localdomain python3.9[155090]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:13 np0005541914.localdomain python3.9[155163]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667512.6817198-220-226362207237117/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:14 np0005541914.localdomain python3.9[155253]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:15 np0005541914.localdomain python3.9[155326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667514.0171661-265-126838285431775/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:15 np0005541914.localdomain sudo[155416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-psgaldjmbfjvibpredcfetcianzxdgql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667515.4801846-316-254057660978294/AnsiballZ_setup.py
Dec 02 09:25:15 np0005541914.localdomain sudo[155416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:15 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16639 DF PROTO=TCP SPT=51906 DPT=9105 SEQ=863709002 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EB3220000000001030307) 
Dec 02 09:25:16 np0005541914.localdomain python3.9[155418]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:25:16 np0005541914.localdomain sudo[155416]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:16 np0005541914.localdomain sudo[155470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kiverdpimnfajsbbvffwwbmrfutblblw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667515.4801846-316-254057660978294/AnsiballZ_dnf.py
Dec 02 09:25:16 np0005541914.localdomain sudo[155470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:16 np0005541914.localdomain python3.9[155472]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:25:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14329 DF PROTO=TCP SPT=46910 DPT=9102 SEQ=955170998 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EBF220000000001030307) 
Dec 02 09:25:20 np0005541914.localdomain sudo[155470]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:21 np0005541914.localdomain sudo[155564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zaznnzuprkyzautplkdpozjgamnvmkxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667520.4683945-352-140081727154096/AnsiballZ_systemd.py
Dec 02 09:25:21 np0005541914.localdomain sudo[155564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:21 np0005541914.localdomain python3.9[155566]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 02 09:25:21 np0005541914.localdomain sudo[155564]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12320 DF PROTO=TCP SPT=53152 DPT=9101 SEQ=1380458474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EC9220000000001030307) 
Dec 02 09:25:23 np0005541914.localdomain python3.9[155659]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:23 np0005541914.localdomain python3.9[155730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667522.6252763-376-172714620551088/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:25:24 np0005541914.localdomain python3.9[155820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:24 np0005541914.localdomain systemd[1]: tmp-crun.e2iNSI.mount: Deactivated successfully.
Dec 02 09:25:24 np0005541914.localdomain podman[155821]: 2025-12-02 09:25:24.108431026 +0000 UTC m=+0.106115666 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 02 09:25:24 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:25:24Z|00023|memory|INFO|13080 kB peak resident set size after 30.2 seconds
Dec 02 09:25:24 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:25:24Z|00024|memory|INFO|idl-cells-OVN_Southbound:4028 idl-cells-Open_vSwitch:813 ofctrl_desired_flow_usage-KB:9 ofctrl_installed_flow_usage-KB:7 ofctrl_sb_flow_ref_usage-KB:3
Dec 02 09:25:24 np0005541914.localdomain podman[155821]: 2025-12-02 09:25:24.187878314 +0000 UTC m=+0.185562904 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 09:25:24 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:25:24 np0005541914.localdomain python3.9[155915]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667523.664777-376-230040373073843/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:26 np0005541914.localdomain python3.9[156005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:26 np0005541914.localdomain python3.9[156076]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667525.4554493-509-43200407583977/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:27 np0005541914.localdomain python3.9[156166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57255 DF PROTO=TCP SPT=49426 DPT=9101 SEQ=3183603933 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EDEA20000000001030307) 
Dec 02 09:25:27 np0005541914.localdomain python3.9[156237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667526.6062455-509-151344027655842/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:29 np0005541914.localdomain python3.9[156327]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:25:30 np0005541914.localdomain sudo[156419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ugosdnvyadvnjdrmbfwnszehmfgsijuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667529.8018239-622-243939588895613/AnsiballZ_file.py
Dec 02 09:25:30 np0005541914.localdomain sudo[156419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:30 np0005541914.localdomain python3.9[156421]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:30 np0005541914.localdomain sudo[156419]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:30 np0005541914.localdomain sudo[156511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqymcicbdulukpavjhunkmrctknaydku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667530.4686415-646-154024678710038/AnsiballZ_stat.py
Dec 02 09:25:30 np0005541914.localdomain sudo[156511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:30 np0005541914.localdomain python3.9[156513]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:30 np0005541914.localdomain sudo[156511]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:31 np0005541914.localdomain sudo[156559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqhppmcykzfsjpgnmofqkttlohhmtkrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667530.4686415-646-154024678710038/AnsiballZ_file.py
Dec 02 09:25:31 np0005541914.localdomain sudo[156559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14331 DF PROTO=TCP SPT=46910 DPT=9102 SEQ=955170998 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EEF220000000001030307) 
Dec 02 09:25:31 np0005541914.localdomain python3.9[156561]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:31 np0005541914.localdomain sudo[156559]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:32 np0005541914.localdomain sudo[156651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-geklkbacqsezxufqvsfogdeghtlyhfaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667531.7945848-646-183654751135434/AnsiballZ_stat.py
Dec 02 09:25:32 np0005541914.localdomain sudo[156651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:32 np0005541914.localdomain python3.9[156653]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:32 np0005541914.localdomain sudo[156651]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:32 np0005541914.localdomain sudo[156699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bbetofbvttdtpxauqftjtlppynwkifwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667531.7945848-646-183654751135434/AnsiballZ_file.py
Dec 02 09:25:32 np0005541914.localdomain sudo[156699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:32 np0005541914.localdomain python3.9[156701]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:32 np0005541914.localdomain sudo[156699]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:33 np0005541914.localdomain sudo[156791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-snvqwxrdwxtczbglghebwesvoibtacmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667532.8136666-715-187135731423914/AnsiballZ_file.py
Dec 02 09:25:33 np0005541914.localdomain sudo[156791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:33 np0005541914.localdomain python3.9[156793]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:25:33 np0005541914.localdomain sudo[156791]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53695 DF PROTO=TCP SPT=48526 DPT=9882 SEQ=590766144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EF8000000000001030307) 
Dec 02 09:25:33 np0005541914.localdomain sudo[156883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nryhglwclfaryqukdwsizniijoobmstd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667533.526715-740-70887749652448/AnsiballZ_stat.py
Dec 02 09:25:33 np0005541914.localdomain sudo[156883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:33 np0005541914.localdomain python3.9[156885]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:34 np0005541914.localdomain sudo[156883]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:34 np0005541914.localdomain sudo[156931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bldvmkbikcipkwjdkelmpqunipgozysl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667533.526715-740-70887749652448/AnsiballZ_file.py
Dec 02 09:25:34 np0005541914.localdomain sudo[156931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:34 np0005541914.localdomain python3.9[156933]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:25:34 np0005541914.localdomain sudo[156931]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53696 DF PROTO=TCP SPT=48526 DPT=9882 SEQ=590766144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53EFC220000000001030307) 
Dec 02 09:25:34 np0005541914.localdomain sudo[157023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tfmsowopxicnsvtmtipigznresqxutyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667534.6298223-775-62749688769455/AnsiballZ_stat.py
Dec 02 09:25:34 np0005541914.localdomain sudo[157023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:35 np0005541914.localdomain python3.9[157025]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:35 np0005541914.localdomain sudo[157023]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:35 np0005541914.localdomain sudo[157071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-njfizjocagmfjlzacdhzgbjmycoiyobm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667534.6298223-775-62749688769455/AnsiballZ_file.py
Dec 02 09:25:35 np0005541914.localdomain sudo[157071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:35 np0005541914.localdomain python3.9[157073]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:25:35 np0005541914.localdomain sudo[157071]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:35 np0005541914.localdomain sudo[157163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kxsiyhbqudrczkpikzirwhijyrbkjsek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667535.6976128-812-177210028991387/AnsiballZ_systemd.py
Dec 02 09:25:35 np0005541914.localdomain sudo[157163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:36 np0005541914.localdomain python3.9[157165]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:25:36 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:25:36 np0005541914.localdomain systemd-rc-local-generator[157188]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:25:36 np0005541914.localdomain systemd-sysv-generator[157193]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:25:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:25:36 np0005541914.localdomain sudo[157163]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53697 DF PROTO=TCP SPT=48526 DPT=9882 SEQ=590766144 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F04220000000001030307) 
Dec 02 09:25:37 np0005541914.localdomain sudo[157293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gnxdilddmooluuramauusazdzkqylqek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667536.7880025-835-214616648498184/AnsiballZ_stat.py
Dec 02 09:25:37 np0005541914.localdomain sudo[157293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:37 np0005541914.localdomain python3.9[157295]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:37 np0005541914.localdomain sudo[157293]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:38 np0005541914.localdomain sudo[157341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ablxjjneqmvmpwjfongljxcqewiseufm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667536.7880025-835-214616648498184/AnsiballZ_file.py
Dec 02 09:25:38 np0005541914.localdomain sudo[157341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:38 np0005541914.localdomain python3.9[157343]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:25:38 np0005541914.localdomain sudo[157341]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:38 np0005541914.localdomain sudo[157433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-itkdlcxwkeupwwjpclsnaepqweimlxtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667538.4624124-872-264330861291969/AnsiballZ_stat.py
Dec 02 09:25:38 np0005541914.localdomain sudo[157433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:38 np0005541914.localdomain python3.9[157435]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:38 np0005541914.localdomain sudo[157433]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:39 np0005541914.localdomain sudo[157481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qgoydzmxwuebiesgrxzjvllklyuieprb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667538.4624124-872-264330861291969/AnsiballZ_file.py
Dec 02 09:25:39 np0005541914.localdomain sudo[157481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:39 np0005541914.localdomain python3.9[157483]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:25:39 np0005541914.localdomain sudo[157481]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11093 DF PROTO=TCP SPT=50322 DPT=9100 SEQ=2776366170 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F11220000000001030307) 
Dec 02 09:25:40 np0005541914.localdomain sudo[157573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iehcnwynatsmgryvwurtyombkniqunns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667539.9384851-907-264410253318355/AnsiballZ_systemd.py
Dec 02 09:25:40 np0005541914.localdomain sudo[157573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:40 np0005541914.localdomain python3.9[157575]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:25:40 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:25:40 np0005541914.localdomain systemd-sysv-generator[157601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:25:40 np0005541914.localdomain systemd-rc-local-generator[157598]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:25:40 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:25:40 np0005541914.localdomain systemd[1]: Starting Create netns directory...
Dec 02 09:25:40 np0005541914.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 02 09:25:40 np0005541914.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 02 09:25:40 np0005541914.localdomain systemd[1]: Finished Create netns directory.
Dec 02 09:25:40 np0005541914.localdomain sudo[157573]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:42 np0005541914.localdomain sudo[157706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mvsgrspzrmeypzkjwfloviatyanlyhke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667542.1759984-937-24834448480887/AnsiballZ_file.py
Dec 02 09:25:42 np0005541914.localdomain sudo[157706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:42 np0005541914.localdomain python3.9[157708]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:42 np0005541914.localdomain sudo[157706]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8137 DF PROTO=TCP SPT=37242 DPT=9105 SEQ=863077683 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F1D220000000001030307) 
Dec 02 09:25:43 np0005541914.localdomain sudo[157798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ygrkuaibjdfmtjyifxazteqrefegsrob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667542.865387-961-89189219767186/AnsiballZ_stat.py
Dec 02 09:25:43 np0005541914.localdomain sudo[157798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:43 np0005541914.localdomain python3.9[157800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:43 np0005541914.localdomain sudo[157798]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:43 np0005541914.localdomain sudo[157871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqmezyodyopwwsvngpmmitfhwiursddy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667542.865387-961-89189219767186/AnsiballZ_copy.py
Dec 02 09:25:43 np0005541914.localdomain sudo[157871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:43 np0005541914.localdomain python3.9[157873]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667542.865387-961-89189219767186/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:43 np0005541914.localdomain sudo[157871]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:44 np0005541914.localdomain sudo[157963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-olrdmmjzmdzegwdvpmusetmvzidajcrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667544.261872-1012-133877067709951/AnsiballZ_file.py
Dec 02 09:25:44 np0005541914.localdomain sudo[157963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:44 np0005541914.localdomain python3.9[157965]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:25:44 np0005541914.localdomain sudo[157963]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:45 np0005541914.localdomain sudo[158055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-foggupiacvhgmyhbahtcpztgssotrjmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667544.9638226-1036-8980708798820/AnsiballZ_stat.py
Dec 02 09:25:45 np0005541914.localdomain sudo[158055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:45 np0005541914.localdomain python3.9[158057]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:25:45 np0005541914.localdomain sudo[158055]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:45 np0005541914.localdomain sudo[158130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-teumhzlwqixzcomggxcgbruuznruqyjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667544.9638226-1036-8980708798820/AnsiballZ_copy.py
Dec 02 09:25:45 np0005541914.localdomain sudo[158130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11284 DF PROTO=TCP SPT=58472 DPT=9102 SEQ=3007936010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F285E0000000001030307) 
Dec 02 09:25:46 np0005541914.localdomain python3.9[158132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667544.9638226-1036-8980708798820/.source.json _original_basename=.tvrndj_8 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:25:46 np0005541914.localdomain sudo[158130]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:46 np0005541914.localdomain sudo[158222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iirtmkbsoyveffruxhxxfokvfuufzkyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667546.2284465-1081-240613212639980/AnsiballZ_file.py
Dec 02 09:25:46 np0005541914.localdomain sudo[158222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:46 np0005541914.localdomain python3.9[158224]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:25:46 np0005541914.localdomain sudo[158222]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:47 np0005541914.localdomain sudo[158314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bznbtfllpyaxlszxzkhylkdfvibzippv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667546.9407988-1105-165551492754758/AnsiballZ_stat.py
Dec 02 09:25:47 np0005541914.localdomain sudo[158314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:47 np0005541914.localdomain sudo[158314]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:47 np0005541914.localdomain sudo[158387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jzxtkjtvvwlyptnwyinupceqapuhhejj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667546.9407988-1105-165551492754758/AnsiballZ_copy.py
Dec 02 09:25:47 np0005541914.localdomain sudo[158387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:47 np0005541914.localdomain sudo[158387]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11286 DF PROTO=TCP SPT=58472 DPT=9102 SEQ=3007936010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F34620000000001030307) 
Dec 02 09:25:49 np0005541914.localdomain sudo[158479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-apdncqlhtzsaerbrjeqrwjqsyyykuvxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667548.9100597-1156-126407004937861/AnsiballZ_container_config_data.py
Dec 02 09:25:49 np0005541914.localdomain sudo[158479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:49 np0005541914.localdomain python3.9[158481]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 02 09:25:49 np0005541914.localdomain sudo[158479]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:50 np0005541914.localdomain sudo[158571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-epgtaboyenvidercifbcncpkxfpamijt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667549.715357-1183-189055264474512/AnsiballZ_container_config_hash.py
Dec 02 09:25:50 np0005541914.localdomain sudo[158571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:50 np0005541914.localdomain python3.9[158573]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:25:50 np0005541914.localdomain sudo[158571]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:51 np0005541914.localdomain sudo[158663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-etwprwmlcgsdgsqcmkoyuufjjklfhtol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667551.0584974-1210-2330325273419/AnsiballZ_podman_container_info.py
Dec 02 09:25:51 np0005541914.localdomain sudo[158663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:51 np0005541914.localdomain python3.9[158665]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 02 09:25:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57257 DF PROTO=TCP SPT=49426 DPT=9101 SEQ=3183603933 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F3F220000000001030307) 
Dec 02 09:25:52 np0005541914.localdomain sudo[158663]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:25:55 np0005541914.localdomain podman[158708]: 2025-12-02 09:25:55.096188954 +0000 UTC m=+0.094085137 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 02 09:25:55 np0005541914.localdomain podman[158708]: 2025-12-02 09:25:55.17388723 +0000 UTC m=+0.171783423 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 02 09:25:55 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
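The three entries above show one complete health-probe cycle: systemd starts a transient "podman healthcheck run" unit, podman executes the configured test (/openstack/healthcheck) inside ovn_controller and records health_status=healthy, and the transient unit deactivates. The same probe and its last recorded state can be checked by hand; a minimal sketch, assuming the container is still present and running:

    # run the configured healthcheck once
    podman healthcheck run ovn_controller
    # show the most recently recorded health state
    podman inspect --format '{{.State.Health.Status}}' ovn_controller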
Dec 02 09:25:55 np0005541914.localdomain sudo[158806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmuejihnreckqabmedxdicprlhhexvyr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764667555.0516577-1249-170281681213458/AnsiballZ_edpm_container_manage.py
Dec 02 09:25:55 np0005541914.localdomain sudo[158806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:55 np0005541914.localdomain python3[158808]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:25:56 np0005541914.localdomain python3[158808]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "014dc726c85414b29f2dde7b5d875685d08784761c0f0ffa8630d1583a877bf9",
                                                                    "Digest": "sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:db3e3d71618c3539a2853a20f7684f016b67370157990932291b00a48fa16bd3"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:29:20.327314945Z",
                                                                    "Config": {
                                                                         "User": "neutron",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 784141054,
                                                                    "VirtualSize": 784141054,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53/diff:/var/lib/containers/storage/overlay/2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:75abaaa40a93c0e2bba524b6f8d4eb5f1c4c9a33db70c892c7582ec5b0827e5e",
                                                                              "sha256:01f43f620d1ea2a9e584abe0cc14c336bedcf55765127c000d743f536dd36f25",
                                                                              "sha256:0bf5bd378602f28be423f5e84abddff3b103396fae3c167031b6e3fcfcf6f120"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "neutron",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:15:50.18897737Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:15:50.762138914Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage neutron",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:13.720608935Z",
                                                                              "created_by": "/bin/sh -c dnf -y install iputils net-tools openstack-neutron openstack-neutron-rpc-server openstack-neutron-ml2 openvswitch python3-networking-baremetal python3-openvswitch python3-unbound && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:27.636630318Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/neutron-base/neutron_sudoers /etc/sudoers.d/neutron_sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:40.546186661Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers.d/neutron_sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:52.875291445Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:27:22.608862134Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:28:35.764559413Z",
                                                                              "created_by": "/bin/sh -c dnf -y install libseccomp podman && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:28:40.983506098Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:28:44.803537768Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-agent-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:20.324920691Z",
                                                                              "created_by": "/bin/sh -c dnf -y install python3-networking-ovn-metadata-agent && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:20.324983383Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER neutron",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:24.215761584Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
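The JSON block above is the image-inspect data gathered by edpm_container_manage for the target image (Id, Digest, RepoTags, Config, layer history and so on). An equivalent query can be made directly on the host; a minimal sketch, assuming podman is installed and the image has already been pulled:

    # full inspect data, same structure as the debug dump above
    podman image inspect quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
    # digest only
    podman image inspect --format '{{.Digest}}' quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified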
Dec 02 09:25:56 np0005541914.localdomain podman[158858]: 2025-12-02 09:25:56.115656625 +0000 UTC m=+0.080355267 container remove 6bfb33b1a38349143dc3dc8a172e429cb5445c9b726955e335640e6ad651fc9b (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2025-11-19T00:14:25Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, name=rhosp17/openstack-neutron-metadata-agent-ovn, vcs-ref=89d55f10f82ff50b4f24de36868d7c635c279c7c, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '6b6de39672ef4d892f2e8f81f38c430b'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=89d55f10f82ff50b4f24de36868d7c635c279c7c, url=https://www.redhat.com, version=17.1.12, release=1761123044, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, batch=17.1_20251118.1)
Dec 02 09:25:56 np0005541914.localdomain python3[158808]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent
Dec 02 09:25:56 np0005541914.localdomain podman[158872]: 
Dec 02 09:25:56 np0005541914.localdomain podman[158872]: 2025-12-02 09:25:56.214597661 +0000 UTC m=+0.079888824 container create 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 09:25:56 np0005541914.localdomain podman[158872]: 2025-12-02 09:25:56.179072474 +0000 UTC m=+0.044363627 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 02 09:25:56 np0005541914.localdomain python3[158808]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 02 09:25:56 np0005541914.localdomain sudo[158806]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:56 np0005541914.localdomain sudo[158998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-biinpfcjmautphbmqctfdmlrdrgwjsbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667556.7526367-1273-1968413669193/AnsiballZ_stat.py
Dec 02 09:25:56 np0005541914.localdomain sudo[158998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56727 DF PROTO=TCP SPT=60572 DPT=9101 SEQ=3092294769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F53E30000000001030307) 
Dec 02 09:25:57 np0005541914.localdomain python3.9[159000]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:25:57 np0005541914.localdomain sudo[158998]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:57 np0005541914.localdomain sudo[159092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfeprvfscpfhdeevlwhpywlyvogrlcip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667557.4992158-1300-116945954834055/AnsiballZ_file.py
Dec 02 09:25:57 np0005541914.localdomain sudo[159092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:57 np0005541914.localdomain python3.9[159094]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:25:57 np0005541914.localdomain sudo[159092]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:58 np0005541914.localdomain sudo[159138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-slaqjcmqgpshxjuhcxynottknacxyelm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667557.4992158-1300-116945954834055/AnsiballZ_stat.py
Dec 02 09:25:58 np0005541914.localdomain sudo[159138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:58 np0005541914.localdomain python3.9[159140]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:25:58 np0005541914.localdomain sudo[159138]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:58 np0005541914.localdomain sudo[159229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lambeeplhirlygpcfksqnpsfhylsohjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667558.4321272-1300-94824610146451/AnsiballZ_copy.py
Dec 02 09:25:58 np0005541914.localdomain sudo[159229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:59 np0005541914.localdomain python3.9[159231]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764667558.4321272-1300-94824610146451/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:25:59 np0005541914.localdomain sudo[159229]: pam_unix(sudo:session): session closed for user root
Dec 02 09:25:59 np0005541914.localdomain sudo[159275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wgguyjigsulesmordcxzlvwlqeitxlgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667558.4321272-1300-94824610146451/AnsiballZ_systemd.py
Dec 02 09:25:59 np0005541914.localdomain sudo[159275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:25:59 np0005541914.localdomain python3.9[159277]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:25:59 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:25:59 np0005541914.localdomain systemd-rc-local-generator[159300]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:25:59 np0005541914.localdomain systemd-sysv-generator[159303]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:26:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:26:00 np0005541914.localdomain sudo[159275]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:00 np0005541914.localdomain sudo[159327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:26:00 np0005541914.localdomain sudo[159327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:26:00 np0005541914.localdomain sudo[159327]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:00 np0005541914.localdomain sudo[159344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:26:00 np0005541914.localdomain sudo[159344]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:26:00 np0005541914.localdomain sudo[159387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yrjwqwaqcfzjxfqiiujoivcoidpytlno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667558.4321272-1300-94824610146451/AnsiballZ_systemd.py
Dec 02 09:26:00 np0005541914.localdomain sudo[159387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:00 np0005541914.localdomain python3.9[159389]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:26:00 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:26:00 np0005541914.localdomain systemd-sysv-generator[159434]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:26:00 np0005541914.localdomain systemd-rc-local-generator[159430]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:26:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:26:01 np0005541914.localdomain systemd[1]: Starting ovn_metadata_agent container...
Dec 02 09:26:01 np0005541914.localdomain sudo[159344]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:01 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:26:01 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4da2e578b04930fec74c4af4c53467c7a43895f3bd10cda05f9c1b3856d5818/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 02 09:26:01 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4da2e578b04930fec74c4af4c53467c7a43895f3bd10cda05f9c1b3856d5818/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 09:26:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:26:01 np0005541914.localdomain podman[159464]: 2025-12-02 09:26:01.302844446 +0000 UTC m=+0.151940076 container init 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 09:26:01 np0005541914.localdomain systemd[1]: tmp-crun.89U60b.mount: Deactivated successfully.
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: + sudo -E kolla_set_configs
Dec 02 09:26:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:26:01 np0005541914.localdomain podman[159464]: 2025-12-02 09:26:01.345648955 +0000 UTC m=+0.194744585 container start 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 02 09:26:01 np0005541914.localdomain edpm-start-podman-container[159464]: ovn_metadata_agent
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Validating config file
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Copying service configuration files
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Writing out command to execute
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: ++ cat /run_command
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: + CMD=neutron-ovn-metadata-agent
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: + ARGS=
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: + sudo kolla_copy_cacerts
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: + [[ ! -n '' ]]
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: + . kolla_extend_start
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: Running command: 'neutron-ovn-metadata-agent'
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: + umask 0022
Dec 02 09:26:01 np0005541914.localdomain ovn_metadata_agent[159477]: + exec neutron-ovn-metadata-agent
Dec 02 09:26:01 np0005541914.localdomain edpm-start-podman-container[159463]: Creating additional drop-in dependency for "ovn_metadata_agent" (225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1)
Dec 02 09:26:01 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:26:01 np0005541914.localdomain podman[159485]: 2025-12-02 09:26:01.5266558 +0000 UTC m=+0.171876707 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:26:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11288 DF PROTO=TCP SPT=58472 DPT=9102 SEQ=3007936010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F65220000000001030307) 
Dec 02 09:26:01 np0005541914.localdomain podman[159485]: 2025-12-02 09:26:01.567995154 +0000 UTC m=+0.213216091 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 09:26:01 np0005541914.localdomain systemd-rc-local-generator[159545]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:26:01 np0005541914.localdomain systemd-sysv-generator[159553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:26:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:26:01 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:26:01 np0005541914.localdomain systemd[1]: Started ovn_metadata_agent container.
Dec 02 09:26:01 np0005541914.localdomain sudo[159387]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:02 np0005541914.localdomain sudo[159582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:26:02 np0005541914.localdomain sudo[159582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:26:02 np0005541914.localdomain sudo[159582]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.059 159483 INFO neutron.common.config [-] Logging enabled!
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.059 159483 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.059 159483 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.059 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.059 159483 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.060 159483 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.061 159483 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.062 159483 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.063 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.064 159483 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.065 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.066 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.067 159483 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.068 159483 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.069 159483 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.070 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.071 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.072 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.073 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.074 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.075 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.076 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.077 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.078 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.079 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.080 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.081 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.082 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.083 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.084 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.085 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.086 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.087 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.088 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.089 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.090 159483 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.090 159483 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.143 159483 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.144 159483 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.144 159483 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.144 159483 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.145 159483 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.172 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 515e0717-8baa-40e6-ac30-5fb148626504 (UUID: 515e0717-8baa-40e6-ac30-5fb148626504) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.195 159483 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.195 159483 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.195 159483 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.196 159483 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.197 159483 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.201 159483 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.213 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '515e0717-8baa-40e6-ac30-5fb148626504'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], external_ids={'neutron:ovn-metadata-id': 'b2bdd40e-63a2-5a14-9aa4-7df5929ce52d', 'neutron:ovn-metadata-sb-cfg': '1'}, name=515e0717-8baa-40e6-ac30-5fb148626504, nb_cfg_timestamp=1764667502568, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.215 159483 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fe0b81d1b50>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.216 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.217 159483 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.217 159483 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.218 159483 INFO oslo_service.service [-] Starting 1 workers
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.223 159483 DEBUG oslo_service.service [-] Started child 159597 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.227 159597 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-387997'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.230 159483 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpanuh74ka/privsep.sock']
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.257 159597 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.258 159597 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.259 159597 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.263 159597 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.265 159597 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.284 159597 INFO eventlet.wsgi.server [-] (159597) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec 02 09:26:03 np0005541914.localdomain sshd[154258]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:26:03 np0005541914.localdomain systemd[1]: session-51.scope: Deactivated successfully.
Dec 02 09:26:03 np0005541914.localdomain systemd[1]: session-51.scope: Consumed 32.721s CPU time.
Dec 02 09:26:03 np0005541914.localdomain systemd-logind[760]: Session 51 logged out. Waiting for processes to exit.
Dec 02 09:26:03 np0005541914.localdomain systemd-logind[760]: Removed session 51.
Dec 02 09:26:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62903 DF PROTO=TCP SPT=58678 DPT=9882 SEQ=1205215035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F6D310000000001030307) 
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.915 159483 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.916 159483 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpanuh74ka/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.794 159602 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.797 159602 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.799 159602 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.799 159602 INFO oslo.privsep.daemon [-] privsep daemon running as pid 159602
Dec 02 09:26:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:03.920 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[54b9440d-c494-4358-94c9-34d6f72f2dae]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.400 159602 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.400 159602 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.400 159602 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:26:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62904 DF PROTO=TCP SPT=58678 DPT=9882 SEQ=1205215035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F71230000000001030307) 
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.852 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[e556d8a9-fc26-461a-96b1-2d591fea2d1c]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.855 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, column=external_ids, values=({'neutron:ovn-metadata-id': 'b2bdd40e-63a2-5a14-9aa4-7df5929ce52d'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.856 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.857 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
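The two transactions above are ovsdbapp's generic map-column commands: DbAddCommand merges a key into Chassis_Private.external_ids (and is reported as "Transaction caused no change" when the key is already present with the same value), while DbSetCommand overwrites the given keys. A hedged sketch of equivalent calls follows, reusing the sb handle assumed in the earlier connection sketch together with the record UUID and values shown in the log.

    # Hedged sketch of the external_ids updates logged above (sb is assumed to be an
    # OvnSbApiIdlImpl instance as in the earlier connection sketch).
    CHASSIS_PRIVATE = '515e0717-8baa-40e6-ac30-5fb148626504'   # record UUID from the log

    # db_add merges into the map column; it is a no-op when the key/value already
    # matches, which is why the log reports "Transaction caused no change".
    sb.db_add('Chassis_Private', CHASSIS_PRIVATE, 'external_ids',
              {'neutron:ovn-metadata-id': 'b2bdd40e-63a2-5a14-9aa4-7df5929ce52d'}
              ).execute(check_error=True)

    # db_set overwrites the listed keys in the same column.
    sb.db_set('Chassis_Private', CHASSIS_PRIVATE,
              ('external_ids', {'neutron:ovn-bridge': 'br-int'})
              ).execute(check_error=True)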
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.866 159483 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.866 159483 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.866 159483 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.867 159483 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.867 159483 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.867 159483 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.867 159483 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.868 159483 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.868 159483 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.868 159483 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.869 159483 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.869 159483 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.869 159483 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.869 159483 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.870 159483 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.870 159483 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.870 159483 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.870 159483 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.871 159483 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.871 159483 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.871 159483 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.871 159483 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.871 159483 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.872 159483 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.872 159483 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.872 159483 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.873 159483 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.873 159483 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.873 159483 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.873 159483 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.874 159483 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.874 159483 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.874 159483 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.874 159483 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.875 159483 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.875 159483 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.875 159483 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.876 159483 DEBUG oslo_service.service [-] host                           = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.876 159483 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.876 159483 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.876 159483 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.876 159483 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.877 159483 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.877 159483 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.877 159483 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.877 159483 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.878 159483 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.878 159483 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.878 159483 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.878 159483 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.878 159483 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.879 159483 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.879 159483 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.879 159483 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.879 159483 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.879 159483 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.880 159483 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.880 159483 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.880 159483 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.880 159483 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.881 159483 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.881 159483 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.881 159483 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.881 159483 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.882 159483 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.882 159483 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.882 159483 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.882 159483 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.882 159483 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.883 159483 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.883 159483 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.883 159483 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.883 159483 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.884 159483 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.884 159483 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.884 159483 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.884 159483 DEBUG oslo_service.service [-] nova_metadata_protocol         = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.885 159483 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.885 159483 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.885 159483 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.885 159483 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.886 159483 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.886 159483 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.886 159483 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.886 159483 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.886 159483 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.887 159483 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.887 159483 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.887 159483 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.887 159483 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.887 159483 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.888 159483 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.888 159483 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.888 159483 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.888 159483 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.888 159483 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.889 159483 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.889 159483 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.889 159483 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.889 159483 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.889 159483 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.890 159483 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.890 159483 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.890 159483 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.890 159483 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.890 159483 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.891 159483 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.891 159483 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.891 159483 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.891 159483 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.892 159483 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.892 159483 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.892 159483 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.892 159483 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.893 159483 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.893 159483 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.893 159483 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.893 159483 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.894 159483 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.894 159483 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.894 159483 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.894 159483 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.895 159483 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.895 159483 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.895 159483 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.895 159483 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.896 159483 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.896 159483 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.896 159483 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.896 159483 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.897 159483 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.897 159483 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.897 159483 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.897 159483 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.898 159483 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.898 159483 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.898 159483 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.898 159483 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.899 159483 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.899 159483 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.899 159483 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.899 159483 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.899 159483 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.900 159483 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.900 159483 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.900 159483 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.900 159483 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.901 159483 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.901 159483 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.901 159483 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.901 159483 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.901 159483 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.902 159483 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.902 159483 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.902 159483 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.903 159483 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.903 159483 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.903 159483 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.903 159483 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.903 159483 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.904 159483 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.904 159483 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.904 159483 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.904 159483 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.905 159483 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.905 159483 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.905 159483 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.905 159483 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.906 159483 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.906 159483 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.906 159483 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.906 159483 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.906 159483 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.907 159483 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.907 159483 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.907 159483 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.907 159483 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.908 159483 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.908 159483 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.908 159483 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.908 159483 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.908 159483 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.909 159483 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.909 159483 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.909 159483 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.910 159483 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.910 159483 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.910 159483 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.910 159483 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.910 159483 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.911 159483 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.911 159483 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.911 159483 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.911 159483 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.911 159483 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.912 159483 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.912 159483 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.912 159483 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.912 159483 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.912 159483 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.912 159483 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.913 159483 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.914 159483 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.915 159483 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.916 159483 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.917 159483 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.918 159483 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.919 159483 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.919 159483 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.919 159483 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.919 159483 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.919 159483 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.919 159483 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.920 159483 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.921 159483 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.922 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.923 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.924 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.925 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.926 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.926 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.926 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.926 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.926 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.926 159483 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.927 159483 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.927 159483 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.927 159483 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.927 159483 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:26:04 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:26:04.927 159483 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 02 09:26:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62905 DF PROTO=TCP SPT=58678 DPT=9882 SEQ=1205215035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F79220000000001030307) 
Dec 02 09:26:09 np0005541914.localdomain sshd[159607]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:26:09 np0005541914.localdomain sshd[159607]: Accepted publickey for zuul from 192.168.122.30 port 37394 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:26:09 np0005541914.localdomain systemd-logind[760]: New session 52 of user zuul.
Dec 02 09:26:09 np0005541914.localdomain systemd[1]: Started Session 52 of User zuul.
Dec 02 09:26:09 np0005541914.localdomain sshd[159607]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:26:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41489 DF PROTO=TCP SPT=36486 DPT=9100 SEQ=1409631211 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F87230000000001030307) 
Dec 02 09:26:10 np0005541914.localdomain python3.9[159700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:26:12 np0005541914.localdomain sudo[159794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trrwtfrfpyypwbdnwopsulzhoqfcvlxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667571.874551-64-70392508703450/AnsiballZ_command.py
Dec 02 09:26:12 np0005541914.localdomain sudo[159794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:12 np0005541914.localdomain python3.9[159796]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:26:12 np0005541914.localdomain sudo[159794]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:13 np0005541914.localdomain sudo[159899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wrfzpvsjiwetwcquowqptnoovtanjnqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667572.7932973-88-19807800442609/AnsiballZ_command.py
Dec 02 09:26:13 np0005541914.localdomain sudo[159899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52041 DF PROTO=TCP SPT=44504 DPT=9105 SEQ=3672999032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F92630000000001030307) 
Dec 02 09:26:13 np0005541914.localdomain python3.9[159901]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:26:13 np0005541914.localdomain systemd[1]: tmp-crun.4QOz1O.mount: Deactivated successfully.
Dec 02 09:26:13 np0005541914.localdomain systemd[1]: libpod-c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a.scope: Deactivated successfully.
Dec 02 09:26:13 np0005541914.localdomain podman[159902]: 2025-12-02 09:26:13.443148779 +0000 UTC m=+0.093373860 container died c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, name=rhosp17/openstack-nova-libvirt, tcib_managed=true, build-date=2025-11-19T00:35:22Z, release=1761123044, version=17.1.12, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, batch=17.1_20251118.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 02 09:26:13 np0005541914.localdomain podman[159902]: 2025-12-02 09:26:13.483065153 +0000 UTC m=+0.133290214 container cleanup c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, build-date=2025-11-19T00:35:22Z, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, maintainer=OpenStack TripleO Team, release=1761123044, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, vcs-type=git, version=17.1.12, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp17/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.12 17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, tcib_managed=true, vendor=Red Hat, Inc.)
Dec 02 09:26:13 np0005541914.localdomain sudo[159899]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:13 np0005541914.localdomain podman[159917]: 2025-12-02 09:26:13.53855346 +0000 UTC m=+0.084441153 container remove c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, build-date=2025-11-19T00:35:22Z, vcs-type=git, version=17.1.12, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, release=1761123044, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, batch=17.1_20251118.1, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.12 17.1_20251118.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp17/openstack-nova-libvirt, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Dec 02 09:26:13 np0005541914.localdomain systemd[1]: libpod-conmon-c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a.scope: Deactivated successfully.
Dec 02 09:26:14 np0005541914.localdomain systemd[1]: tmp-crun.fefopQ.mount: Deactivated successfully.
Dec 02 09:26:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f14a2782138c084b8d1f9a2d1c3241237dbc098d9496c81144c959b54b35a260-merged.mount: Deactivated successfully.
Dec 02 09:26:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c02a8a11b94227111c66c22001221e662ea333a2c613bf3410586b68e637798a-userdata-shm.mount: Deactivated successfully.
Dec 02 09:26:14 np0005541914.localdomain sudo[160021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-guaqnjvuvvvitugzmwanavglzgptgrxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667573.9567788-119-216287023713153/AnsiballZ_systemd_service.py
Dec 02 09:26:14 np0005541914.localdomain sudo[160021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:14 np0005541914.localdomain python3.9[160023]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:26:14 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:26:14 np0005541914.localdomain systemd-rc-local-generator[160048]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:26:14 np0005541914.localdomain systemd-sysv-generator[160052]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:26:14 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:26:15 np0005541914.localdomain sudo[160021]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:15 np0005541914.localdomain python3.9[160148]: ansible-ansible.builtin.service_facts Invoked
Dec 02 09:26:15 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44718 DF PROTO=TCP SPT=55490 DPT=9105 SEQ=206652984 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53F9D230000000001030307) 
Dec 02 09:26:15 np0005541914.localdomain network[160165]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 09:26:15 np0005541914.localdomain network[160166]: 'network-scripts' will be removed from distribution in near future.
Dec 02 09:26:15 np0005541914.localdomain network[160167]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 09:26:17 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:26:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62907 DF PROTO=TCP SPT=58678 DPT=9882 SEQ=1205215035 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FA9220000000001030307) 
Dec 02 09:26:20 np0005541914.localdomain sudo[160367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-biqeduqoadigfijvvkfyfmngdgshdkyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667580.556658-176-78826503838910/AnsiballZ_systemd_service.py
Dec 02 09:26:20 np0005541914.localdomain sudo[160367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:21 np0005541914.localdomain python3.9[160369]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:26:21 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:26:21 np0005541914.localdomain systemd-rc-local-generator[160394]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:26:21 np0005541914.localdomain systemd-sysv-generator[160400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:26:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:26:21 np0005541914.localdomain systemd[1]: Stopped target tripleo_nova_libvirt.target.
Dec 02 09:26:21 np0005541914.localdomain sudo[160367]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:22 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56729 DF PROTO=TCP SPT=60572 DPT=9101 SEQ=3092294769 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FB5220000000001030307) 
Dec 02 09:26:22 np0005541914.localdomain sudo[160499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-roskoipdomwniljgpbtyjzyghphqvhqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667582.0047472-176-267886340783193/AnsiballZ_systemd_service.py
Dec 02 09:26:22 np0005541914.localdomain sudo[160499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:22 np0005541914.localdomain python3.9[160501]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:26:23 np0005541914.localdomain sudo[160499]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:23 np0005541914.localdomain sudo[160592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxyebdpynddbabzjmycegbkpwfjafvya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667583.7363656-176-180665969244724/AnsiballZ_systemd_service.py
Dec 02 09:26:23 np0005541914.localdomain sudo[160592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:24 np0005541914.localdomain python3.9[160594]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:26:24 np0005541914.localdomain sudo[160592]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:24 np0005541914.localdomain sudo[160685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kwadpmmykpcrgevygqygzdzxkdsbpbpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667584.4309478-176-223343199675972/AnsiballZ_systemd_service.py
Dec 02 09:26:24 np0005541914.localdomain sudo[160685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:24 np0005541914.localdomain python3.9[160687]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:26:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:26:26 np0005541914.localdomain sshd[160690]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:26:26 np0005541914.localdomain sudo[160685]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:26 np0005541914.localdomain systemd[1]: tmp-crun.sTFtFC.mount: Deactivated successfully.
Dec 02 09:26:26 np0005541914.localdomain podman[160689]: 2025-12-02 09:26:26.10252743 +0000 UTC m=+0.098410176 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:26:26 np0005541914.localdomain podman[160689]: 2025-12-02 09:26:26.17592522 +0000 UTC m=+0.171807676 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:26:26 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:26:26 np0005541914.localdomain sshd[160690]: Invalid user ubuntu from 45.148.10.240 port 43986
Dec 02 09:26:26 np0005541914.localdomain sshd[160690]: Connection closed by invalid user ubuntu 45.148.10.240 port 43986 [preauth]
Dec 02 09:26:26 np0005541914.localdomain sudo[160804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hfxrxvqzgoxeueoegexhyqsbyrfdhbes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667586.2445772-176-132568030970165/AnsiballZ_systemd_service.py
Dec 02 09:26:26 np0005541914.localdomain sudo[160804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:26 np0005541914.localdomain python3.9[160806]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:26:27 np0005541914.localdomain sudo[160804]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59594 DF PROTO=TCP SPT=34650 DPT=9101 SEQ=2590249443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FC9220000000001030307) 
Dec 02 09:26:27 np0005541914.localdomain sudo[160897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-chkpsvjtxdwptdknyjnhrylbczerqdqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667587.1525154-176-94118974169670/AnsiballZ_systemd_service.py
Dec 02 09:26:27 np0005541914.localdomain sudo[160897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52044 DF PROTO=TCP SPT=44504 DPT=9105 SEQ=3672999032 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FCB220000000001030307) 
Dec 02 09:26:27 np0005541914.localdomain python3.9[160899]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:26:27 np0005541914.localdomain sudo[160897]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:28 np0005541914.localdomain sudo[160990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uebgnebqevgwsaaudkxkabpqbhidiuvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667587.8876555-176-260642960217693/AnsiballZ_systemd_service.py
Dec 02 09:26:28 np0005541914.localdomain sudo[160990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:28 np0005541914.localdomain python3.9[160992]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:26:28 np0005541914.localdomain sudo[160990]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:29 np0005541914.localdomain sudo[161083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fglyekrubfklvuxzqbckyszaqyzivnyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667588.8776648-332-137665298256986/AnsiballZ_file.py
Dec 02 09:26:29 np0005541914.localdomain sudo[161083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:29 np0005541914.localdomain python3.9[161085]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:29 np0005541914.localdomain sudo[161083]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:30 np0005541914.localdomain sudo[161175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jjgbrcypnhcuhqbmqzwbmdrvveitjkrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667590.1963177-332-28908174565332/AnsiballZ_file.py
Dec 02 09:26:30 np0005541914.localdomain sudo[161175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:30 np0005541914.localdomain python3.9[161177]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:30 np0005541914.localdomain sudo[161175]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27390 DF PROTO=TCP SPT=47454 DPT=9102 SEQ=1091343766 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FD9220000000001030307) 
Dec 02 09:26:31 np0005541914.localdomain sudo[161267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jhszfzxvadcnhdywuhsdjympeaxiclog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667590.7914143-332-154994903427290/AnsiballZ_file.py
Dec 02 09:26:31 np0005541914.localdomain sudo[161267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:31 np0005541914.localdomain python3.9[161269]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:31 np0005541914.localdomain sudo[161267]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:26:32 np0005541914.localdomain podman[161270]: 2025-12-02 09:26:32.066722432 +0000 UTC m=+0.066203149 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 02 09:26:32 np0005541914.localdomain podman[161270]: 2025-12-02 09:26:32.101894229 +0000 UTC m=+0.101374926 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 09:26:32 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:26:32 np0005541914.localdomain sudo[161377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rgwwewlyqvowupyztppcngcqbqgyzfan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667592.098463-332-174818944484420/AnsiballZ_file.py
Dec 02 09:26:32 np0005541914.localdomain sudo[161377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:32 np0005541914.localdomain python3.9[161379]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:32 np0005541914.localdomain sudo[161377]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:32 np0005541914.localdomain sudo[161469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbuhrhsdmngmhbsvobdmicarzatjhkgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667592.638645-332-65690470949789/AnsiballZ_file.py
Dec 02 09:26:32 np0005541914.localdomain sudo[161469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:33 np0005541914.localdomain python3.9[161471]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:33 np0005541914.localdomain sudo[161469]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:33 np0005541914.localdomain sudo[161561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mhnvlxyltsgrnndpbnxrqhznjrkblioa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667593.2289257-332-105045277445860/AnsiballZ_file.py
Dec 02 09:26:33 np0005541914.localdomain sudo[161561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:33 np0005541914.localdomain python3.9[161563]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:33 np0005541914.localdomain sudo[161561]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:34 np0005541914.localdomain sudo[161653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rlansycfpngqbfccydtdyhecgjsrnzpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667593.8118777-332-95895835849688/AnsiballZ_file.py
Dec 02 09:26:34 np0005541914.localdomain sudo[161653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:34 np0005541914.localdomain python3.9[161655]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:34 np0005541914.localdomain sudo[161653]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17563 DF PROTO=TCP SPT=36876 DPT=9882 SEQ=4223304392 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FE6620000000001030307) 
Dec 02 09:26:34 np0005541914.localdomain sudo[161745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxdpghmqqfuxbjiyjhmrpuzuyyrdgipm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667594.403036-482-100742282613766/AnsiballZ_file.py
Dec 02 09:26:34 np0005541914.localdomain sudo[161745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:34 np0005541914.localdomain python3.9[161747]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:34 np0005541914.localdomain sudo[161745]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:35 np0005541914.localdomain sudo[161837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vggybxsoekidgmwdmehoujeqibxyuiae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667594.9854445-482-89413882285467/AnsiballZ_file.py
Dec 02 09:26:35 np0005541914.localdomain sudo[161837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:35 np0005541914.localdomain python3.9[161839]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:35 np0005541914.localdomain sudo[161837]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:35 np0005541914.localdomain sudo[161929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-affbaxoigjavfxlejmcnibmjpdlpsqss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667595.503026-482-236323331401672/AnsiballZ_file.py
Dec 02 09:26:35 np0005541914.localdomain sudo[161929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:35 np0005541914.localdomain python3.9[161931]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:35 np0005541914.localdomain sudo[161929]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:36 np0005541914.localdomain sudo[162021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvuytpshbyppazolrhbknphjdvgmgtlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667596.0826602-482-245911770186640/AnsiballZ_file.py
Dec 02 09:26:36 np0005541914.localdomain sudo[162021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:36 np0005541914.localdomain python3.9[162023]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:36 np0005541914.localdomain sudo[162021]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17564 DF PROTO=TCP SPT=36876 DPT=9882 SEQ=4223304392 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FEE620000000001030307) 
Dec 02 09:26:37 np0005541914.localdomain sudo[162113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhxnohbvyrpzauhesydruhnpufpsrmfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667596.74626-482-199107801233914/AnsiballZ_file.py
Dec 02 09:26:37 np0005541914.localdomain sudo[162113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:37 np0005541914.localdomain python3.9[162115]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:37 np0005541914.localdomain sudo[162113]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:37 np0005541914.localdomain sudo[162205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uvpfzkxtapemgqcohszhltpyfnkmwgem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667597.3335507-482-215704649882150/AnsiballZ_file.py
Dec 02 09:26:37 np0005541914.localdomain sudo[162205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:37 np0005541914.localdomain python3.9[162207]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:37 np0005541914.localdomain sudo[162205]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:38 np0005541914.localdomain sudo[162297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqavnbiosqggfvvtmcznivdnkyapafop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667597.861608-482-123895393052179/AnsiballZ_file.py
Dec 02 09:26:38 np0005541914.localdomain sudo[162297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:38 np0005541914.localdomain python3.9[162299]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:26:38 np0005541914.localdomain sudo[162297]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:39 np0005541914.localdomain sudo[162389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxltkfqvrvqujlxvchwfpvazyqfwfryn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667598.7751803-635-60935529035774/AnsiballZ_command.py
Dec 02 09:26:39 np0005541914.localdomain sudo[162389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:39 np0005541914.localdomain python3.9[162391]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:26:39 np0005541914.localdomain sudo[162389]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:39 np0005541914.localdomain python3.9[162483]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 02 09:26:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62854 DF PROTO=TCP SPT=54384 DPT=9100 SEQ=970801497 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD53FFD220000000001030307) 
Dec 02 09:26:40 np0005541914.localdomain sudo[162573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xborhllonjxjmrbczozmgqztxtfmupia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667600.5113344-689-14588693754215/AnsiballZ_systemd_service.py
Dec 02 09:26:40 np0005541914.localdomain sudo[162573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:41 np0005541914.localdomain python3.9[162575]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:26:41 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:26:41 np0005541914.localdomain systemd-rc-local-generator[162597]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:26:41 np0005541914.localdomain systemd-sysv-generator[162603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:26:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:26:41 np0005541914.localdomain sudo[162573]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:42 np0005541914.localdomain sudo[162701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blfikknuwgqjleiqghrasywbxeavghdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667601.6191943-713-90614310210442/AnsiballZ_command.py
Dec 02 09:26:42 np0005541914.localdomain sudo[162701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:42 np0005541914.localdomain python3.9[162703]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:26:42 np0005541914.localdomain sudo[162701]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4092 DF PROTO=TCP SPT=46746 DPT=9105 SEQ=2375097227 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54007630000000001030307) 
Dec 02 09:26:43 np0005541914.localdomain sudo[162794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-obvxlhdjoygronfmkabhweuuboqghoca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667603.0000665-713-41855379339650/AnsiballZ_command.py
Dec 02 09:26:43 np0005541914.localdomain sudo[162794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:43 np0005541914.localdomain python3.9[162796]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:26:43 np0005541914.localdomain sudo[162794]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:44 np0005541914.localdomain sudo[162887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwungkapxqupvvbozulyhtkxagmkknma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667603.83203-713-136953036700497/AnsiballZ_command.py
Dec 02 09:26:44 np0005541914.localdomain sudo[162887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:44 np0005541914.localdomain python3.9[162889]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:26:44 np0005541914.localdomain sudo[162887]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:44 np0005541914.localdomain sudo[162980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hrprejnrbrcofwshknadwncfxhpgzkla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667604.4183836-713-24618333032998/AnsiballZ_command.py
Dec 02 09:26:44 np0005541914.localdomain sudo[162980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:44 np0005541914.localdomain python3.9[162982]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:26:44 np0005541914.localdomain sudo[162980]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:45 np0005541914.localdomain sudo[163073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-purkiegqfzsycoxxffsegegvnxwkeslr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667604.9843154-713-231505847651992/AnsiballZ_command.py
Dec 02 09:26:45 np0005541914.localdomain sudo[163073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:45 np0005541914.localdomain python3.9[163075]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:26:45 np0005541914.localdomain sudo[163073]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:45 np0005541914.localdomain sudo[163166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uysrnfhfitzcxheotaceosnsdnwlymxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667605.59343-713-56581118876469/AnsiballZ_command.py
Dec 02 09:26:45 np0005541914.localdomain sudo[163166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:45 np0005541914.localdomain python3.9[163168]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:26:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44006 DF PROTO=TCP SPT=34134 DPT=9102 SEQ=3684987153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54012BF0000000001030307) 
Dec 02 09:26:46 np0005541914.localdomain sudo[163166]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:46 np0005541914.localdomain sudo[163259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tcfvleoomsulmyunudyeqrmayodniosu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667606.1262326-713-59708836018367/AnsiballZ_command.py
Dec 02 09:26:46 np0005541914.localdomain sudo[163259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:46 np0005541914.localdomain python3.9[163261]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:26:46 np0005541914.localdomain sudo[163259]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:48 np0005541914.localdomain sudo[163353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dzafiiqvmsrtlwgjcybtdhctccqoqncb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667607.690493-875-30143368372254/AnsiballZ_getent.py
Dec 02 09:26:48 np0005541914.localdomain sudo[163353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:48 np0005541914.localdomain python3.9[163355]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 02 09:26:48 np0005541914.localdomain sudo[163353]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:49 np0005541914.localdomain sudo[163446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rnnwdjftglkbmxsuaumzxvrjvhsjmzfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667608.7340438-901-270197195848413/AnsiballZ_group.py
Dec 02 09:26:49 np0005541914.localdomain sudo[163446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44008 DF PROTO=TCP SPT=34134 DPT=9102 SEQ=3684987153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5401EE20000000001030307) 
Dec 02 09:26:49 np0005541914.localdomain python3.9[163448]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 02 09:26:49 np0005541914.localdomain groupadd[163449]: group added to /etc/group: name=libvirt, GID=42473
Dec 02 09:26:49 np0005541914.localdomain systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 76.6 (255 of 333 items), suggesting rotation.
Dec 02 09:26:49 np0005541914.localdomain systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 02 09:26:49 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:26:49 np0005541914.localdomain groupadd[163449]: group added to /etc/gshadow: name=libvirt
Dec 02 09:26:49 np0005541914.localdomain groupadd[163449]: new group: name=libvirt, GID=42473
Dec 02 09:26:49 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:26:49 np0005541914.localdomain sudo[163446]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:50 np0005541914.localdomain sudo[163545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wcjwuqgibrkanefyyycgpyajxskpkarn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667609.5689337-922-179813299710212/AnsiballZ_user.py
Dec 02 09:26:50 np0005541914.localdomain sudo[163545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:50 np0005541914.localdomain python3.9[163547]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541914.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 02 09:26:50 np0005541914.localdomain useradd[163549]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 02 09:26:50 np0005541914.localdomain sudo[163545]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:51 np0005541914.localdomain sudo[163645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pjlvvbcqykczdljmauebvipecbqekawd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667611.2933855-955-96170875313002/AnsiballZ_setup.py
Dec 02 09:26:51 np0005541914.localdomain sudo[163645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59596 DF PROTO=TCP SPT=34650 DPT=9101 SEQ=2590249443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54029230000000001030307) 
Dec 02 09:26:51 np0005541914.localdomain python3.9[163647]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:26:52 np0005541914.localdomain sudo[163645]: pam_unix(sudo:session): session closed for user root
Dec 02 09:26:53 np0005541914.localdomain sudo[163699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yogzvuaervvbtvdyavgsfllzlhpfyhyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667611.2933855-955-96170875313002/AnsiballZ_dnf.py
Dec 02 09:26:53 np0005541914.localdomain sudo[163699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:26:53 np0005541914.localdomain python3.9[163701]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:26:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:26:57 np0005541914.localdomain podman[163722]: 2025-12-02 09:26:57.098236682 +0000 UTC m=+0.097432445 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:26:57 np0005541914.localdomain podman[163722]: 2025-12-02 09:26:57.161908352 +0000 UTC m=+0.161104095 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 09:26:57 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:26:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25836 DF PROTO=TCP SPT=58862 DPT=9101 SEQ=1538546494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5403E630000000001030307) 
Dec 02 09:27:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44010 DF PROTO=TCP SPT=34134 DPT=9102 SEQ=3684987153 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5404F220000000001030307) 
Dec 02 09:27:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:27:03 np0005541914.localdomain sudo[163808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:27:03 np0005541914.localdomain sudo[163808]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:27:03 np0005541914.localdomain sudo[163808]: pam_unix(sudo:session): session closed for user root
Dec 02 09:27:03 np0005541914.localdomain systemd[1]: tmp-crun.65dltK.mount: Deactivated successfully.
Dec 02 09:27:03 np0005541914.localdomain podman[163797]: 2025-12-02 09:27:03.107957352 +0000 UTC m=+0.104672198 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 02 09:27:03 np0005541914.localdomain podman[163797]: 2025-12-02 09:27:03.113431501 +0000 UTC m=+0.110146307 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:27:03 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:27:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:27:03.130 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:27:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:27:03.130 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:27:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:27:03.130 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:27:03 np0005541914.localdomain sudo[163829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:27:03 np0005541914.localdomain sudo[163829]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:27:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35199 DF PROTO=TCP SPT=55990 DPT=9882 SEQ=3044870298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54057900000000001030307) 
Dec 02 09:27:03 np0005541914.localdomain sudo[163829]: pam_unix(sudo:session): session closed for user root
Dec 02 09:27:04 np0005541914.localdomain sudo[163880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:27:04 np0005541914.localdomain sudo[163880]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:27:04 np0005541914.localdomain sudo[163880]: pam_unix(sudo:session): session closed for user root
Dec 02 09:27:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35200 DF PROTO=TCP SPT=55990 DPT=9882 SEQ=3044870298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5405BA20000000001030307) 
Dec 02 09:27:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35201 DF PROTO=TCP SPT=55990 DPT=9882 SEQ=3044870298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54063A20000000001030307) 
Dec 02 09:27:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30292 DF PROTO=TCP SPT=33450 DPT=9100 SEQ=3336536510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54071220000000001030307) 
Dec 02 09:27:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:27:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6000.1 total, 600.0 interval
                                                          Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf57610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf57610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.010       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf57610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x56102bf562d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
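
                                                          (Note: the "** Compaction Stats [...] **" blocks above are one complete periodic RocksDB stats dump from a ceph-osd process, one block per column family. As a minimal, purely illustrative sketch for post-processing a saved copy of this journal — the file name osd-journal.txt and the idea of parsing the journal this way are assumptions, not part of this log — one could count the dumps and pull the last "Cumulative writes" line like so:

                                                          #!/usr/bin/env python3
                                                          # Sketch: summarize RocksDB "DUMPING STATS" blocks in a saved journal,
                                                          # e.g. `journalctl -u ceph-osd@* > osd-journal.txt` (assumed workflow).
                                                          import re
                                                          import sys

                                                          DUMP_MARK = "------- DUMPING STATS -------"
                                                          WRITES_RE = re.compile(r"Cumulative writes:\s+(\S+) writes,\s+(\S+) keys")

                                                          def summarize(path: str) -> None:
                                                              dumps = 0                # how many stats dumps appear in the file
                                                              last_writes = None       # (writes, keys) from the most recent dump
                                                              with open(path, "r", errors="replace") as fh:
                                                                  for line in fh:
                                                                      if DUMP_MARK in line:
                                                                          dumps += 1
                                                                      m = WRITES_RE.search(line)
                                                                      if m:
                                                                          last_writes = (m.group(1), m.group(2))
                                                              print(f"stats dumps seen : {dumps}")
                                                              if last_writes:
                                                                  print(f"last cumulative  : {last_writes[0]} writes, {last_writes[1]} keys")

                                                          if __name__ == "__main__":
                                                              summarize(sys.argv[1] if len(sys.argv) > 1 else "osd-journal.txt")

                                                          Run against the journal shown here it would report the dump count and, for the last dump, "5767 writes, 25K keys"; it reads only the text of the log and uses no Ceph or RocksDB APIs.)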
Dec 02 09:27:12 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62855 DF PROTO=TCP SPT=54384 DPT=9100 SEQ=970801497 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5407B220000000001030307) 
Dec 02 09:27:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:27:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6000.2 total, 600.0 interval
                                                          Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.006       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.023       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x562050321610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x562050321610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.033       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x562050321610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.037       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.2 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x5620503202d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Dec 02 09:27:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9675 DF PROTO=TCP SPT=40786 DPT=9102 SEQ=3376284197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54087EE0000000001030307) 
Dec 02 09:27:18 np0005541914.localdomain kernel: SELinux:  Converting 2746 SID table entries...
Dec 02 09:27:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35203 DF PROTO=TCP SPT=55990 DPT=9882 SEQ=3044870298 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54093220000000001030307) 
Dec 02 09:27:18 np0005541914.localdomain kernel: SELinux:  Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped).
Dec 02 09:27:18 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 09:27:18 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 09:27:18 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 09:27:18 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 09:27:18 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 09:27:18 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 09:27:18 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 09:27:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25838 DF PROTO=TCP SPT=58862 DPT=9101 SEQ=1538546494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5409F230000000001030307) 
Dec 02 09:27:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28802 DF PROTO=TCP SPT=57196 DPT=9101 SEQ=654126358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540B3630000000001030307) 
Dec 02 09:27:28 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=19 res=1
Dec 02 09:27:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:27:28 np0005541914.localdomain systemd[1]: tmp-crun.YDaw87.mount: Deactivated successfully.
Dec 02 09:27:28 np0005541914.localdomain podman[164913]: 2025-12-02 09:27:28.235273258 +0000 UTC m=+0.113728628 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:27:28 np0005541914.localdomain podman[164913]: 2025-12-02 09:27:28.343074224 +0000 UTC m=+0.221529614 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 02 09:27:28 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:27:29 np0005541914.localdomain kernel: SELinux:  Converting 2749 SID table entries...
Dec 02 09:27:29 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 09:27:29 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 09:27:29 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 09:27:29 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 09:27:29 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 09:27:29 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 09:27:29 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 09:27:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9679 DF PROTO=TCP SPT=40786 DPT=9102 SEQ=3376284197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540C3220000000001030307) 
Dec 02 09:27:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15954 DF PROTO=TCP SPT=45510 DPT=9882 SEQ=2592087443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540CCC10000000001030307) 
Dec 02 09:27:33 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=20 res=1
Dec 02 09:27:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:27:34 np0005541914.localdomain podman[164947]: 2025-12-02 09:27:34.109829767 +0000 UTC m=+0.088751830 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:27:34 np0005541914.localdomain podman[164947]: 2025-12-02 09:27:34.149893372 +0000 UTC m=+0.128815435 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 02 09:27:34 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:27:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15955 DF PROTO=TCP SPT=45510 DPT=9882 SEQ=2592087443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540D0E20000000001030307) 
Dec 02 09:27:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15956 DF PROTO=TCP SPT=45510 DPT=9882 SEQ=2592087443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540D8E20000000001030307) 
Dec 02 09:27:37 np0005541914.localdomain kernel: SELinux:  Converting 2749 SID table entries...
Dec 02 09:27:37 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 09:27:37 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 09:27:37 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 09:27:37 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 09:27:37 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 09:27:37 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 09:27:37 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 09:27:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8420 DF PROTO=TCP SPT=60342 DPT=9100 SEQ=550164076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540E7220000000001030307) 
Dec 02 09:27:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61809 DF PROTO=TCP SPT=45892 DPT=9105 SEQ=704966943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540F1E30000000001030307) 
Dec 02 09:27:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37827 DF PROTO=TCP SPT=37188 DPT=9102 SEQ=3609529685 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD540FD1E0000000001030307) 
Dec 02 09:27:46 np0005541914.localdomain kernel: SELinux:  Converting 2749 SID table entries...
Dec 02 09:27:46 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 09:27:46 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 09:27:46 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 09:27:46 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 09:27:46 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 09:27:46 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 09:27:46 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 09:27:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15958 DF PROTO=TCP SPT=45510 DPT=9882 SEQ=2592087443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54109230000000001030307) 
Dec 02 09:27:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28804 DF PROTO=TCP SPT=57196 DPT=9101 SEQ=654126358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54113220000000001030307) 
Dec 02 09:27:55 np0005541914.localdomain kernel: SELinux:  Converting 2749 SID table entries...
Dec 02 09:27:55 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 09:27:55 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 09:27:55 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 09:27:55 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 09:27:55 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 09:27:55 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 09:27:55 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 09:27:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60997 DF PROTO=TCP SPT=34648 DPT=9101 SEQ=1711180692 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54128A20000000001030307) 
Dec 02 09:27:58 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=23 res=1
Dec 02 09:27:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:27:59 np0005541914.localdomain podman[164994]: 2025-12-02 09:27:59.081976033 +0000 UTC m=+0.078487852 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125)
Dec 02 09:27:59 np0005541914.localdomain podman[164994]: 2025-12-02 09:27:59.149011996 +0000 UTC m=+0.145523815 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec 02 09:27:59 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:28:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37831 DF PROTO=TCP SPT=37188 DPT=9102 SEQ=3609529685 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54139220000000001030307) 
Dec 02 09:28:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:28:03.132 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:28:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:28:03.134 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:28:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:28:03.134 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:28:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39431 DF PROTO=TCP SPT=49376 DPT=9882 SEQ=2903844089 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54141F10000000001030307) 
Dec 02 09:28:04 np0005541914.localdomain kernel: SELinux:  Converting 2749 SID table entries...
Dec 02 09:28:04 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 09:28:04 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 09:28:04 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 09:28:04 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 09:28:04 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 09:28:04 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 09:28:04 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 09:28:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39432 DF PROTO=TCP SPT=49376 DPT=9882 SEQ=2903844089 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54145E20000000001030307) 
Dec 02 09:28:04 np0005541914.localdomain sudo[165024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:28:04 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=24 res=1
Dec 02 09:28:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:28:04 np0005541914.localdomain sudo[165024]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:28:04 np0005541914.localdomain sudo[165024]: pam_unix(sudo:session): session closed for user root
Dec 02 09:28:04 np0005541914.localdomain sudo[165049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:28:04 np0005541914.localdomain sudo[165049]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:28:04 np0005541914.localdomain podman[165042]: 2025-12-02 09:28:04.798617685 +0000 UTC m=+0.085506979 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:28:04 np0005541914.localdomain podman[165042]: 2025-12-02 09:28:04.808598468 +0000 UTC m=+0.095487752 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 09:28:04 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:28:04 np0005541914.localdomain systemd-rc-local-generator[165104]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:28:04 np0005541914.localdomain systemd-sysv-generator[165107]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:28:04 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:28:05 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:28:05 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:28:05 np0005541914.localdomain sudo[165049]: pam_unix(sudo:session): session closed for user root
Dec 02 09:28:05 np0005541914.localdomain systemd-rc-local-generator[165172]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:28:05 np0005541914.localdomain systemd-sysv-generator[165176]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:28:05 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:28:06 np0005541914.localdomain sudo[165192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:28:06 np0005541914.localdomain sudo[165192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:28:06 np0005541914.localdomain sudo[165192]: pam_unix(sudo:session): session closed for user root
Dec 02 09:28:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39433 DF PROTO=TCP SPT=49376 DPT=9882 SEQ=2903844089 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5414DE20000000001030307) 
Dec 02 09:28:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25494 DF PROTO=TCP SPT=35460 DPT=9100 SEQ=209492473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5415B220000000001030307) 
Dec 02 09:28:12 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8421 DF PROTO=TCP SPT=60342 DPT=9100 SEQ=550164076 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54165220000000001030307) 
Dec 02 09:28:14 np0005541914.localdomain kernel: SELinux:  Converting 2750 SID table entries...
Dec 02 09:28:14 np0005541914.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Dec 02 09:28:14 np0005541914.localdomain kernel: SELinux:  policy capability open_perms=1
Dec 02 09:28:14 np0005541914.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Dec 02 09:28:14 np0005541914.localdomain kernel: SELinux:  policy capability always_check_network=0
Dec 02 09:28:14 np0005541914.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 02 09:28:14 np0005541914.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 02 09:28:14 np0005541914.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 02 09:28:14 np0005541914.localdomain groupadd[165221]: group added to /etc/group: name=clevis, GID=985
Dec 02 09:28:14 np0005541914.localdomain groupadd[165221]: group added to /etc/gshadow: name=clevis
Dec 02 09:28:14 np0005541914.localdomain groupadd[165221]: new group: name=clevis, GID=985
Dec 02 09:28:14 np0005541914.localdomain useradd[165228]: new user: name=clevis, UID=985, GID=985, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 02 09:28:15 np0005541914.localdomain usermod[165238]: add 'clevis' to group 'tss'
Dec 02 09:28:15 np0005541914.localdomain usermod[165238]: add 'clevis' to shadow group 'tss'
Dec 02 09:28:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5692 DF PROTO=TCP SPT=44704 DPT=9102 SEQ=374549034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541724F0000000001030307) 
Dec 02 09:28:18 np0005541914.localdomain groupadd[165260]: group added to /etc/group: name=dnsmasq, GID=984
Dec 02 09:28:18 np0005541914.localdomain groupadd[165260]: group added to /etc/gshadow: name=dnsmasq
Dec 02 09:28:18 np0005541914.localdomain groupadd[165260]: new group: name=dnsmasq, GID=984
Dec 02 09:28:18 np0005541914.localdomain useradd[165267]: new user: name=dnsmasq, UID=984, GID=984, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 02 09:28:18 np0005541914.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 02 09:28:18 np0005541914.localdomain dbus-broker-launch[755]: avc:  op=load_policy lsm=selinux seqno=25 res=1
Dec 02 09:28:18 np0005541914.localdomain dbus-broker-launch[751]: Noticed file-system modification, trigger reload.
Dec 02 09:28:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39435 DF PROTO=TCP SPT=49376 DPT=9882 SEQ=2903844089 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5417D2C0000000001030307) 
Dec 02 09:28:19 np0005541914.localdomain polkitd[1037]: Reloading rules
Dec 02 09:28:19 np0005541914.localdomain polkitd[1037]: Collecting garbage unconditionally...
Dec 02 09:28:19 np0005541914.localdomain polkitd[1037]: Loading rules from directory /etc/polkit-1/rules.d
Dec 02 09:28:19 np0005541914.localdomain polkitd[1037]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 02 09:28:19 np0005541914.localdomain polkitd[1037]: Finished loading, compiling and executing 5 rules
Dec 02 09:28:19 np0005541914.localdomain polkitd[1037]: Reloading rules
Dec 02 09:28:19 np0005541914.localdomain polkitd[1037]: Collecting garbage unconditionally...
Dec 02 09:28:19 np0005541914.localdomain polkitd[1037]: Loading rules from directory /etc/polkit-1/rules.d
Dec 02 09:28:19 np0005541914.localdomain polkitd[1037]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 02 09:28:19 np0005541914.localdomain polkitd[1037]: Finished loading, compiling and executing 5 rules
Dec 02 09:28:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60999 DF PROTO=TCP SPT=34648 DPT=9101 SEQ=1711180692 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54189220000000001030307) 
Dec 02 09:28:25 np0005541914.localdomain sshd[165449]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:28:25 np0005541914.localdomain sshd[165449]: Received disconnect from 34.78.29.97 port 44832:11: Bye Bye [preauth]
Dec 02 09:28:25 np0005541914.localdomain sshd[165449]: Disconnected from authenticating user root 34.78.29.97 port 44832 [preauth]
Dec 02 09:28:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32955 DF PROTO=TCP SPT=40984 DPT=9101 SEQ=2589123676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5419DE20000000001030307) 
Dec 02 09:28:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:28:30 np0005541914.localdomain podman[165451]: 2025-12-02 09:28:30.097012785 +0000 UTC m=+0.087452448 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller)
Dec 02 09:28:30 np0005541914.localdomain podman[165451]: 2025-12-02 09:28:30.161857788 +0000 UTC m=+0.152297511 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:28:30 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:28:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5696 DF PROTO=TCP SPT=44704 DPT=9102 SEQ=374549034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541AF220000000001030307) 
Dec 02 09:28:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59734 DF PROTO=TCP SPT=40510 DPT=9882 SEQ=4201351893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541B7210000000001030307) 
Dec 02 09:28:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59735 DF PROTO=TCP SPT=40510 DPT=9882 SEQ=4201351893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541BB220000000001030307) 
Dec 02 09:28:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:28:36 np0005541914.localdomain podman[168403]: 2025-12-02 09:28:36.08976473 +0000 UTC m=+0.090008549 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec 02 09:28:36 np0005541914.localdomain podman[168403]: 2025-12-02 09:28:36.122909673 +0000 UTC m=+0.123153492 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 09:28:36 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:28:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59736 DF PROTO=TCP SPT=40510 DPT=9882 SEQ=4201351893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541C3220000000001030307) 
Dec 02 09:28:38 np0005541914.localdomain sshd[169858]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:28:38 np0005541914.localdomain sshd[169858]: Invalid user ubuntu from 45.148.10.240 port 53164
Dec 02 09:28:38 np0005541914.localdomain sshd[169858]: Connection closed by invalid user ubuntu 45.148.10.240 port 53164 [preauth]
Dec 02 09:28:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55659 DF PROTO=TCP SPT=37460 DPT=9100 SEQ=4060318635 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541D1220000000001030307) 
Dec 02 09:28:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63230 DF PROTO=TCP SPT=37626 DPT=9105 SEQ=977405425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541DC220000000001030307) 
Dec 02 09:28:45 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61814 DF PROTO=TCP SPT=45892 DPT=9105 SEQ=704966943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541E7220000000001030307) 
Dec 02 09:28:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59738 DF PROTO=TCP SPT=40510 DPT=9882 SEQ=4201351893 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541F3220000000001030307) 
Dec 02 09:28:52 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32957 DF PROTO=TCP SPT=40984 DPT=9101 SEQ=2589123676 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD541FF220000000001030307) 
Dec 02 09:28:56 np0005541914.localdomain groupadd[182288]: group added to /etc/group: name=ceph, GID=167
Dec 02 09:28:56 np0005541914.localdomain groupadd[182288]: group added to /etc/gshadow: name=ceph
Dec 02 09:28:56 np0005541914.localdomain groupadd[182288]: new group: name=ceph, GID=167
Dec 02 09:28:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1763 DF PROTO=TCP SPT=34954 DPT=9101 SEQ=1749529197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54213220000000001030307) 
Dec 02 09:28:57 np0005541914.localdomain useradd[182294]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Dec 02 09:28:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63233 DF PROTO=TCP SPT=37626 DPT=9105 SEQ=977405425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54215220000000001030307) 
Dec 02 09:29:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: Stopping OpenSSH server daemon...
Dec 02 09:29:01 np0005541914.localdomain sshd[119204]: Received signal 15; terminating.
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: sshd.service: Deactivated successfully.
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: Stopped OpenSSH server daemon.
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: sshd.service: Consumed 1.360s CPU time, read 32.0K from disk, written 0B to disk.
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: Stopped target sshd-keygen.target.
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: Stopping sshd-keygen.target...
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: Reached target sshd-keygen.target.
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: Starting OpenSSH server daemon...
Dec 02 09:29:01 np0005541914.localdomain sshd[183018]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:29:01 np0005541914.localdomain sshd[183018]: Server listening on 0.0.0.0 port 22.
Dec 02 09:29:01 np0005541914.localdomain sshd[183018]: Server listening on :: port 22.
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: Started OpenSSH server daemon.
Dec 02 09:29:01 np0005541914.localdomain podman[183005]: 2025-12-02 09:29:01.071120592 +0000 UTC m=+0.082826494 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:29:01 np0005541914.localdomain podman[183005]: 2025-12-02 09:29:01.111615665 +0000 UTC m=+0.123321617 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31274 DF PROTO=TCP SPT=57088 DPT=9102 SEQ=2967827767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54223220000000001030307) 
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:29:03.134 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:29:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:29:03.134 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:29:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:29:03.135 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:29:03 np0005541914.localdomain systemd-rc-local-generator[183327]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:29:03 np0005541914.localdomain systemd-sysv-generator[183330]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 02 09:29:03 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 09:29:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8158 DF PROTO=TCP SPT=51746 DPT=9882 SEQ=2012140012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54230620000000001030307) 
Dec 02 09:29:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:29:06 np0005541914.localdomain sudo[187341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:29:06 np0005541914.localdomain sudo[187341]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:29:06 np0005541914.localdomain sudo[187341]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:06 np0005541914.localdomain sudo[187496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:29:06 np0005541914.localdomain sudo[187496]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:29:06 np0005541914.localdomain podman[187438]: 2025-12-02 09:29:06.326552503 +0000 UTC m=+0.077939352 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:29:06 np0005541914.localdomain podman[187438]: 2025-12-02 09:29:06.362901017 +0000 UTC m=+0.114287936 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:29:06 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:29:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8159 DF PROTO=TCP SPT=51746 DPT=9882 SEQ=2012140012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54238620000000001030307) 
Dec 02 09:29:06 np0005541914.localdomain sudo[163699]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:06 np0005541914.localdomain sudo[187496]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:09 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49288 DF PROTO=TCP SPT=60504 DPT=9100 SEQ=3567513200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54245220000000001030307) 
Dec 02 09:29:10 np0005541914.localdomain sudo[189700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:29:10 np0005541914.localdomain sudo[189700]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:29:10 np0005541914.localdomain sudo[189700]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16953 DF PROTO=TCP SPT=55702 DPT=9105 SEQ=2484044217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54251620000000001030307) 
Dec 02 09:29:15 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 02 09:29:15 np0005541914.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 02 09:29:15 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Consumed 14.896s CPU time.
Dec 02 09:29:15 np0005541914.localdomain systemd[1]: run-re5fb30a729d24dcba4d6ddc90d4aef33.service: Deactivated successfully.
Dec 02 09:29:15 np0005541914.localdomain systemd[1]: run-r80df02c78fb243f78c431720685a7ef1.service: Deactivated successfully.
Dec 02 09:29:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60589 DF PROTO=TCP SPT=46662 DPT=9102 SEQ=2111838520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5425CAF0000000001030307) 
Dec 02 09:29:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60591 DF PROTO=TCP SPT=46662 DPT=9102 SEQ=2111838520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54268A20000000001030307) 
Dec 02 09:29:19 np0005541914.localdomain sudo[192007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-flojarrytjatzwsfiyqiwnvzmrrlcqlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667758.6866095-992-171599078073052/AnsiballZ_systemd.py
Dec 02 09:29:19 np0005541914.localdomain sudo[192007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:19 np0005541914.localdomain python3.9[192009]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 02 09:29:19 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:29:19 np0005541914.localdomain systemd-rc-local-generator[192033]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:29:19 np0005541914.localdomain systemd-sysv-generator[192039]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:29:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:29:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:19 np0005541914.localdomain sudo[192007]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:20 np0005541914.localdomain sudo[192156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-flbehkobgsvjaixuzddjobncxypbbjpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667760.0255013-992-266040695977096/AnsiballZ_systemd.py
Dec 02 09:29:20 np0005541914.localdomain sudo[192156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:20 np0005541914.localdomain python3.9[192158]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 02 09:29:20 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:29:20 np0005541914.localdomain systemd-rc-local-generator[192186]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:29:20 np0005541914.localdomain systemd-sysv-generator[192190]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:29:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:29:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:21 np0005541914.localdomain sudo[192156]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:21 np0005541914.localdomain sudo[192304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tzjkodewbdcdybhindekbxftcavefntn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667761.1294832-992-259891751468007/AnsiballZ_systemd.py
Dec 02 09:29:21 np0005541914.localdomain sudo[192304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:21 np0005541914.localdomain python3.9[192306]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 02 09:29:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1765 DF PROTO=TCP SPT=34954 DPT=9101 SEQ=1749529197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54273230000000001030307) 
Dec 02 09:29:21 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:29:21 np0005541914.localdomain systemd-rc-local-generator[192331]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:29:21 np0005541914.localdomain systemd-sysv-generator[192336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:29:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:29:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:22 np0005541914.localdomain sudo[192304]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:22 np0005541914.localdomain sudo[192453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ugxplyurxfewcnejtbyzuahtducgqnjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667762.2029507-992-266071433126922/AnsiballZ_systemd.py
Dec 02 09:29:22 np0005541914.localdomain sudo[192453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:22 np0005541914.localdomain python3.9[192455]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 02 09:29:22 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:29:22 np0005541914.localdomain systemd-rc-local-generator[192481]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:29:22 np0005541914.localdomain systemd-sysv-generator[192485]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:29:22 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:29:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:23 np0005541914.localdomain sudo[192453]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:25 np0005541914.localdomain sudo[192602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ihncbkralpcbakicbarywulzbxrbflwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667764.7406495-1079-228209954920660/AnsiballZ_systemd.py
Dec 02 09:29:25 np0005541914.localdomain sudo[192602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:25 np0005541914.localdomain python3.9[192604]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:25 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:29:25 np0005541914.localdomain systemd-rc-local-generator[192630]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:29:25 np0005541914.localdomain systemd-sysv-generator[192635]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:29:25 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:25 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:29:25 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:25 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:25 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:25 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:25 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:25 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:25 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:25 np0005541914.localdomain sudo[192602]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:26 np0005541914.localdomain sudo[192750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ifpwvrjcbwetpgnfskkrxqwmemptbiqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667765.8522754-1079-246256940170379/AnsiballZ_systemd.py
Dec 02 09:29:26 np0005541914.localdomain sudo[192750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:26 np0005541914.localdomain python3.9[192752]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:26 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:29:26 np0005541914.localdomain systemd-rc-local-generator[192783]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:29:26 np0005541914.localdomain systemd-sysv-generator[192786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:29:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:29:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:26 np0005541914.localdomain sudo[192750]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37702 DF PROTO=TCP SPT=38500 DPT=9101 SEQ=3609384108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54288220000000001030307) 
Dec 02 09:29:27 np0005541914.localdomain sudo[192899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ixdwwyefuoynvawdrwshimwkonwdsisp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667766.9359205-1079-107279241103374/AnsiballZ_systemd.py
Dec 02 09:29:27 np0005541914.localdomain sudo[192899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:27 np0005541914.localdomain python3.9[192901]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:27 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:29:27 np0005541914.localdomain systemd-rc-local-generator[192928]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:29:27 np0005541914.localdomain systemd-sysv-generator[192935]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:29:27 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:27 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:27 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:27 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:29:27 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:27 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:27 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:27 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:27 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:27 np0005541914.localdomain sudo[192899]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:28 np0005541914.localdomain sudo[193048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rdsdfmvgldnhhfpdjfgsjkzeryzfwxri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667768.0676143-1079-173629908604587/AnsiballZ_systemd.py
Dec 02 09:29:28 np0005541914.localdomain sudo[193048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:28 np0005541914.localdomain python3.9[193050]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:28 np0005541914.localdomain sudo[193048]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:29 np0005541914.localdomain sudo[193161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yuyfamvoylzvwydyujwkbmvayawckpqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667768.8206217-1079-255004820805099/AnsiballZ_systemd.py
Dec 02 09:29:29 np0005541914.localdomain sudo[193161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:29 np0005541914.localdomain python3.9[193163]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:29 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:29:29 np0005541914.localdomain systemd-rc-local-generator[193195]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:29:29 np0005541914.localdomain systemd-sysv-generator[193198]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:29:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:29:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:29 np0005541914.localdomain sudo[193161]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60593 DF PROTO=TCP SPT=46662 DPT=9102 SEQ=2111838520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54299220000000001030307) 
Dec 02 09:29:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:29:32 np0005541914.localdomain systemd[1]: tmp-crun.7T7tzh.mount: Deactivated successfully.
Dec 02 09:29:32 np0005541914.localdomain podman[193220]: 2025-12-02 09:29:32.110473962 +0000 UTC m=+0.103552196 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller)
Dec 02 09:29:32 np0005541914.localdomain podman[193220]: 2025-12-02 09:29:32.199879953 +0000 UTC m=+0.192958237 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 02 09:29:32 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:29:32 np0005541914.localdomain sudo[193335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dmzbykddropdruzmxvpjtvvyuqijbmnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667772.0536652-1187-231576275203906/AnsiballZ_systemd.py
Dec 02 09:29:32 np0005541914.localdomain sudo[193335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:32 np0005541914.localdomain python3.9[193337]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 02 09:29:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=98 DF PROTO=TCP SPT=52688 DPT=9882 SEQ=1422130130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542A1810000000001030307) 
Dec 02 09:29:33 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:29:33 np0005541914.localdomain systemd-sysv-generator[193370]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:29:33 np0005541914.localdomain systemd-rc-local-generator[193365]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:29:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:29:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:29:34 np0005541914.localdomain sudo[193335]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:34 np0005541914.localdomain sudo[193484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oggrmssqpuqwpoczlarzcilshrlmofuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667774.2695038-1210-138851245686003/AnsiballZ_systemd.py
Dec 02 09:29:34 np0005541914.localdomain sudo[193484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=99 DF PROTO=TCP SPT=52688 DPT=9882 SEQ=1422130130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542A5A20000000001030307) 
Dec 02 09:29:34 np0005541914.localdomain python3.9[193486]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:35 np0005541914.localdomain sudo[193484]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:36 np0005541914.localdomain sudo[193597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ngrnkwohaipqeizjoastrewqsfdmlnfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667776.0536814-1210-146578961625916/AnsiballZ_systemd.py
Dec 02 09:29:36 np0005541914.localdomain sudo[193597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:36 np0005541914.localdomain python3.9[193599]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:29:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=100 DF PROTO=TCP SPT=52688 DPT=9882 SEQ=1422130130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542ADA20000000001030307) 
Dec 02 09:29:36 np0005541914.localdomain sudo[193597]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:36 np0005541914.localdomain podman[193601]: 2025-12-02 09:29:36.759856728 +0000 UTC m=+0.074482812 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 09:29:36 np0005541914.localdomain podman[193601]: 2025-12-02 09:29:36.798129535 +0000 UTC m=+0.112755679 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 02 09:29:36 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:29:37 np0005541914.localdomain sudo[193728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-riepytygcfjhshhnwbcvrboenqfqhomj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667776.8821247-1210-65377948944197/AnsiballZ_systemd.py
Dec 02 09:29:37 np0005541914.localdomain sudo[193728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:37 np0005541914.localdomain python3.9[193730]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:37 np0005541914.localdomain sudo[193728]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:37 np0005541914.localdomain sudo[193841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jqxkqxosbzehjcfbiotqsmrzttvwnoml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667777.6982932-1210-58839513954853/AnsiballZ_systemd.py
Dec 02 09:29:37 np0005541914.localdomain sudo[193841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:38 np0005541914.localdomain python3.9[193843]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:39 np0005541914.localdomain sudo[193841]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:39 np0005541914.localdomain sudo[193954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwamuivxyvussphykxzfxcegoxpjmfbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667779.5076652-1210-167000327449104/AnsiballZ_systemd.py
Dec 02 09:29:39 np0005541914.localdomain sudo[193954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:40 np0005541914.localdomain python3.9[193956]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:40 np0005541914.localdomain sudo[193954]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1952 DF PROTO=TCP SPT=56382 DPT=9100 SEQ=952668016 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542BB220000000001030307) 
Dec 02 09:29:40 np0005541914.localdomain sudo[194067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eracosougrubnfvrcpytyojqpmkdppie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667780.2723656-1210-73442409752062/AnsiballZ_systemd.py
Dec 02 09:29:40 np0005541914.localdomain sudo[194067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:40 np0005541914.localdomain python3.9[194069]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:40 np0005541914.localdomain sudo[194067]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:41 np0005541914.localdomain sudo[194180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ncxvnlpnxmbiqsjwjacjudkkbtlnnlgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667781.59182-1210-156354724755735/AnsiballZ_systemd.py
Dec 02 09:29:41 np0005541914.localdomain sudo[194180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:42 np0005541914.localdomain python3.9[194182]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3370 DF PROTO=TCP SPT=54862 DPT=9105 SEQ=3158011964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542C6A30000000001030307) 
Dec 02 09:29:43 np0005541914.localdomain sudo[194180]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:43 np0005541914.localdomain sudo[194293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rdlrfohlpsdguefdnytpwfcezydvvkjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667783.5743675-1210-301193967361/AnsiballZ_systemd.py
Dec 02 09:29:43 np0005541914.localdomain sudo[194293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:44 np0005541914.localdomain python3.9[194295]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:44 np0005541914.localdomain sudo[194293]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:44 np0005541914.localdomain sudo[194406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ycxlccwuwolxolexgnzavgwzppccijhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667784.3742235-1210-260091175461507/AnsiballZ_systemd.py
Dec 02 09:29:44 np0005541914.localdomain sudo[194406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:44 np0005541914.localdomain python3.9[194408]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57407 DF PROTO=TCP SPT=43958 DPT=9102 SEQ=3185712321 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542D1DE0000000001030307) 
Dec 02 09:29:46 np0005541914.localdomain sudo[194406]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:46 np0005541914.localdomain sudo[194519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hmfdnypiiuxhvxnjjmwpwvpgiigapajb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667786.1973019-1210-172905347743752/AnsiballZ_systemd.py
Dec 02 09:29:46 np0005541914.localdomain sudo[194519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:46 np0005541914.localdomain python3.9[194521]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:46 np0005541914.localdomain sudo[194519]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:47 np0005541914.localdomain sudo[194632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nvshsxmkqrzcxgzappasgmfhahgrruwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667787.0064225-1210-72978632222576/AnsiballZ_systemd.py
Dec 02 09:29:47 np0005541914.localdomain sudo[194632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:47 np0005541914.localdomain python3.9[194634]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:47 np0005541914.localdomain sudo[194632]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:48 np0005541914.localdomain sudo[194745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-layydntkgblhfadahmsxqxpnuuwybvfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667787.7632666-1210-221239351887961/AnsiballZ_systemd.py
Dec 02 09:29:48 np0005541914.localdomain sudo[194745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:48 np0005541914.localdomain python3.9[194747]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:48 np0005541914.localdomain sudo[194745]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:48 np0005541914.localdomain sudo[194858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-evdmfepehqkvfckeslwhmwgwrpgqnrht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667788.6170084-1210-148054173215450/AnsiballZ_systemd.py
Dec 02 09:29:48 np0005541914.localdomain sudo[194858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=102 DF PROTO=TCP SPT=52688 DPT=9882 SEQ=1422130130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542DD220000000001030307) 
Dec 02 09:29:49 np0005541914.localdomain python3.9[194860]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:49 np0005541914.localdomain sudo[194858]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:49 np0005541914.localdomain sudo[194971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nryqierdpofcyewvamjgueqowragxdxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667789.364373-1210-275687427225244/AnsiballZ_systemd.py
Dec 02 09:29:49 np0005541914.localdomain sudo[194971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:49 np0005541914.localdomain python3.9[194973]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 02 09:29:50 np0005541914.localdomain sudo[194971]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:51 np0005541914.localdomain sudo[195084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hyljrblpgujauzhgtdrswnqhdldbkrvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667791.5237699-1517-39177289412597/AnsiballZ_file.py
Dec 02 09:29:51 np0005541914.localdomain sudo[195084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37704 DF PROTO=TCP SPT=38500 DPT=9101 SEQ=3609384108 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542E9220000000001030307) 
Dec 02 09:29:51 np0005541914.localdomain python3.9[195086]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:29:52 np0005541914.localdomain sudo[195084]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:52 np0005541914.localdomain sudo[195194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wyehbsxgrvoyfizfzvceaslyffmovqoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667792.1559463-1517-254808103616127/AnsiballZ_file.py
Dec 02 09:29:52 np0005541914.localdomain sudo[195194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:53 np0005541914.localdomain python3.9[195196]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:29:53 np0005541914.localdomain sudo[195194]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:53 np0005541914.localdomain sudo[195304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tthypmzqepjouympclyjofqxvwojazxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667793.310438-1517-193849404454421/AnsiballZ_file.py
Dec 02 09:29:53 np0005541914.localdomain sudo[195304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:53 np0005541914.localdomain python3.9[195306]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:29:53 np0005541914.localdomain sudo[195304]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:54 np0005541914.localdomain sudo[195414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxnkzcjnfcyahpsxltcxscztydxsbqtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667793.8973727-1517-67735016943817/AnsiballZ_file.py
Dec 02 09:29:54 np0005541914.localdomain sudo[195414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:54 np0005541914.localdomain python3.9[195416]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:29:54 np0005541914.localdomain sudo[195414]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:55 np0005541914.localdomain sudo[195524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rabtplefkotcgamkepmqfckkmmdarxcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667794.8755922-1517-105150971189252/AnsiballZ_file.py
Dec 02 09:29:55 np0005541914.localdomain sudo[195524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:55 np0005541914.localdomain python3.9[195526]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:29:55 np0005541914.localdomain sudo[195524]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:55 np0005541914.localdomain sudo[195634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kqfpjpnpcyggtxohmzvdkfiinwqdzuqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667795.517447-1517-70144896168477/AnsiballZ_file.py
Dec 02 09:29:55 np0005541914.localdomain sudo[195634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:56 np0005541914.localdomain python3.9[195636]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:29:56 np0005541914.localdomain sudo[195634]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:56 np0005541914.localdomain sudo[195744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uyznriddosvhmnxarhjmxumfpkdzosro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667796.301507-1646-189306395908923/AnsiballZ_stat.py
Dec 02 09:29:56 np0005541914.localdomain sudo[195744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:56 np0005541914.localdomain python3.9[195746]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:29:56 np0005541914.localdomain sudo[195744]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26777 DF PROTO=TCP SPT=35554 DPT=9101 SEQ=1272628940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD542FD620000000001030307) 
Dec 02 09:29:57 np0005541914.localdomain sudo[195834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhxlrjwoqyamakjwtwcpluxbdniwynwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667796.301507-1646-189306395908923/AnsiballZ_copy.py
Dec 02 09:29:57 np0005541914.localdomain sudo[195834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:57 np0005541914.localdomain python3.9[195836]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667796.301507-1646-189306395908923/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:29:57 np0005541914.localdomain sudo[195834]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:57 np0005541914.localdomain sudo[195944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gvsjpyajeffnhzapartabkmscjhjbrjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667797.7130475-1646-180303503220734/AnsiballZ_stat.py
Dec 02 09:29:57 np0005541914.localdomain sudo[195944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:58 np0005541914.localdomain python3.9[195946]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:29:58 np0005541914.localdomain sudo[195944]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:58 np0005541914.localdomain sudo[196034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mmtcpzleroubkbslyilabefdsaiqogod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667797.7130475-1646-180303503220734/AnsiballZ_copy.py
Dec 02 09:29:58 np0005541914.localdomain sudo[196034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:58 np0005541914.localdomain python3.9[196036]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667797.7130475-1646-180303503220734/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:29:58 np0005541914.localdomain sudo[196034]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:59 np0005541914.localdomain sudo[196144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hjnrkbhgagnkcxiggsacbxmkrktjyojl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667798.9005985-1646-177896051673428/AnsiballZ_stat.py
Dec 02 09:29:59 np0005541914.localdomain sudo[196144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:59 np0005541914.localdomain python3.9[196146]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:29:59 np0005541914.localdomain sudo[196144]: pam_unix(sudo:session): session closed for user root
Dec 02 09:29:59 np0005541914.localdomain sudo[196234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qairscdfumugpuaxobylqqcfkyzwkaxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667798.9005985-1646-177896051673428/AnsiballZ_copy.py
Dec 02 09:29:59 np0005541914.localdomain sudo[196234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:29:59 np0005541914.localdomain python3.9[196236]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667798.9005985-1646-177896051673428/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:29:59 np0005541914.localdomain sudo[196234]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:00 np0005541914.localdomain sudo[196344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hwpoetuebdgrfhjmtezoldmhgenshouo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667800.0311012-1646-91566603159161/AnsiballZ_stat.py
Dec 02 09:30:00 np0005541914.localdomain sudo[196344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:00 np0005541914.localdomain python3.9[196346]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:00 np0005541914.localdomain sudo[196344]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:00 np0005541914.localdomain sudo[196434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hqisryxsfpppfoprxycueyapmvqbxbzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667800.0311012-1646-91566603159161/AnsiballZ_copy.py
Dec 02 09:30:00 np0005541914.localdomain sudo[196434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:01 np0005541914.localdomain python3.9[196436]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667800.0311012-1646-91566603159161/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:01 np0005541914.localdomain sudo[196434]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57411 DF PROTO=TCP SPT=43958 DPT=9102 SEQ=3185712321 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5430D220000000001030307) 
Dec 02 09:30:01 np0005541914.localdomain sudo[196544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gqbpfhqpnoiqreuajcmaqvimrhwcmlas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667801.2365613-1646-21257265897604/AnsiballZ_stat.py
Dec 02 09:30:01 np0005541914.localdomain sudo[196544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:01 np0005541914.localdomain python3.9[196546]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:01 np0005541914.localdomain sudo[196544]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:02 np0005541914.localdomain sudo[196634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tbdpmoxbpoehoxbalnlqxkzoniltuseu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667801.2365613-1646-21257265897604/AnsiballZ_copy.py
Dec 02 09:30:02 np0005541914.localdomain sudo[196634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:02 np0005541914.localdomain python3.9[196636]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667801.2365613-1646-21257265897604/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:02 np0005541914.localdomain sudo[196634]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:02 np0005541914.localdomain sudo[196744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xnjsetzljiryflbzwkjpsyrxwchyappc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667802.4253078-1646-76431641510727/AnsiballZ_stat.py
Dec 02 09:30:02 np0005541914.localdomain sudo[196744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:30:02 np0005541914.localdomain podman[196747]: 2025-12-02 09:30:02.832124724 +0000 UTC m=+0.087484713 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:30:02 np0005541914.localdomain podman[196747]: 2025-12-02 09:30:02.87265808 +0000 UTC m=+0.128018069 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true)
Dec 02 09:30:02 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:30:02 np0005541914.localdomain python3.9[196746]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:02 np0005541914.localdomain sudo[196744]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:30:03.137 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:30:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:30:03.139 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:30:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:30:03.139 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:30:03 np0005541914.localdomain sudo[196859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vbmwbknwjpqlgqceuyfkprjclqsvzkvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667802.4253078-1646-76431641510727/AnsiballZ_copy.py
Dec 02 09:30:03 np0005541914.localdomain sudo[196859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:03 np0005541914.localdomain python3.9[196861]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667802.4253078-1646-76431641510727/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:03 np0005541914.localdomain sudo[196859]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12897 DF PROTO=TCP SPT=45686 DPT=9882 SEQ=3776684803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54316B10000000001030307) 
Dec 02 09:30:04 np0005541914.localdomain sudo[196969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uumkvnbpwvsxgfiwbzuozdibvwdgntdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667803.5914452-1646-38514141710391/AnsiballZ_stat.py
Dec 02 09:30:04 np0005541914.localdomain sudo[196969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:04 np0005541914.localdomain python3.9[196971]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:04 np0005541914.localdomain sudo[196969]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12898 DF PROTO=TCP SPT=45686 DPT=9882 SEQ=3776684803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5431AA30000000001030307) 
Dec 02 09:30:04 np0005541914.localdomain sudo[197057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btrpdrfyrlwhamhzgejaznykioyxrctl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667803.5914452-1646-38514141710391/AnsiballZ_copy.py
Dec 02 09:30:04 np0005541914.localdomain sudo[197057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:04 np0005541914.localdomain python3.9[197059]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667803.5914452-1646-38514141710391/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:04 np0005541914.localdomain sudo[197057]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:05 np0005541914.localdomain sudo[197167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewqfawcusfxqqpnqdlsfrfxqomuhpklh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667805.0074477-1646-116104007768948/AnsiballZ_stat.py
Dec 02 09:30:05 np0005541914.localdomain sudo[197167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:05 np0005541914.localdomain python3.9[197169]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:05 np0005541914.localdomain sudo[197167]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:06 np0005541914.localdomain sudo[197257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-csemekawkyshoolqsmyfheyraigsxspc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667805.0074477-1646-116104007768948/AnsiballZ_copy.py
Dec 02 09:30:06 np0005541914.localdomain sudo[197257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:06 np0005541914.localdomain python3.9[197259]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764667805.0074477-1646-116104007768948/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12899 DF PROTO=TCP SPT=45686 DPT=9882 SEQ=3776684803 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54322A30000000001030307) 
Dec 02 09:30:06 np0005541914.localdomain sudo[197257]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:30:07 np0005541914.localdomain podman[197277]: 2025-12-02 09:30:07.087541011 +0000 UTC m=+0.087807192 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:30:07 np0005541914.localdomain podman[197277]: 2025-12-02 09:30:07.098768246 +0000 UTC m=+0.099034427 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 09:30:07 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
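The health_status / exec_died pair above comes from the transient "podman healthcheck run" unit that systemd starts for the ovn_metadata_agent container. A minimal sketch of running the same check by hand, using the container ID from the log (standard podman subcommands; output format is a sketch, not taken from this log):

    podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1
    podman inspect --format '{{json .State}}' ovn_metadata_agent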
Dec 02 09:30:07 np0005541914.localdomain sudo[197385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sgimnplxdbhzdmmasdojsjcxvxypnasz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667807.0444129-1988-14731660643038/AnsiballZ_file.py
Dec 02 09:30:07 np0005541914.localdomain sudo[197385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:07 np0005541914.localdomain python3.9[197387]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:07 np0005541914.localdomain sudo[197385]: pam_unix(sudo:session): session closed for user root
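The ansible.builtin.file task above removes /etc/libvirt/passwd.db with state=absent (typically libvirt's SASL credential database). A one-line shell equivalent, shown only as a sketch:

    rm -f /etc/libvirt/passwd.db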
Dec 02 09:30:07 np0005541914.localdomain sudo[197495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmniemgutwxoveouzblgdnhcwiyptvbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667807.7154677-2011-206552576290790/AnsiballZ_file.py
Dec 02 09:30:07 np0005541914.localdomain sudo[197495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:08 np0005541914.localdomain python3.9[197497]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:08 np0005541914.localdomain sudo[197495]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:08 np0005541914.localdomain sudo[197605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdhxygsyecnmnbauluaukhslmmvpvico ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667808.288326-2011-44185232200005/AnsiballZ_file.py
Dec 02 09:30:08 np0005541914.localdomain sudo[197605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:08 np0005541914.localdomain python3.9[197607]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:08 np0005541914.localdomain sudo[197605]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:09 np0005541914.localdomain sudo[197715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aucnxmijyqkooqlabtkymrsuypsjthmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667808.823955-2011-82656686214838/AnsiballZ_file.py
Dec 02 09:30:09 np0005541914.localdomain sudo[197715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:09 np0005541914.localdomain python3.9[197717]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:09 np0005541914.localdomain sudo[197715]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:09 np0005541914.localdomain sudo[197825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zdwsfjzmplruundjxjkotzjfdgbdswcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667809.4029121-2011-236871771299393/AnsiballZ_file.py
Dec 02 09:30:09 np0005541914.localdomain sudo[197825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:09 np0005541914.localdomain python3.9[197827]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:09 np0005541914.localdomain sudo[197825]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:10 np0005541914.localdomain sudo[197899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:30:10 np0005541914.localdomain sudo[197899]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:30:10 np0005541914.localdomain sudo[197899]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:10 np0005541914.localdomain sudo[197969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxnyqucbgungippijpdvzgfnqflqvtjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667809.9696763-2011-185491577456918/AnsiballZ_file.py
Dec 02 09:30:10 np0005541914.localdomain sudo[197969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:10 np0005541914.localdomain sudo[197941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 09:30:10 np0005541914.localdomain sudo[197941]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:30:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36923 DF PROTO=TCP SPT=41554 DPT=9100 SEQ=1344575929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54331220000000001030307) 
Dec 02 09:30:10 np0005541914.localdomain python3.9[197972]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:10 np0005541914.localdomain sudo[197969]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:10 np0005541914.localdomain sudo[197941]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:10 np0005541914.localdomain sshd[198065]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:30:10 np0005541914.localdomain sudo[198069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:30:10 np0005541914.localdomain sudo[198069]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:30:10 np0005541914.localdomain sudo[198069]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:10 np0005541914.localdomain sudo[198127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xklkrujlaqcghwqnglofymcdzucylutj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667810.5452867-2011-91400202326886/AnsiballZ_file.py
Dec 02 09:30:10 np0005541914.localdomain sudo[198127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:10 np0005541914.localdomain sudo[198118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:30:10 np0005541914.localdomain sudo[198118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:30:10 np0005541914.localdomain python3.9[198140]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:10 np0005541914.localdomain sudo[198127]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:11 np0005541914.localdomain sshd[198065]: Received disconnect from 34.78.29.97 port 43576:11: Bye Bye [preauth]
Dec 02 09:30:11 np0005541914.localdomain sshd[198065]: Disconnected from authenticating user root 34.78.29.97 port 43576 [preauth]
Dec 02 09:30:11 np0005541914.localdomain sudo[198272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-psyktjmdvxlerrbkhfqoqkmgqksavnde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667811.1101625-2011-59875796782558/AnsiballZ_file.py
Dec 02 09:30:11 np0005541914.localdomain sudo[198272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:11 np0005541914.localdomain sudo[198118]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:11 np0005541914.localdomain python3.9[198281]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:11 np0005541914.localdomain sudo[198272]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:11 np0005541914.localdomain sudo[198391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rtgdtejkxjgghssctjckoqqmwpkknvvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667811.6678927-2011-265540236834503/AnsiballZ_file.py
Dec 02 09:30:11 np0005541914.localdomain sudo[198391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:12 np0005541914.localdomain sudo[198394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:30:12 np0005541914.localdomain sudo[198394]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:30:12 np0005541914.localdomain sudo[198394]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:12 np0005541914.localdomain python3.9[198393]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:12 np0005541914.localdomain sudo[198391]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:12 np0005541914.localdomain sudo[198519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-robsayxravnnubffmziebwbvyklqdcyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667812.250056-2011-13237546302576/AnsiballZ_file.py
Dec 02 09:30:12 np0005541914.localdomain sudo[198519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:12 np0005541914.localdomain python3.9[198521]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:12 np0005541914.localdomain sudo[198519]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:13 np0005541914.localdomain sudo[198629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-celycabhgkjzrybfsmgmurybwzfvmfam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667812.8378217-2011-220712406343640/AnsiballZ_file.py
Dec 02 09:30:13 np0005541914.localdomain sudo[198629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9798 DF PROTO=TCP SPT=37900 DPT=9105 SEQ=3808598112 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5433BE20000000001030307) 
Dec 02 09:30:13 np0005541914.localdomain python3.9[198631]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:13 np0005541914.localdomain sudo[198629]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:13 np0005541914.localdomain sudo[198739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cfggxxfjqqytxcpdsadjcqneuykjqejm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667813.4197958-2011-54054984051218/AnsiballZ_file.py
Dec 02 09:30:13 np0005541914.localdomain sudo[198739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:13 np0005541914.localdomain python3.9[198741]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:13 np0005541914.localdomain sudo[198739]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:14 np0005541914.localdomain sudo[198849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jhupfzrdhyzllzhrupogywkigjzrezkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667813.9926589-2011-21743017270630/AnsiballZ_file.py
Dec 02 09:30:14 np0005541914.localdomain sudo[198849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:14 np0005541914.localdomain python3.9[198851]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:14 np0005541914.localdomain sudo[198849]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:15 np0005541914.localdomain sudo[198959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zuacmfkjsmnzsrhvqcplgqiibnkiwfwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667814.7810273-2011-48921276879508/AnsiballZ_file.py
Dec 02 09:30:15 np0005541914.localdomain sudo[198959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:15 np0005541914.localdomain python3.9[198961]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:15 np0005541914.localdomain sudo[198959]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:15 np0005541914.localdomain sudo[199069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmcssmqechmothrbrwrvgmdbswtamgsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667815.4072993-2011-86008373525444/AnsiballZ_file.py
Dec 02 09:30:15 np0005541914.localdomain sudo[199069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:15 np0005541914.localdomain python3.9[199071]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:15 np0005541914.localdomain sudo[199069]: pam_unix(sudo:session): session closed for user root
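The run of ansible-ansible.builtin.file tasks above creates per-socket drop-in directories (root:root, mode 0755) for the libvirt socket units. A minimal shell sketch of the same operation, with the unit list taken from the log entries:

    for sock in virtlogd virtlogd-admin \
                virtnodedevd virtnodedevd-ro virtnodedevd-admin \
                virtproxyd virtproxyd-ro virtproxyd-admin \
                virtqemud virtqemud-ro virtqemud-admin \
                virtsecretd virtsecretd-ro virtsecretd-admin; do
        install -d -m 0755 -o root -g root "/etc/systemd/system/${sock}.socket.d"
    done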
Dec 02 09:30:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64295 DF PROTO=TCP SPT=34866 DPT=9102 SEQ=166964221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543470E0000000001030307) 
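The kernel "DROPPING:" entries are netfilter log hits on br-ex for inbound TCP SYNs from 192.168.122.10 to exporter-style ports (9100-9105, 9882). The rule that generates them is not part of this log; the nft rule below is only an assumed shape of a log-and-drop rule that would produce this prefix:

    # Hypothetical rule: interface and log prefix taken from the log, table/chain
    # names and match conditions illustrative only.
    nft add rule inet filter input iifname "br-ex" tcp flags syn log prefix "DROPPING: " drop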
Dec 02 09:30:18 np0005541914.localdomain sudo[199179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qttcxtkocucqfiehvcgdxzjfgdqawmea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667818.0421448-2308-54985032620514/AnsiballZ_stat.py
Dec 02 09:30:18 np0005541914.localdomain sudo[199179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:18 np0005541914.localdomain python3.9[199181]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:18 np0005541914.localdomain sudo[199179]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:18 np0005541914.localdomain sudo[199267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dmalkubsslykvqyrvcidtigmojrrulfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667818.0421448-2308-54985032620514/AnsiballZ_copy.py
Dec 02 09:30:18 np0005541914.localdomain sudo[199267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:18 np0005541914.localdomain python3.9[199269]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667818.0421448-2308-54985032620514/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:19 np0005541914.localdomain sudo[199267]: pam_unix(sudo:session): session closed for user root
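Each drop-in directory then receives an override.conf rendered from libvirt-socket.unit.j2; the rendered content is not logged (content=NOT_LOGGING_PARAMETER), only its checksum. A hedged way to inspect what actually landed on disk, using standard systemd tooling rather than guessing at the template:

    systemctl cat virtlogd.socket          # shows the unit plus any .socket.d/override.conf
    systemd-delta --type=extended          # lists all drop-in extensions present on the host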
Dec 02 09:30:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64297 DF PROTO=TCP SPT=34866 DPT=9102 SEQ=166964221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54353220000000001030307) 
Dec 02 09:30:19 np0005541914.localdomain sudo[199377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vilvjbjexlktakpdnelgaohhdwvpmwom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667819.128598-2308-115298099314289/AnsiballZ_stat.py
Dec 02 09:30:19 np0005541914.localdomain sudo[199377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:19 np0005541914.localdomain python3.9[199379]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:19 np0005541914.localdomain sudo[199377]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:19 np0005541914.localdomain sudo[199465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hvmfujpqkomgqnqsulkyyfujapqzytqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667819.128598-2308-115298099314289/AnsiballZ_copy.py
Dec 02 09:30:19 np0005541914.localdomain sudo[199465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:20 np0005541914.localdomain python3.9[199467]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667819.128598-2308-115298099314289/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:20 np0005541914.localdomain sudo[199465]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:20 np0005541914.localdomain sudo[199575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fynkunbummyzgjezdyimrcqjpexlfygp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667820.209472-2308-247384621504066/AnsiballZ_stat.py
Dec 02 09:30:20 np0005541914.localdomain sudo[199575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:20 np0005541914.localdomain python3.9[199577]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:20 np0005541914.localdomain sudo[199575]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:20 np0005541914.localdomain sudo[199663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lcnsjelmhcpavkwnzotmcelerirdhnow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667820.209472-2308-247384621504066/AnsiballZ_copy.py
Dec 02 09:30:20 np0005541914.localdomain sudo[199663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:21 np0005541914.localdomain python3.9[199665]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667820.209472-2308-247384621504066/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:21 np0005541914.localdomain sudo[199663]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:21 np0005541914.localdomain sudo[199773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qcrkaqforicwvmenhffmbvvoutliylap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667821.2708068-2308-221379035764601/AnsiballZ_stat.py
Dec 02 09:30:21 np0005541914.localdomain sudo[199773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26779 DF PROTO=TCP SPT=35554 DPT=9101 SEQ=1272628940 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5435D220000000001030307) 
Dec 02 09:30:21 np0005541914.localdomain python3.9[199775]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:21 np0005541914.localdomain sudo[199773]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:22 np0005541914.localdomain sudo[199861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ogjjcvvkynsbnmswtntvwlcwjkrswoej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667821.2708068-2308-221379035764601/AnsiballZ_copy.py
Dec 02 09:30:22 np0005541914.localdomain sudo[199861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:22 np0005541914.localdomain python3.9[199863]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667821.2708068-2308-221379035764601/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:22 np0005541914.localdomain sudo[199861]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:22 np0005541914.localdomain sudo[199971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vpqgazjnvvbgxmsdzsnjofplehgodupv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667822.6093314-2308-109205659591544/AnsiballZ_stat.py
Dec 02 09:30:22 np0005541914.localdomain sudo[199971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:23 np0005541914.localdomain python3.9[199973]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:23 np0005541914.localdomain sudo[199971]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:23 np0005541914.localdomain sudo[200059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-anwmxcymufntxfaivjlzkynljaicswaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667822.6093314-2308-109205659591544/AnsiballZ_copy.py
Dec 02 09:30:23 np0005541914.localdomain sudo[200059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:23 np0005541914.localdomain python3.9[200061]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667822.6093314-2308-109205659591544/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:23 np0005541914.localdomain sudo[200059]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:23 np0005541914.localdomain sudo[200169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-owzpuqdupwooyywkqastlxuurldqezsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667823.7027185-2308-35672199100509/AnsiballZ_stat.py
Dec 02 09:30:23 np0005541914.localdomain sudo[200169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:24 np0005541914.localdomain python3.9[200171]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:24 np0005541914.localdomain sudo[200169]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:24 np0005541914.localdomain sudo[200257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-apndlwnromstzbzhlynejabaqhpfnbkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667823.7027185-2308-35672199100509/AnsiballZ_copy.py
Dec 02 09:30:24 np0005541914.localdomain sudo[200257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:24 np0005541914.localdomain python3.9[200259]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667823.7027185-2308-35672199100509/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:24 np0005541914.localdomain sudo[200257]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:25 np0005541914.localdomain sudo[200367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kjirhegjtnwkeqtbjdchfdhwoukoqehe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667824.8461573-2308-261627378298264/AnsiballZ_stat.py
Dec 02 09:30:25 np0005541914.localdomain sudo[200367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:25 np0005541914.localdomain python3.9[200369]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:25 np0005541914.localdomain sudo[200367]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:25 np0005541914.localdomain sudo[200455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cnsxonibqzvadnkpykbjwrjhbqztcwrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667824.8461573-2308-261627378298264/AnsiballZ_copy.py
Dec 02 09:30:25 np0005541914.localdomain sudo[200455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:25 np0005541914.localdomain python3.9[200457]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667824.8461573-2308-261627378298264/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:26 np0005541914.localdomain sudo[200455]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:26 np0005541914.localdomain sudo[200565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-paickysmllnvxbpeycjdbmcykwcejyyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667826.1349342-2308-39897930903925/AnsiballZ_stat.py
Dec 02 09:30:26 np0005541914.localdomain sudo[200565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:26 np0005541914.localdomain python3.9[200567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:26 np0005541914.localdomain sudo[200565]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:26 np0005541914.localdomain sudo[200653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvnctkhdjojvucmosbyjzljrfmbmtzic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667826.1349342-2308-39897930903925/AnsiballZ_copy.py
Dec 02 09:30:26 np0005541914.localdomain sudo[200653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:27 np0005541914.localdomain python3.9[200655]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667826.1349342-2308-39897930903925/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:27 np0005541914.localdomain sudo[200653]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1797 DF PROTO=TCP SPT=57938 DPT=9101 SEQ=1718877674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54372A20000000001030307) 
Dec 02 09:30:27 np0005541914.localdomain sudo[200763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bkcdqanocpwegteicthhqhibbjhenxor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667827.2282357-2308-68589797032285/AnsiballZ_stat.py
Dec 02 09:30:27 np0005541914.localdomain sudo[200763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:27 np0005541914.localdomain python3.9[200765]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:27 np0005541914.localdomain sudo[200763]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:27 np0005541914.localdomain sudo[200851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-edxauvuygiizfimxcmdvfngbbwtcnvin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667827.2282357-2308-68589797032285/AnsiballZ_copy.py
Dec 02 09:30:27 np0005541914.localdomain sudo[200851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:28 np0005541914.localdomain python3.9[200853]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667827.2282357-2308-68589797032285/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:28 np0005541914.localdomain sudo[200851]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:28 np0005541914.localdomain sudo[200961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aesaaewnaauxvcqtoajxaeprdpjlekvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667828.5740378-2308-4511502889548/AnsiballZ_stat.py
Dec 02 09:30:28 np0005541914.localdomain sudo[200961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:29 np0005541914.localdomain python3.9[200963]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:29 np0005541914.localdomain sudo[200961]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:29 np0005541914.localdomain sudo[201049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yaybxkyqesezoxuxhsvlmtimjalttzbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667828.5740378-2308-4511502889548/AnsiballZ_copy.py
Dec 02 09:30:29 np0005541914.localdomain sudo[201049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:29 np0005541914.localdomain python3.9[201051]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667828.5740378-2308-4511502889548/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:29 np0005541914.localdomain sudo[201049]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:29 np0005541914.localdomain sudo[201159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hdlrmdplxlmgmpbemvavlgcqzgrsetyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667829.6379445-2308-204901045801869/AnsiballZ_stat.py
Dec 02 09:30:29 np0005541914.localdomain sudo[201159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:30 np0005541914.localdomain python3.9[201161]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:30 np0005541914.localdomain sudo[201159]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:30 np0005541914.localdomain sudo[201247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-drjxyxnzavdfravewlhptdueioointqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667829.6379445-2308-204901045801869/AnsiballZ_copy.py
Dec 02 09:30:30 np0005541914.localdomain sudo[201247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:30 np0005541914.localdomain python3.9[201249]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667829.6379445-2308-204901045801869/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:30 np0005541914.localdomain sudo[201247]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:30 np0005541914.localdomain sudo[201357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdvlehwbnyfkyrjlmhntjodheuknjpyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667830.6486254-2308-97054831665936/AnsiballZ_stat.py
Dec 02 09:30:30 np0005541914.localdomain sudo[201357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:31 np0005541914.localdomain python3.9[201359]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:31 np0005541914.localdomain sudo[201357]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64299 DF PROTO=TCP SPT=34866 DPT=9102 SEQ=166964221 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54383220000000001030307) 
Dec 02 09:30:31 np0005541914.localdomain sudo[201445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwsegafytxjhdenrijuvddtodseqynlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667830.6486254-2308-97054831665936/AnsiballZ_copy.py
Dec 02 09:30:31 np0005541914.localdomain sudo[201445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:31 np0005541914.localdomain python3.9[201447]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667830.6486254-2308-97054831665936/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:31 np0005541914.localdomain sudo[201445]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:31 np0005541914.localdomain sudo[201555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sgnskyezgsbdwmtxjruwpzallyqyszaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667831.7083123-2308-87635549735233/AnsiballZ_stat.py
Dec 02 09:30:31 np0005541914.localdomain sudo[201555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:32 np0005541914.localdomain python3.9[201557]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:32 np0005541914.localdomain sudo[201555]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:32 np0005541914.localdomain sudo[201643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uoxtoowadatckipccdtepkyfflukvljx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667831.7083123-2308-87635549735233/AnsiballZ_copy.py
Dec 02 09:30:32 np0005541914.localdomain sudo[201643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:32 np0005541914.localdomain python3.9[201645]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667831.7083123-2308-87635549735233/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:32 np0005541914.localdomain sudo[201643]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:30:33 np0005541914.localdomain sudo[201753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-crlshecszuuvpfvkajiqhvraexmzfgbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667832.7533855-2308-103660482533503/AnsiballZ_stat.py
Dec 02 09:30:33 np0005541914.localdomain sudo[201753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:33 np0005541914.localdomain podman[201754]: 2025-12-02 09:30:33.095545263 +0000 UTC m=+0.089611866 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:30:33 np0005541914.localdomain systemd[1]: tmp-crun.mMsXLL.mount: Deactivated successfully.
Dec 02 09:30:33 np0005541914.localdomain podman[201754]: 2025-12-02 09:30:33.133958024 +0000 UTC m=+0.128024617 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:30:33 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:30:33 np0005541914.localdomain python3.9[201761]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:33 np0005541914.localdomain sudo[201753]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3250 DF PROTO=TCP SPT=51620 DPT=9882 SEQ=1328832982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5438BE00000000001030307) 
Dec 02 09:30:33 np0005541914.localdomain sudo[201867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rcbpgvldcrsxubtwrpzogtmxjaupohjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667832.7533855-2308-103660482533503/AnsiballZ_copy.py
Dec 02 09:30:33 np0005541914.localdomain sudo[201867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:33 np0005541914.localdomain python3.9[201869]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667832.7533855-2308-103660482533503/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:33 np0005541914.localdomain sudo[201867]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:34 np0005541914.localdomain python3.9[201977]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                                            ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
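The ansible.legacy.command task above pipes a recursive, SELinux-labelled listing of /run/libvirt through grep for container_*_t types, i.e. it checks whether anything under /run/libvirt still carries a container domain label. A manual equivalent of the same check (sketch):

    ls -lRZ /run/libvirt | grep -E ':container_\S+_t' || echo "no container-labelled entries"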
Dec 02 09:30:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3251 DF PROTO=TCP SPT=51620 DPT=9882 SEQ=1328832982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5438FE20000000001030307) 
Dec 02 09:30:35 np0005541914.localdomain sudo[202088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fyerqrodtxzjmqloknvkczzqbybzoxga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667834.8213098-2927-137442929413250/AnsiballZ_seboolean.py
Dec 02 09:30:35 np0005541914.localdomain sudo[202088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:35 np0005541914.localdomain python3.9[202090]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 02 09:30:35 np0005541914.localdomain sudo[202088]: pam_unix(sudo:session): session closed for user root
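The ansible.posix.seboolean task sets the os_enable_vtpm SELinux boolean persistently. A shell sketch of the equivalent change and its verification:

    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm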
Dec 02 09:30:36 np0005541914.localdomain sudo[202198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fxavehexrbqcrefgbfkvuchxqsizfftn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667836.1375616-2957-171686314288507/AnsiballZ_systemd.py
Dec 02 09:30:36 np0005541914.localdomain sudo[202198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3252 DF PROTO=TCP SPT=51620 DPT=9882 SEQ=1328832982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54397E20000000001030307) 
Dec 02 09:30:36 np0005541914.localdomain python3.9[202200]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:30:36 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:30:36 np0005541914.localdomain systemd-rc-local-generator[202218]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:30:36 np0005541914.localdomain systemd-sysv-generator[202225]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:30:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:30:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
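The "Failed to parse service type, ignoring: notify-reload" lines indicate the packaged libvirt units declare Type=notify-reload, which this systemd release does not understand; the setting is ignored and the services fall back to the default service type (notify-reload support arrived in systemd 253). A hedged way to confirm both sides on the host:

    systemctl --version | head -n1
    grep '^Type=' /usr/lib/systemd/system/virtqemud.service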
Dec 02 09:30:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:30:37 np0005541914.localdomain podman[202237]: 2025-12-02 09:30:37.229915417 +0000 UTC m=+0.079265103 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 09:30:37 np0005541914.localdomain podman[202237]: 2025-12-02 09:30:37.240998632 +0000 UTC m=+0.090348338 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Dec 02 09:30:37 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:30:38 np0005541914.localdomain systemd[1]: Starting libvirt logging daemon socket...
Dec 02 09:30:38 np0005541914.localdomain systemd[1]: Listening on libvirt logging daemon socket.
Dec 02 09:30:38 np0005541914.localdomain systemd[1]: Starting libvirt logging daemon admin socket...
Dec 02 09:30:38 np0005541914.localdomain systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 02 09:30:38 np0005541914.localdomain systemd[1]: Starting libvirt logging daemon...
Dec 02 09:30:38 np0005541914.localdomain systemd[1]: Started libvirt logging daemon.
Dec 02 09:30:38 np0005541914.localdomain sudo[202198]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:38 np0005541914.localdomain sudo[202365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jhrrlmlpxqlyobkknqukhincfvaeaaja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667838.4374385-2957-126062119068177/AnsiballZ_systemd.py
Dec 02 09:30:38 np0005541914.localdomain sudo[202365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:39 np0005541914.localdomain python3.9[202367]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:30:39 np0005541914.localdomain systemd-rc-local-generator[202394]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:30:39 np0005541914.localdomain systemd-sysv-generator[202398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: Starting libvirt nodedev daemon socket...
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 02 09:30:39 np0005541914.localdomain systemd[1]: Started libvirt nodedev daemon.
Dec 02 09:30:39 np0005541914.localdomain sudo[202365]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:39 np0005541914.localdomain sudo[202540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dqxiufnskxeoijacvzfanpkttfuylasn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667839.586893-2957-37948029592398/AnsiballZ_systemd.py
Dec 02 09:30:39 np0005541914.localdomain sudo[202540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31962 DF PROTO=TCP SPT=32846 DPT=9100 SEQ=3270466836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543A5220000000001030307) 
Dec 02 09:30:40 np0005541914.localdomain python3.9[202542]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:30:40 np0005541914.localdomain systemd-sysv-generator[202574]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:30:40 np0005541914.localdomain systemd-rc-local-generator[202570]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: Starting libvirt proxy daemon socket...
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: Listening on libvirt proxy daemon socket.
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: Started libvirt proxy daemon.
Dec 02 09:30:40 np0005541914.localdomain sudo[202540]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:40 np0005541914.localdomain systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 02 09:30:41 np0005541914.localdomain sudo[202717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hazlocatoixnrajqrxigmsljbhkqbanp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667840.7779737-2957-274028508734853/AnsiballZ_systemd.py
Dec 02 09:30:41 np0005541914.localdomain sudo[202717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:41 np0005541914.localdomain python3.9[202722]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:30:41 np0005541914.localdomain systemd-rc-local-generator[202748]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:30:41 np0005541914.localdomain systemd-sysv-generator[202753]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Listening on libvirt locking daemon socket.
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Starting libvirt QEMU daemon socket...
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 02 09:30:41 np0005541914.localdomain systemd[1]: Started libvirt QEMU daemon.
Dec 02 09:30:41 np0005541914.localdomain sudo[202717]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:42 np0005541914.localdomain setroubleshoot[202581]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a8ab41f9-e430-4d93-8a79-59719374bbe5
Dec 02 09:30:42 np0005541914.localdomain setroubleshoot[202581]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                                 
                                                                 *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                                 
                                                                 If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                                 Then turn on full auditing to get path information about the offending file and generate the error again.
                                                                 Do
                                                                 
                                                                 Turn on full auditing
                                                                 # auditctl -w /etc/shadow -p w
                                                                 Try to recreate AVC. Then execute
                                                                 # ausearch -m avc -ts recent
                                                                 If you see PATH record check ownership/permissions on file, and fix it,
                                                                 otherwise report as a bugzilla.
                                                                 
                                                                 *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                                 
                                                                 If you believe that virtlogd should have the dac_read_search capability by default.
                                                                 Then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do
                                                                 allow this access for now by executing:
                                                                 # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                                 # semodule -X 300 -i my-virtlogd.pp
                                                                 
Dec 02 09:30:42 np0005541914.localdomain setroubleshoot[202581]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l a8ab41f9-e430-4d93-8a79-59719374bbe5
Dec 02 09:30:42 np0005541914.localdomain setroubleshoot[202581]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                                 
                                                                 *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                                 
                                                                 If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                                 Then turn on full auditing to get path information about the offending file and generate the error again.
                                                                 Do
                                                                 
                                                                 Turn on full auditing
                                                                 # auditctl -w /etc/shadow -p w
                                                                 Try to recreate AVC. Then execute
                                                                 # ausearch -m avc -ts recent
                                                                 If you see PATH record check ownership/permissions on file, and fix it,
                                                                 otherwise report as a bugzilla.
                                                                 
                                                                 *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                                 
                                                                 If you believe that virtlogd should have the dac_read_search capability by default.
                                                                 Then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do
                                                                 allow this access for now by executing:
                                                                 # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                                 # semodule -X 300 -i my-virtlogd.pp
                                                                 
Dec 02 09:30:42 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36924 DF PROTO=TCP SPT=41554 DPT=9100 SEQ=1344575929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543AF230000000001030307) 
Dec 02 09:30:43 np0005541914.localdomain sudo[202897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-azipoiycsfpopzvxqnqbkzpsoivdzhid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667842.8580208-2957-11237446718960/AnsiballZ_systemd.py
Dec 02 09:30:43 np0005541914.localdomain sudo[202897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:43 np0005541914.localdomain python3.9[202899]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:30:43 np0005541914.localdomain systemd-sysv-generator[202930]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:30:43 np0005541914.localdomain systemd-rc-local-generator[202926]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: Starting libvirt secret daemon socket...
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: Listening on libvirt secret daemon socket.
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: Starting libvirt secret daemon admin socket...
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 02 09:30:43 np0005541914.localdomain systemd[1]: Started libvirt secret daemon.
Dec 02 09:30:43 np0005541914.localdomain sudo[202897]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:44 np0005541914.localdomain sshd[203032]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:30:44 np0005541914.localdomain sudo[203070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mztwuuoveauhiicnnorkppwsgziglnpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667844.7219408-3067-227043032973698/AnsiballZ_file.py
Dec 02 09:30:44 np0005541914.localdomain sudo[203070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:45 np0005541914.localdomain sshd[203032]: Invalid user ubuntu from 45.148.10.240 port 43102
Dec 02 09:30:45 np0005541914.localdomain python3.9[203072]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:45 np0005541914.localdomain sudo[203070]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:45 np0005541914.localdomain sshd[203032]: Connection closed by invalid user ubuntu 45.148.10.240 port 43102 [preauth]
Dec 02 09:30:45 np0005541914.localdomain sudo[203180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhzpcacjbwaiazkaotjiqoxjqivyoupg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667845.4606-3092-159908657348197/AnsiballZ_find.py
Dec 02 09:30:45 np0005541914.localdomain sudo[203180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:45 np0005541914.localdomain python3.9[203182]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 02 09:30:45 np0005541914.localdomain sudo[203180]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21332 DF PROTO=TCP SPT=45408 DPT=9102 SEQ=2038881755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543BC3E0000000001030307) 
Dec 02 09:30:46 np0005541914.localdomain sudo[203290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nzaacszrbyftscazdqqyobqcctfblmyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667846.2021604-3115-245090415142567/AnsiballZ_command.py
Dec 02 09:30:46 np0005541914.localdomain sudo[203290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:46 np0005541914.localdomain python3.9[203292]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                                            echo ceph
                                                            awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:30:46 np0005541914.localdomain sudo[203290]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:47 np0005541914.localdomain python3.9[203404]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 02 09:30:48 np0005541914.localdomain python3.9[203512]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3254 DF PROTO=TCP SPT=51620 DPT=9882 SEQ=1328832982 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543C7230000000001030307) 
Dec 02 09:30:48 np0005541914.localdomain python3.9[203598]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667848.0622027-3173-278766117252849/.source.xml follow=False _original_basename=secret.xml.j2 checksum=45e14b3898e47796a04e3213d8ff716cad2ef6d4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:49 np0005541914.localdomain sudo[203706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-drexsbhuuulbunjwogqkpivoalnayxxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667849.2799304-3217-248353388211663/AnsiballZ_command.py
Dec 02 09:30:49 np0005541914.localdomain sudo[203706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:49 np0005541914.localdomain python3.9[203708]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine c7c8e171-a193-56fb-95fa-8879fcfa7074
                                                            virsh secret-define --file /tmp/secret.xml
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:30:49 np0005541914.localdomain polkitd[1037]: Registered Authentication Agent for unix-process:203710:1001399 (system bus name :1.2851 [pkttyagent --process 203710 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Dec 02 09:30:49 np0005541914.localdomain polkitd[1037]: Unregistered Authentication Agent for unix-process:203710:1001399 (system bus name :1.2851, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Dec 02 09:30:49 np0005541914.localdomain polkitd[1037]: Registered Authentication Agent for unix-process:203709:1001398 (system bus name :1.2852 [pkttyagent --process 203709 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Dec 02 09:30:49 np0005541914.localdomain polkitd[1037]: Unregistered Authentication Agent for unix-process:203709:1001398 (system bus name :1.2852, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Dec 02 09:30:49 np0005541914.localdomain sudo[203706]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:50 np0005541914.localdomain python3.9[203828]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:51 np0005541914.localdomain sudo[203936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-csttntfuxluhvmfrszylvigngmkkznli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667850.7752585-3265-10149054962798/AnsiballZ_command.py
Dec 02 09:30:51 np0005541914.localdomain sudo[203936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:51 np0005541914.localdomain sudo[203936]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1799 DF PROTO=TCP SPT=57938 DPT=9101 SEQ=1718877674 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543D3220000000001030307) 
Dec 02 09:30:51 np0005541914.localdomain sudo[204047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfqetzlkxpitmifrnhzvfvgckcdystqj ; FSID=c7c8e171-a193-56fb-95fa-8879fcfa7074 KEY=AQCsmS5pAAAAABAA9iv/nZiAlLVhWrPIkulquw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667851.4987006-3289-280872256142331/AnsiballZ_command.py
Dec 02 09:30:51 np0005541914.localdomain sudo[204047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:52 np0005541914.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 02 09:30:52 np0005541914.localdomain systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 02 09:30:52 np0005541914.localdomain polkitd[1037]: Registered Authentication Agent for unix-process:204050:1001640 (system bus name :1.2855 [pkttyagent --process 204050 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Dec 02 09:30:52 np0005541914.localdomain polkitd[1037]: Unregistered Authentication Agent for unix-process:204050:1001640 (system bus name :1.2855, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Dec 02 09:30:52 np0005541914.localdomain sudo[204047]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:54 np0005541914.localdomain sudo[204163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbuzicvofygpqknikjkyaaoghgysaoui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667853.3935723-3313-267935478638612/AnsiballZ_copy.py
Dec 02 09:30:54 np0005541914.localdomain sudo[204163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:54 np0005541914.localdomain python3.9[204165]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:54 np0005541914.localdomain sudo[204163]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:54 np0005541914.localdomain sudo[204273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kqehtlnkyxabufrvbcirmbogatvbkqyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667854.6742232-3338-47073940731427/AnsiballZ_stat.py
Dec 02 09:30:54 np0005541914.localdomain sudo[204273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:55 np0005541914.localdomain python3.9[204275]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:55 np0005541914.localdomain sudo[204273]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:55 np0005541914.localdomain sudo[204361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hiubjrqglqxoczkydpordtubvrfmrwvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667854.6742232-3338-47073940731427/AnsiballZ_copy.py
Dec 02 09:30:55 np0005541914.localdomain sudo[204361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:55 np0005541914.localdomain python3.9[204363]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667854.6742232-3338-47073940731427/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:55 np0005541914.localdomain sudo[204361]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:57 np0005541914.localdomain sudo[204471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nkvtwbeyfitduakgckxxgspccuxhlrea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667856.8036902-3386-219364359347156/AnsiballZ_file.py
Dec 02 09:30:57 np0005541914.localdomain sudo[204471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26533 DF PROTO=TCP SPT=58322 DPT=9101 SEQ=3739041820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543E7E20000000001030307) 
Dec 02 09:30:57 np0005541914.localdomain python3.9[204473]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:57 np0005541914.localdomain sudo[204471]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:57 np0005541914.localdomain sudo[204581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vruwjhiubumzqhncgqyklpxtidhmlkus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667857.5113819-3410-263209897131024/AnsiballZ_stat.py
Dec 02 09:30:57 np0005541914.localdomain sudo[204581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:57 np0005541914.localdomain python3.9[204583]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:58 np0005541914.localdomain sudo[204581]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:58 np0005541914.localdomain sudo[204638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kijsrqjdbgyumauboprcbjmtwwbcyqzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667857.5113819-3410-263209897131024/AnsiballZ_file.py
Dec 02 09:30:58 np0005541914.localdomain sudo[204638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:58 np0005541914.localdomain python3.9[204640]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:58 np0005541914.localdomain sudo[204638]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:58 np0005541914.localdomain sudo[204748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hjwsntrjemxrrrsyqhjqcqdnfsarbmso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667858.6683934-3446-56089925387994/AnsiballZ_stat.py
Dec 02 09:30:58 np0005541914.localdomain sudo[204748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:59 np0005541914.localdomain python3.9[204750]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:30:59 np0005541914.localdomain sudo[204748]: pam_unix(sudo:session): session closed for user root
Dec 02 09:30:59 np0005541914.localdomain sudo[204805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wvzmrcuhbefangjagawfatmchveykkzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667858.6683934-3446-56089925387994/AnsiballZ_file.py
Dec 02 09:30:59 np0005541914.localdomain sudo[204805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:30:59 np0005541914.localdomain python3.9[204807]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.p3ozdzjc recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:30:59 np0005541914.localdomain sudo[204805]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:00 np0005541914.localdomain sudo[204915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fimkctoxgcdnbnzvkqbwpdnsviopjvgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667859.7661436-3482-32905506027528/AnsiballZ_stat.py
Dec 02 09:31:00 np0005541914.localdomain sudo[204915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:00 np0005541914.localdomain python3.9[204917]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:31:00 np0005541914.localdomain sudo[204915]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:00 np0005541914.localdomain sudo[204972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wsovaqxepzkwemckttfjuvllnuwfkjem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667859.7661436-3482-32905506027528/AnsiballZ_file.py
Dec 02 09:31:00 np0005541914.localdomain sudo[204972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:00 np0005541914.localdomain python3.9[204974]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:00 np0005541914.localdomain sudo[204972]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:01 np0005541914.localdomain sudo[205082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yzknmykgzmkqzyfcpsyhmieqwddqxxss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667860.9690793-3521-67193173745602/AnsiballZ_command.py
Dec 02 09:31:01 np0005541914.localdomain sudo[205082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:01 np0005541914.localdomain python3.9[205084]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:31:01 np0005541914.localdomain sudo[205082]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21336 DF PROTO=TCP SPT=45408 DPT=9102 SEQ=2038881755 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD543F9230000000001030307) 
Dec 02 09:31:01 np0005541914.localdomain sudo[205193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-litzvljznoyziegvfsvoefmdrkizjmfq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764667861.6205044-3545-86999538679945/AnsiballZ_edpm_nftables_from_files.py
Dec 02 09:31:01 np0005541914.localdomain sudo[205193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:02 np0005541914.localdomain python3[205195]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 02 09:31:02 np0005541914.localdomain sudo[205193]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:02 np0005541914.localdomain sudo[205303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oklzyqgtegzffbdshayjwkblqjcbitbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667862.4240444-3569-74385196758608/AnsiballZ_stat.py
Dec 02 09:31:02 np0005541914.localdomain sudo[205303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:02 np0005541914.localdomain python3.9[205305]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:31:02 np0005541914.localdomain sudo[205303]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:03 np0005541914.localdomain sudo[205360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-omoqimjwqncxlcstaykmebezxpwmgvrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667862.4240444-3569-74385196758608/AnsiballZ_file.py
Dec 02 09:31:03 np0005541914.localdomain sudo[205360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:31:03.137 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:31:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:31:03.138 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:31:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:31:03.138 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:31:03 np0005541914.localdomain python3.9[205362]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:03 np0005541914.localdomain sudo[205360]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60346 DF PROTO=TCP SPT=49468 DPT=9882 SEQ=3836888894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54401100000000001030307) 
Dec 02 09:31:03 np0005541914.localdomain sudo[205470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udxjdqtcqssoagwbyttxcxbsttxnfxwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667863.570972-3605-237301045182132/AnsiballZ_stat.py
Dec 02 09:31:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:31:03 np0005541914.localdomain sudo[205470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:03 np0005541914.localdomain systemd[1]: tmp-crun.KqJqzf.mount: Deactivated successfully.
Dec 02 09:31:03 np0005541914.localdomain podman[205472]: 2025-12-02 09:31:03.984216539 +0000 UTC m=+0.096163846 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 02 09:31:04 np0005541914.localdomain podman[205472]: 2025-12-02 09:31:04.069263707 +0000 UTC m=+0.181210994 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 02 09:31:04 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:31:04 np0005541914.localdomain python3.9[205473]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:31:04 np0005541914.localdomain sudo[205470]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:04 np0005541914.localdomain sudo[205550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-znaylsrdalclgogpqnbndpiwbnmybihq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667863.570972-3605-237301045182132/AnsiballZ_file.py
Dec 02 09:31:04 np0005541914.localdomain sudo[205550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:04 np0005541914.localdomain python3.9[205552]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:04 np0005541914.localdomain sudo[205550]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60347 DF PROTO=TCP SPT=49468 DPT=9882 SEQ=3836888894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54405220000000001030307) 
Dec 02 09:31:05 np0005541914.localdomain sudo[205660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tyxlbanmmdlldwykhxlfnapltrtvlowh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667864.7866538-3641-265529752850312/AnsiballZ_stat.py
Dec 02 09:31:05 np0005541914.localdomain sudo[205660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:05 np0005541914.localdomain python3.9[205662]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:31:05 np0005541914.localdomain sudo[205660]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:05 np0005541914.localdomain sudo[205717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbmjsahtfawtrbyivbowucssnmqmxscu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667864.7866538-3641-265529752850312/AnsiballZ_file.py
Dec 02 09:31:05 np0005541914.localdomain sudo[205717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:05 np0005541914.localdomain python3.9[205719]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:05 np0005541914.localdomain sudo[205717]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:06 np0005541914.localdomain sudo[205827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yfoltskbihnkzkjsdxgdkvpecajyxtbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667865.8912709-3677-41163986603282/AnsiballZ_stat.py
Dec 02 09:31:06 np0005541914.localdomain sudo[205827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:06 np0005541914.localdomain python3.9[205829]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:31:06 np0005541914.localdomain sudo[205827]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:06 np0005541914.localdomain sudo[205884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gzxhhoxppdwkiaeavptytovinkrmxdow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667865.8912709-3677-41163986603282/AnsiballZ_file.py
Dec 02 09:31:06 np0005541914.localdomain sudo[205884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60348 DF PROTO=TCP SPT=49468 DPT=9882 SEQ=3836888894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5440D230000000001030307) 
Dec 02 09:31:06 np0005541914.localdomain python3.9[205886]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:06 np0005541914.localdomain sudo[205884]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:07 np0005541914.localdomain sudo[205994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uptsofcdonvghjjknsjqhcqbajhoagfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667867.3871915-3713-121318721951660/AnsiballZ_stat.py
Dec 02 09:31:07 np0005541914.localdomain sudo[205994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:31:07 np0005541914.localdomain systemd[1]: tmp-crun.9BbOeb.mount: Deactivated successfully.
Dec 02 09:31:07 np0005541914.localdomain podman[205997]: 2025-12-02 09:31:07.859782233 +0000 UTC m=+0.091906009 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 02 09:31:07 np0005541914.localdomain podman[205997]: 2025-12-02 09:31:07.889953451 +0000 UTC m=+0.122077197 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 02 09:31:07 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:31:07 np0005541914.localdomain python3.9[205996]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:31:08 np0005541914.localdomain sudo[205994]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:08 np0005541914.localdomain sudo[206103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dlxnbgekbqljaeyazajcshdihmaudxgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667867.3871915-3713-121318721951660/AnsiballZ_copy.py
Dec 02 09:31:08 np0005541914.localdomain sudo[206103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:08 np0005541914.localdomain python3.9[206105]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764667867.3871915-3713-121318721951660/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:08 np0005541914.localdomain sudo[206103]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:09 np0005541914.localdomain sudo[206213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-byvwuifzbvwaxyzjpfhxcfzedppsccws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667868.7300649-3758-280633479568066/AnsiballZ_file.py
Dec 02 09:31:09 np0005541914.localdomain sudo[206213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:09 np0005541914.localdomain python3.9[206215]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:09 np0005541914.localdomain sudo[206213]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:09 np0005541914.localdomain sudo[206323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zwsccuizsfvawsssvmypgecebdhhddon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667869.4312258-3782-196143853452057/AnsiballZ_command.py
Dec 02 09:31:09 np0005541914.localdomain sudo[206323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:09 np0005541914.localdomain python3.9[206325]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:31:09 np0005541914.localdomain sudo[206323]: pam_unix(sudo:session): session closed for user root
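
The command task above assembles the rendered EDPM rule fragments and dry-runs them through nft before anything is loaded. A minimal shell sketch of that check, using the file names and order taken from the logged command (-c parses the combined ruleset without applying it):

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft \
      | nft -c -f -    # check only; the exit status reports syntax or reference errors
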
Dec 02 09:31:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38868 DF PROTO=TCP SPT=37588 DPT=9100 SEQ=1764008649 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5441B220000000001030307) 
Dec 02 09:31:10 np0005541914.localdomain sudo[206436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fecwwpzhsriuiouvxyzgiseafvzmfmoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667870.4928536-3806-69466156406334/AnsiballZ_blockinfile.py
Dec 02 09:31:10 np0005541914.localdomain sudo[206436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:11 np0005541914.localdomain python3.9[206438]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:11 np0005541914.localdomain sudo[206436]: pam_unix(sudo:session): session closed for user root
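
Based on the blockinfile parameters logged above (marker "# {mark} ANSIBLE MANAGED BLOCK" with BEGIN/END, validate=nft -c -f %s), /etc/sysconfig/nftables.conf should end up with a marker-delimited block like the following so the EDPM includes load on boot; the trailing check mirrors the module's validate step:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

    nft -c -f /etc/sysconfig/nftables.conf    # sanity-check the edited file before nftables.service consumes it
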
Dec 02 09:31:11 np0005541914.localdomain sudo[206546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-brfegxsgoodctohpdmbdrskrezbnnltl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667871.5267901-3833-176128851296420/AnsiballZ_command.py
Dec 02 09:31:11 np0005541914.localdomain sudo[206546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:11 np0005541914.localdomain python3.9[206548]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:31:12 np0005541914.localdomain sudo[206546]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:12 np0005541914.localdomain sudo[206550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:31:12 np0005541914.localdomain sudo[206550]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:31:12 np0005541914.localdomain sudo[206550]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:12 np0005541914.localdomain sudo[206568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:31:12 np0005541914.localdomain sudo[206568]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:31:12 np0005541914.localdomain sudo[206708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kuyjccsqlnxktwhtqbocmcxzcogwxpxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667872.4453642-3857-72947042984935/AnsiballZ_stat.py
Dec 02 09:31:12 np0005541914.localdomain sudo[206708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:12 np0005541914.localdomain python3.9[206710]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:31:12 np0005541914.localdomain sudo[206708]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:13 np0005541914.localdomain sudo[206568]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46105 DF PROTO=TCP SPT=46988 DPT=9105 SEQ=2120226542 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54426220000000001030307) 
Dec 02 09:31:13 np0005541914.localdomain sudo[206837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjghkxtochwryoypyarekiqwcgxmltjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667873.1744652-3882-211724551323571/AnsiballZ_command.py
Dec 02 09:31:13 np0005541914.localdomain sudo[206837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:13 np0005541914.localdomain python3.9[206839]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:31:13 np0005541914.localdomain sudo[206837]: pam_unix(sudo:session): session closed for user root
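
With /etc/nftables/edpm-rules.nft.changed present, the play then loads the new ruleset live. Pieced together from the two commands logged at 09:31:11 and 09:31:13, the activation order is roughly: define the chains first, then flush, repopulate and re-jump in one pass (a sketch; paths exactly as logged):

    nft -f /etc/nftables/edpm-chains.nft          # ensure the EDPM chains exist
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
      | nft -f -                                  # flush stale rules, load the new ones, refresh the jump rules
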
Dec 02 09:31:13 np0005541914.localdomain sudo[206843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:31:13 np0005541914.localdomain sudo[206843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:31:13 np0005541914.localdomain sudo[206843]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:14 np0005541914.localdomain sudo[206968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-beqcbrtlthweymfgpboqptgutfxhsfdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667873.8417506-3905-231164523438621/AnsiballZ_file.py
Dec 02 09:31:14 np0005541914.localdomain sudo[206968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:14 np0005541914.localdomain python3.9[206970]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:14 np0005541914.localdomain sudo[206968]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:14 np0005541914.localdomain sudo[207078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pelclaenorqhxmkunhqwzadljlgjvhkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667874.5200412-3929-262145964523067/AnsiballZ_stat.py
Dec 02 09:31:14 np0005541914.localdomain sudo[207078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:15 np0005541914.localdomain python3.9[207080]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:31:15 np0005541914.localdomain sudo[207078]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:15 np0005541914.localdomain sudo[207166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thlvwvaehciptvacfdukonztyfwfpsni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667874.5200412-3929-262145964523067/AnsiballZ_copy.py
Dec 02 09:31:15 np0005541914.localdomain sudo[207166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:15 np0005541914.localdomain python3.9[207168]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667874.5200412-3929-262145964523067/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:15 np0005541914.localdomain sudo[207166]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:15 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9803 DF PROTO=TCP SPT=37900 DPT=9105 SEQ=3808598112 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54431230000000001030307) 
Dec 02 09:31:16 np0005541914.localdomain sudo[207276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qoeqcihybcinumpeazknwjbsuqwsinhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667875.9209073-3974-35492129011824/AnsiballZ_stat.py
Dec 02 09:31:16 np0005541914.localdomain sudo[207276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:16 np0005541914.localdomain python3.9[207278]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:31:16 np0005541914.localdomain sudo[207276]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:16 np0005541914.localdomain sudo[207364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-txqifppaennabrddcvatgrfdvrtmjeno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667875.9209073-3974-35492129011824/AnsiballZ_copy.py
Dec 02 09:31:16 np0005541914.localdomain sudo[207364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:16 np0005541914.localdomain python3.9[207366]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667875.9209073-3974-35492129011824/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:16 np0005541914.localdomain sudo[207364]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:17 np0005541914.localdomain sudo[207474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jbdhlsfdzwtbtevpninnmxjwzuwxelrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667877.1514735-4019-221420938547236/AnsiballZ_stat.py
Dec 02 09:31:17 np0005541914.localdomain sudo[207474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:17 np0005541914.localdomain python3.9[207476]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:31:17 np0005541914.localdomain sudo[207474]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:17 np0005541914.localdomain sshd[207549]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:31:17 np0005541914.localdomain sudo[207564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blsbcyxqorlekywuljxofundlxmgcmin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667877.1514735-4019-221420938547236/AnsiballZ_copy.py
Dec 02 09:31:17 np0005541914.localdomain sudo[207564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:18 np0005541914.localdomain python3.9[207566]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667877.1514735-4019-221420938547236/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:18 np0005541914.localdomain sudo[207564]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:18 np0005541914.localdomain sshd[207549]: Invalid user user1 from 34.78.29.97 port 53226
Dec 02 09:31:18 np0005541914.localdomain sshd[207549]: Received disconnect from 34.78.29.97 port 53226:11: Bye Bye [preauth]
Dec 02 09:31:18 np0005541914.localdomain sshd[207549]: Disconnected from invalid user user1 34.78.29.97 port 53226 [preauth]
Dec 02 09:31:18 np0005541914.localdomain sudo[207674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xvdrhcopjokcfqgfsdppqunqslcfltox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667878.3391006-4064-212088518177815/AnsiballZ_systemd.py
Dec 02 09:31:18 np0005541914.localdomain sudo[207674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:18 np0005541914.localdomain python3.9[207676]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:31:18 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:31:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60350 DF PROTO=TCP SPT=49468 DPT=9882 SEQ=3836888894 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5443D220000000001030307) 
Dec 02 09:31:19 np0005541914.localdomain systemd-rc-local-generator[207703]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:31:19 np0005541914.localdomain systemd-sysv-generator[207706]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:31:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:31:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:19 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:19 np0005541914.localdomain systemd[1]: Reached target edpm_libvirt.target.
Dec 02 09:31:19 np0005541914.localdomain sudo[207674]: pam_unix(sudo:session): session closed for user root
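
The ansible.builtin.systemd call logged above (daemon_reload=True, enabled=True, state=restarted) behaves roughly like the following systemctl sequence; this is a sketch of what the module does, not commands taken from the journal:

    systemctl daemon-reload
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target    # the journal then records "Reached target edpm_libvirt.target."
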
Dec 02 09:31:21 np0005541914.localdomain sudo[207823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xzjytnjwkbwxgvhktcwomwucvlmigumu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667880.4841797-4088-133482884079196/AnsiballZ_systemd.py
Dec 02 09:31:21 np0005541914.localdomain sudo[207823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:21 np0005541914.localdomain python3.9[207825]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 02 09:31:21 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:31:21 np0005541914.localdomain systemd-sysv-generator[207853]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:31:21 np0005541914.localdomain systemd-rc-local-generator[207850]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:31:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:31:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:22 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:31:22 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26535 DF PROTO=TCP SPT=58322 DPT=9101 SEQ=3739041820 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54449220000000001030307) 
Dec 02 09:31:22 np0005541914.localdomain systemd-rc-local-generator[207887]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:31:22 np0005541914.localdomain systemd-sysv-generator[207893]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:31:22 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:22 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:22 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:22 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:22 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:31:22 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:22 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:22 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:22 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:22 np0005541914.localdomain sudo[207823]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:23 np0005541914.localdomain sshd[159607]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:31:23 np0005541914.localdomain systemd[1]: session-52.scope: Deactivated successfully.
Dec 02 09:31:23 np0005541914.localdomain systemd[1]: session-52.scope: Consumed 3min 42.853s CPU time.
Dec 02 09:31:23 np0005541914.localdomain systemd-logind[760]: Session 52 logged out. Waiting for processes to exit.
Dec 02 09:31:23 np0005541914.localdomain systemd-logind[760]: Removed session 52.
Dec 02 09:31:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51994 DF PROTO=TCP SPT=42566 DPT=9101 SEQ=4124349749 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5445CE30000000001030307) 
Dec 02 09:31:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46108 DF PROTO=TCP SPT=46988 DPT=9105 SEQ=2120226542 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5445F220000000001030307) 
Dec 02 09:31:29 np0005541914.localdomain sshd[207918]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:31:29 np0005541914.localdomain sshd[207918]: Accepted publickey for zuul from 192.168.122.30 port 60256 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:31:29 np0005541914.localdomain systemd-logind[760]: New session 53 of user zuul.
Dec 02 09:31:29 np0005541914.localdomain systemd[1]: Started Session 53 of User zuul.
Dec 02 09:31:29 np0005541914.localdomain sshd[207918]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:31:30 np0005541914.localdomain python3.9[208029]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:31:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35279 DF PROTO=TCP SPT=44554 DPT=9102 SEQ=2200606831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5446D220000000001030307) 
Dec 02 09:31:31 np0005541914.localdomain python3.9[208141]: ansible-ansible.builtin.service_facts Invoked
Dec 02 09:31:31 np0005541914.localdomain network[208158]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 09:31:31 np0005541914.localdomain network[208159]: 'network-scripts' will be removed from distribution in near future.
Dec 02 09:31:31 np0005541914.localdomain network[208160]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 09:31:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:31:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:31:34 np0005541914.localdomain systemd[1]: tmp-crun.tbHTgM.mount: Deactivated successfully.
Dec 02 09:31:34 np0005541914.localdomain podman[208210]: 2025-12-02 09:31:34.45500385 +0000 UTC m=+0.109136466 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 02 09:31:34 np0005541914.localdomain podman[208210]: 2025-12-02 09:31:34.516893481 +0000 UTC m=+0.171026037 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 02 09:31:34 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:31:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55003 DF PROTO=TCP SPT=42272 DPT=9882 SEQ=1824040291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5447A620000000001030307) 
Dec 02 09:31:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55004 DF PROTO=TCP SPT=42272 DPT=9882 SEQ=1824040291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54482620000000001030307) 
Dec 02 09:31:37 np0005541914.localdomain sudo[208416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wssvjpqxixpqyproiuzbxqguvngsbhqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667897.158081-103-134097552664758/AnsiballZ_setup.py
Dec 02 09:31:37 np0005541914.localdomain sudo[208416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:37 np0005541914.localdomain python3.9[208418]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:31:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:31:38 np0005541914.localdomain podman[208423]: 2025-12-02 09:31:38.064893697 +0000 UTC m=+0.070780004 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:31:38 np0005541914.localdomain sudo[208416]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:38 np0005541914.localdomain podman[208423]: 2025-12-02 09:31:38.100908918 +0000 UTC m=+0.106795185 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 02 09:31:38 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:31:38 np0005541914.localdomain sudo[208497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lpdsgxkgzrwhjtfszukndajugegxpbcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667897.158081-103-134097552664758/AnsiballZ_dnf.py
Dec 02 09:31:38 np0005541914.localdomain sudo[208497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:39 np0005541914.localdomain python3.9[208499]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:31:39 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20895 DF PROTO=TCP SPT=52202 DPT=9100 SEQ=2480210252 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5448F230000000001030307) 
Dec 02 09:31:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39303 DF PROTO=TCP SPT=58708 DPT=9105 SEQ=2019460385 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5449B620000000001030307) 
Dec 02 09:31:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16788 DF PROTO=TCP SPT=54210 DPT=9102 SEQ=1298376918 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544A69E0000000001030307) 
Dec 02 09:31:46 np0005541914.localdomain sudo[208497]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:47 np0005541914.localdomain sudo[208609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iqhieiixyfwmzspdvqaofqluakyavjgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667907.1986713-139-102814428338061/AnsiballZ_stat.py
Dec 02 09:31:47 np0005541914.localdomain sudo[208609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:47 np0005541914.localdomain python3.9[208611]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:31:47 np0005541914.localdomain sudo[208609]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16790 DF PROTO=TCP SPT=54210 DPT=9102 SEQ=1298376918 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544B2A20000000001030307) 
Dec 02 09:31:49 np0005541914.localdomain sudo[208721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tnrrtdceiljpqtypkfntasfeviqoaicf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667907.9867094-164-185938654693091/AnsiballZ_copy.py
Dec 02 09:31:49 np0005541914.localdomain sudo[208721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:49 np0005541914.localdomain python3.9[208723]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:49 np0005541914.localdomain sudo[208721]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:49 np0005541914.localdomain sudo[208831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dqxxnbqjfmgiehqysqskiknyhtspgsvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667909.5461113-188-179659876817180/AnsiballZ_command.py
Dec 02 09:31:49 np0005541914.localdomain sudo[208831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:50 np0005541914.localdomain python3.9[208833]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted"
                                                             _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:31:50 np0005541914.localdomain sudo[208831]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:51 np0005541914.localdomain sudo[208942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ienstgkdnpxuqoklzzojsuicotxkqoii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667910.3382082-211-186486732241983/AnsiballZ_command.py
Dec 02 09:31:51 np0005541914.localdomain sudo[208942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:51 np0005541914.localdomain python3.9[208944]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:31:51 np0005541914.localdomain sudo[208942]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51996 DF PROTO=TCP SPT=42566 DPT=9101 SEQ=4124349749 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544BD220000000001030307) 
Dec 02 09:31:51 np0005541914.localdomain sudo[209053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nqoejqxbqznbdmtvbovaztzmyzsnsrzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667911.729525-236-203547892507851/AnsiballZ_command.py
Dec 02 09:31:51 np0005541914.localdomain sudo[209053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:52 np0005541914.localdomain python3.9[209055]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:31:52 np0005541914.localdomain sudo[209053]: pam_unix(sudo:session): session closed for user root
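
Taken together, the tasks between 09:31:49 and 09:31:52 adopt the iscsid configuration left under /var/lib/config-data/puppet-generated by the previous deployment. A condensed shell equivalent, with paths and restorecon flags from the logged commands (cp -a stands in for the copy module's mode=preserve, remote_src=True behaviour):

    cp -a /var/lib/config-data/puppet-generated/iscsid/etc/iscsi/. /etc/iscsi/
    mv /var/lib/config-data/puppet-generated/iscsid/etc/iscsi \
       /var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted    # mark the source tree as adopted
    restorecon -nvr /etc/iscsi /var/lib/iscsi    # dry run: list labels that would change
    restorecon -rF  /etc/iscsi /var/lib/iscsi    # relabel for real, forcing default contexts
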
Dec 02 09:31:52 np0005541914.localdomain sudo[209164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tecsnhpplxedwdxgwjqdfychszdgixxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667912.4890318-263-51367556381264/AnsiballZ_stat.py
Dec 02 09:31:52 np0005541914.localdomain sudo[209164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:52 np0005541914.localdomain python3.9[209166]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:31:52 np0005541914.localdomain sudo[209164]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:53 np0005541914.localdomain sudo[209276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gzxddhuuoqwtycmzwixlgvafxuakppxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667913.355244-296-199095106867254/AnsiballZ_lineinfile.py
Dec 02 09:31:53 np0005541914.localdomain sudo[209276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:53 np0005541914.localdomain python3.9[209278]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:31:53 np0005541914.localdomain sudo[209276]: pam_unix(sudo:session): session closed for user root
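
The lineinfile task above pins the CHAP digest preference in /etc/iscsi/iscsid.conf (inserted after the commented #node.session.auth.chap.algs default when no matching line exists). Per the logged parameters, a quick check should then show:

    grep '^node.session.auth.chap_algs' /etc/iscsi/iscsid.conf
    # node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5
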
Dec 02 09:31:54 np0005541914.localdomain sudo[209386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-emylnjaijttqigxvidwrowclfqjtauqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667914.3019435-322-281023769979993/AnsiballZ_systemd_service.py
Dec 02 09:31:54 np0005541914.localdomain sudo[209386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:55 np0005541914.localdomain python3.9[209388]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:31:55 np0005541914.localdomain systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 02 09:31:55 np0005541914.localdomain sudo[209386]: pam_unix(sudo:session): session closed for user root
Dec 02 09:31:55 np0005541914.localdomain sudo[209500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xbtzhrshbinsdrqwzzalebwjcysaipka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667915.620945-347-148738153643918/AnsiballZ_systemd_service.py
Dec 02 09:31:55 np0005541914.localdomain sudo[209500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:56 np0005541914.localdomain python3.9[209502]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:31:56 np0005541914.localdomain systemd-rc-local-generator[209525]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:31:56 np0005541914.localdomain systemd-sysv-generator[209529]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: Starting Open-iSCSI...
Dec 02 09:31:56 np0005541914.localdomain iscsid[209544]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 02 09:31:56 np0005541914.localdomain iscsid[209544]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 02 09:31:56 np0005541914.localdomain iscsid[209544]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 02 09:31:56 np0005541914.localdomain iscsid[209544]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 02 09:31:56 np0005541914.localdomain iscsid[209544]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 02 09:31:56 np0005541914.localdomain iscsid[209544]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 02 09:31:56 np0005541914.localdomain iscsid[209544]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: Started Open-iSCSI.
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 02 09:31:56 np0005541914.localdomain systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 02 09:31:56 np0005541914.localdomain sudo[209500]: pam_unix(sudo:session): session closed for user root
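The iscsid warnings above appear because /etc/iscsi/initiatorname.iscsi has not been created yet. Assuming iscsi-initiator-utils provides iscsi-iname (as it does on RHEL 9), the file can be generated with something like:
    # echo "InitiatorName=$(/usr/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi
    # systemctl restart iscsid
The example IQN printed by iscsid (iqn.2001-04.com.redhat:fc6) illustrates the expected iqn.yyyy-mm.<reversed domain>[:identifier] form.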
Dec 02 09:31:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46423 DF PROTO=TCP SPT=32964 DPT=9101 SEQ=2095925668 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544D2220000000001030307) 
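The kernel DROPPING: entries here and below come from a firewall log-and-drop rule on br-ex; the prefix text is set by the rule, not by the kernel. A minimal nftables sketch that would produce this kind of record, assuming an inet table named filter (the actual EDPM table and chain names may differ), is:
    # nft add rule inet filter input 'iifname "br-ex" counter log prefix "DROPPING: " drop'
Each logged packet in this section is a TCP SYN from 192.168.122.10 to a port in the 9100-9105 or 9882 range that no accept rule matched.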
Dec 02 09:31:58 np0005541914.localdomain systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 02 09:31:58 np0005541914.localdomain sudo[209654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ghurxvppbmqmtevcxkrlnsebmvqwrhlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667918.1006918-380-238565134427118/AnsiballZ_service_facts.py
Dec 02 09:31:58 np0005541914.localdomain sudo[209654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:31:58 np0005541914.localdomain systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 02 09:31:58 np0005541914.localdomain python3.9[209656]: ansible-ansible.builtin.service_facts Invoked
Dec 02 09:31:58 np0005541914.localdomain network[209685]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 09:31:58 np0005541914.localdomain network[209686]: 'network-scripts' will be removed from the distribution in the near future.
Dec 02 09:31:58 np0005541914.localdomain network[209687]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 09:31:58 np0005541914.localdomain systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service.
Dec 02 09:31:59 np0005541914.localdomain setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l ac0f5b60-bb8c-402a-9293-a298ac2eb8d8
Dec 02 09:31:59 np0005541914.localdomain setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.
                                                                 
                                                                 *****  Plugin catchall (100. confidence) suggests   **************************
                                                                 
                                                                 If you believe that iscsid should be allowed search access on the iscsi directory by default,
                                                                 then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do allow this access for now by executing:
                                                                 # ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
                                                                 # semodule -X 300 -i my-iscsid.pp
                                                                 
Dec 02 09:31:59 np0005541914.localdomain setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l ac0f5b60-bb8c-402a-9293-a298ac2eb8d8
Dec 02 09:31:59 np0005541914.localdomain setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.
                                                                 
                                                                 *****  Plugin catchall (100. confidence) suggests   **************************
                                                                 
                                                                 If you believe that iscsid should be allowed search access on the iscsi directory by default,
                                                                 then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do allow this access for now by executing:
                                                                 # ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
                                                                 # semodule -X 300 -i my-iscsid.pp
                                                                 
Dec 02 09:31:59 np0005541914.localdomain setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l ac0f5b60-bb8c-402a-9293-a298ac2eb8d8
Dec 02 09:31:59 np0005541914.localdomain setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.
                                                                 
                                                                 *****  Plugin catchall (100. confidence) suggests   **************************
                                                                 
                                                                 If you believe that iscsid should be allowed search access on the iscsi directory by default,
                                                                 then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do allow this access for now by executing:
                                                                 # ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
                                                                 # semodule -X 300 -i my-iscsid.pp
                                                                 
Dec 02 09:31:59 np0005541914.localdomain setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l ac0f5b60-bb8c-402a-9293-a298ac2eb8d8
Dec 02 09:31:59 np0005541914.localdomain setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.
                                                                 
                                                                 *****  Plugin catchall (100. confidence) suggests   **************************
                                                                 
                                                                 If you believe that iscsid should be allowed search access on the iscsi directory by default,
                                                                 then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do allow this access for now by executing:
                                                                 # ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
                                                                 # semodule -X 300 -i my-iscsid.pp
                                                                 
Dec 02 09:31:59 np0005541914.localdomain setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l ac0f5b60-bb8c-402a-9293-a298ac2eb8d8
Dec 02 09:31:59 np0005541914.localdomain setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.
                                                                 
                                                                 *****  Plugin catchall (100. confidence) suggests   **************************
                                                                 
                                                                 If you believe that iscsid should be allowed search access on the iscsi directory by default,
                                                                 then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do allow this access for now by executing:
                                                                 # ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
                                                                 # semodule -X 300 -i my-iscsid.pp
                                                                 
Dec 02 09:31:59 np0005541914.localdomain setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l ac0f5b60-bb8c-402a-9293-a298ac2eb8d8
Dec 02 09:31:59 np0005541914.localdomain setroubleshoot[209574]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.
                                                                 
                                                                 *****  Plugin catchall (100. confidence) suggests   **************************
                                                                 
                                                                 If you believe that iscsid should be allowed search access on the iscsi directory by default,
                                                                 then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do allow this access for now by executing:
                                                                 # ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
                                                                 # semodule -X 300 -i my-iscsid.pp
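                                                                 Before building the local module suggested above, it is often worth checking whether /etc/iscsi simply carries a stale SELinux label; if restorecon relabels it, the denial may clear without a custom policy (a generic check, not specific to this job):
                                                                 # matchpathcon /etc/iscsi
                                                                 # restorecon -Rv /etc/iscsi
                                                                 # ausearch -m AVC -ts recent -c iscsid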
                                                                 
Dec 02 09:32:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:32:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16792 DF PROTO=TCP SPT=54210 DPT=9102 SEQ=1298376918 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544E3220000000001030307) 
Dec 02 09:32:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:32:03.138 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:32:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:32:03.139 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:32:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:32:03.139 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:32:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25796 DF PROTO=TCP SPT=42340 DPT=9882 SEQ=4285322317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544EB710000000001030307) 
Dec 02 09:32:03 np0005541914.localdomain sudo[209654]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25797 DF PROTO=TCP SPT=42340 DPT=9882 SEQ=4285322317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544EF620000000001030307) 
Dec 02 09:32:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:32:05 np0005541914.localdomain podman[209830]: 2025-12-02 09:32:05.087301879 +0000 UTC m=+0.084886379 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Dec 02 09:32:05 np0005541914.localdomain podman[209830]: 2025-12-02 09:32:05.150117866 +0000 UTC m=+0.147702316 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 02 09:32:05 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
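These transient *.service units are how systemd drives podman's container health checks on this node. The same check can be run by hand against the named container, for example:
    # podman healthcheck run ovn_controller && echo healthy
    # podman ps --filter name=ovn_controller --format '{{.Names}} {{.Status}}'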
Dec 02 09:32:05 np0005541914.localdomain sudo[209945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pndjxfmzvswlzaaaaeftgglcsfwbznhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667925.3765013-410-55268586170840/AnsiballZ_file.py
Dec 02 09:32:05 np0005541914.localdomain sudo[209945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:06 np0005541914.localdomain python3.9[209947]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 09:32:06 np0005541914.localdomain sudo[209945]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25798 DF PROTO=TCP SPT=42340 DPT=9882 SEQ=4285322317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD544F7620000000001030307) 
Dec 02 09:32:06 np0005541914.localdomain sudo[210055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uihakppykrmkhhyvobneyoymapogvfct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667926.339904-434-28918810899472/AnsiballZ_modprobe.py
Dec 02 09:32:06 np0005541914.localdomain sudo[210055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:06 np0005541914.localdomain python3.9[210057]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 02 09:32:07 np0005541914.localdomain sudo[210055]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:07 np0005541914.localdomain sudo[210169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-haydxurvpcajlfjadygbgzbsgbxvlmwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667927.2019618-458-94511307963775/AnsiballZ_stat.py
Dec 02 09:32:07 np0005541914.localdomain sudo[210169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:07 np0005541914.localdomain python3.9[210171]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:32:07 np0005541914.localdomain sudo[210169]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:08 np0005541914.localdomain sudo[210257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ftsnonpuroubbuyrtzidtoectsapjtbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667927.2019618-458-94511307963775/AnsiballZ_copy.py
Dec 02 09:32:08 np0005541914.localdomain sudo[210257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:08 np0005541914.localdomain python3.9[210259]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667927.2019618-458-94511307963775/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:08 np0005541914.localdomain sudo[210257]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:08 np0005541914.localdomain sudo[210367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hiiabifdklrgbeuanisasalyzphrhtoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667928.6382415-506-82524238158817/AnsiballZ_lineinfile.py
Dec 02 09:32:08 np0005541914.localdomain sudo[210367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:32:09 np0005541914.localdomain podman[210369]: 2025-12-02 09:32:09.038864673 +0000 UTC m=+0.086106097 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 09:32:09 np0005541914.localdomain podman[210369]: 2025-12-02 09:32:09.072886757 +0000 UTC m=+0.120128161 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 09:32:09 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:32:09 np0005541914.localdomain python3.9[210370]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:09 np0005541914.localdomain sudo[210367]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:09 np0005541914.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully.
Dec 02 09:32:09 np0005541914.localdomain systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 02 09:32:10 np0005541914.localdomain sudo[210495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bkkzpepgmknxmkmbsvsoruqyniafdlyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667929.6866283-530-13717605449890/AnsiballZ_systemd.py
Dec 02 09:32:10 np0005541914.localdomain sudo[210495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49094 DF PROTO=TCP SPT=48774 DPT=9100 SEQ=2725460907 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54505220000000001030307) 
Dec 02 09:32:10 np0005541914.localdomain python3.9[210497]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:32:10 np0005541914.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 02 09:32:10 np0005541914.localdomain systemd[1]: Stopped Load Kernel Modules.
Dec 02 09:32:10 np0005541914.localdomain systemd[1]: Stopping Load Kernel Modules...
Dec 02 09:32:10 np0005541914.localdomain systemd[1]: Starting Load Kernel Modules...
Dec 02 09:32:10 np0005541914.localdomain systemd-modules-load[210501]: Module 'msr' is built in
Dec 02 09:32:10 np0005541914.localdomain systemd[1]: Finished Load Kernel Modules.
Dec 02 09:32:10 np0005541914.localdomain sudo[210495]: pam_unix(sudo:session): session closed for user root
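Taken together, the modprobe, modules-load.d, /etc/modules and systemd-modules-load tasks above make the dm-multipath module load now and on every boot. A condensed manual equivalent would be roughly:
    # modprobe dm-multipath
    # echo dm-multipath > /etc/modules-load.d/dm-multipath.conf
    # systemctl restart systemd-modules-load.service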
Dec 02 09:32:11 np0005541914.localdomain sudo[210609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qvtkgzghvkztdaqppsqkrjtlihanzsmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667930.7395732-554-257622949548509/AnsiballZ_file.py
Dec 02 09:32:11 np0005541914.localdomain sudo[210609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:11 np0005541914.localdomain python3.9[210611]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:32:11 np0005541914.localdomain sudo[210609]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:11 np0005541914.localdomain sudo[210719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mvywnnvhvwiyhxgwlekyovvlokdpaidb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667931.5415282-581-231199636139758/AnsiballZ_stat.py
Dec 02 09:32:11 np0005541914.localdomain sudo[210719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:12 np0005541914.localdomain python3.9[210721]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:32:12 np0005541914.localdomain sudo[210719]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:12 np0005541914.localdomain sudo[210829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fqfopyjcrfjmigjjfcsgdjmgnijevizo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667932.2944856-608-195758091749649/AnsiballZ_stat.py
Dec 02 09:32:12 np0005541914.localdomain sudo[210829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:12 np0005541914.localdomain python3.9[210831]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:32:12 np0005541914.localdomain sudo[210829]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50232 DF PROTO=TCP SPT=54402 DPT=9105 SEQ=2939221781 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54510A20000000001030307) 
Dec 02 09:32:13 np0005541914.localdomain sudo[210939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-znxjgtactpmvwjivembxjsjwsgnrxnzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667932.9645755-632-128685646380296/AnsiballZ_stat.py
Dec 02 09:32:13 np0005541914.localdomain sudo[210939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:13 np0005541914.localdomain python3.9[210941]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:32:13 np0005541914.localdomain sudo[210939]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:13 np0005541914.localdomain sudo[211027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odtunwlupzazsjtejubutdnssqwvvpll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667932.9645755-632-128685646380296/AnsiballZ_copy.py
Dec 02 09:32:13 np0005541914.localdomain sudo[211027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:13 np0005541914.localdomain sudo[211029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:32:13 np0005541914.localdomain sudo[211029]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:32:13 np0005541914.localdomain sudo[211029]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:13 np0005541914.localdomain sudo[211048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:32:13 np0005541914.localdomain sudo[211048]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:32:13 np0005541914.localdomain python3.9[211033]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667932.9645755-632-128685646380296/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:14 np0005541914.localdomain sudo[211027]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:14 np0005541914.localdomain sudo[211210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrhxhttapgqkufjabaahfvonyzhzkkmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667934.1841953-677-212777534672017/AnsiballZ_command.py
Dec 02 09:32:14 np0005541914.localdomain sudo[211210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:14 np0005541914.localdomain python3.9[211217]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:32:14 np0005541914.localdomain sudo[211210]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:14 np0005541914.localdomain podman[211251]: 2025-12-02 09:32:14.800065507 +0000 UTC m=+0.105299489 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, CEPH_POINT_RELEASE=, release=1763362218, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_CLEAN=True, com.redhat.component=rhceph-container, name=rhceph, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:32:14 np0005541914.localdomain podman[211251]: 2025-12-02 09:32:14.931854699 +0000 UTC m=+0.237088711 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, RELEASE=main, GIT_CLEAN=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True)
Dec 02 09:32:15 np0005541914.localdomain sudo[211419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfucgzfyohstbyljefbbwgyxcieolgcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667934.8716397-700-21351487266135/AnsiballZ_lineinfile.py
Dec 02 09:32:15 np0005541914.localdomain sudo[211419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:15 np0005541914.localdomain sudo[211048]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:15 np0005541914.localdomain python3.9[211425]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:15 np0005541914.localdomain sudo[211428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:32:15 np0005541914.localdomain sudo[211419]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:15 np0005541914.localdomain sudo[211428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:32:15 np0005541914.localdomain sudo[211428]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:15 np0005541914.localdomain sudo[211446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:32:15 np0005541914.localdomain sudo[211446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:32:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58427 DF PROTO=TCP SPT=33362 DPT=9102 SEQ=719211157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5451BCE0000000001030307) 
Dec 02 09:32:16 np0005541914.localdomain sudo[211592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjyvbxoagmreqllmqcitqqirpjjwrnqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667935.5882814-724-134694739760960/AnsiballZ_replace.py
Dec 02 09:32:16 np0005541914.localdomain sudo[211592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:16 np0005541914.localdomain sudo[211446]: pam_unix(sudo:session): session closed for user root
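Interleaved with the zuul tasks, the ceph-admin user is running cephadm host checks: cephadm ls reports the daemons deployed on this host and cephadm gather-facts reports hardware and OS facts, both as JSON. Run manually they look like:
    # cephadm ls
    # cephadm gather-facts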
Dec 02 09:32:16 np0005541914.localdomain python3.9[211601]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:16 np0005541914.localdomain sudo[211592]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:16 np0005541914.localdomain sudo[211668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:32:16 np0005541914.localdomain sudo[211668]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:32:16 np0005541914.localdomain sudo[211668]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:16 np0005541914.localdomain sudo[211732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kgsmrbynigesklxllyfqikgvtbbyhhuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667936.4760263-749-205095142102711/AnsiballZ_replace.py
Dec 02 09:32:16 np0005541914.localdomain sudo[211732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:16 np0005541914.localdomain python3.9[211734]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:16 np0005541914.localdomain sudo[211732]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:18 np0005541914.localdomain sudo[211842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kzvybjaavkixxqlwqjdqhuhkmfuvqmwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667937.2322035-775-149678500311042/AnsiballZ_lineinfile.py
Dec 02 09:32:18 np0005541914.localdomain sudo[211842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:18 np0005541914.localdomain python3.9[211844]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:18 np0005541914.localdomain sudo[211842]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:18 np0005541914.localdomain sudo[211952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nlabqgoicbsqudfucubfysnwhwlibehm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667938.4404442-775-157012121844807/AnsiballZ_lineinfile.py
Dec 02 09:32:18 np0005541914.localdomain sudo[211952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25800 DF PROTO=TCP SPT=42340 DPT=9882 SEQ=4285322317 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54527220000000001030307) 
Dec 02 09:32:18 np0005541914.localdomain python3.9[211954]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:18 np0005541914.localdomain sudo[211952]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:19 np0005541914.localdomain sudo[212062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pgdwyalejfbetacbplvkvpircuwfobwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667939.0414526-775-226877222179686/AnsiballZ_lineinfile.py
Dec 02 09:32:19 np0005541914.localdomain sudo[212062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:19 np0005541914.localdomain python3.9[212064]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:19 np0005541914.localdomain sudo[212062]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:20 np0005541914.localdomain sudo[212172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qvnrvuhikcvvyqwkooigvnxtlzsvvtdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667939.7923553-775-143403587143821/AnsiballZ_lineinfile.py
Dec 02 09:32:20 np0005541914.localdomain sudo[212172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:20 np0005541914.localdomain python3.9[212174]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:20 np0005541914.localdomain sudo[212172]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:20 np0005541914.localdomain sudo[212282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-itbwfrognlgoaaadsofeefzikxnaxift ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667940.5228894-863-35799764171924/AnsiballZ_stat.py
Dec 02 09:32:20 np0005541914.localdomain sudo[212282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:21 np0005541914.localdomain python3.9[212284]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:32:21 np0005541914.localdomain sudo[212282]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:21 np0005541914.localdomain sudo[212394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jxdqamkumawzgfkadanhyytuweytqqmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667941.2340772-887-63893300302401/AnsiballZ_file.py
Dec 02 09:32:21 np0005541914.localdomain sudo[212394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:21 np0005541914.localdomain python3.9[212396]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:21 np0005541914.localdomain sudo[212394]: pam_unix(sudo:session): session closed for user root
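After the copy, lineinfile and replace tasks above, the relevant part of /etc/multipath.conf should read roughly as follows (a sketch inferred from the task parameters, not a dump of the file; the copied template may carry additional defaults), and the touched /etc/multipath/.multipath_restart_required file presumably acts as a flag telling a later step that multipathd needs a restart:
    defaults {
            find_multipaths yes
            recheck_wwid yes
            skip_kpartx yes
            user_friendly_names no
    }
    blacklist {
    }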
Dec 02 09:32:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46425 DF PROTO=TCP SPT=32964 DPT=9101 SEQ=2095925668 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54533230000000001030307) 
Dec 02 09:32:22 np0005541914.localdomain sudo[212504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xuuklpoghmbwoaahbvmrklywyalmvxca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667942.2351093-914-14102228383808/AnsiballZ_file.py
Dec 02 09:32:22 np0005541914.localdomain sudo[212504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:22 np0005541914.localdomain python3.9[212506]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:32:22 np0005541914.localdomain sudo[212504]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:22 np0005541914.localdomain sshd[212524]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:32:23 np0005541914.localdomain sudo[212616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rpdlnkfufiobnqqxldcwvoxmfmhisfpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667942.952681-938-73998113585734/AnsiballZ_stat.py
Dec 02 09:32:23 np0005541914.localdomain sudo[212616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:23 np0005541914.localdomain sshd[212524]: Invalid user ariel from 34.78.29.97 port 39780
Dec 02 09:32:23 np0005541914.localdomain python3.9[212618]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:32:23 np0005541914.localdomain sshd[212524]: Received disconnect from 34.78.29.97 port 39780:11: Bye Bye [preauth]
Dec 02 09:32:23 np0005541914.localdomain sshd[212524]: Disconnected from invalid user ariel 34.78.29.97 port 39780 [preauth]
Dec 02 09:32:23 np0005541914.localdomain sudo[212616]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:23 np0005541914.localdomain sudo[212673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-daonxduengpkedafqjjiilposnolwrtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667942.952681-938-73998113585734/AnsiballZ_file.py
Dec 02 09:32:23 np0005541914.localdomain sudo[212673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:23 np0005541914.localdomain python3.9[212675]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:32:23 np0005541914.localdomain sudo[212673]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:24 np0005541914.localdomain sudo[212783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gmenguwsxidtgkuykcqzdocpqvmqcczb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667944.0233371-938-6060965473699/AnsiballZ_stat.py
Dec 02 09:32:24 np0005541914.localdomain sudo[212783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:24 np0005541914.localdomain python3.9[212785]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:32:24 np0005541914.localdomain sudo[212783]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:24 np0005541914.localdomain sudo[212840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cwviantbpvdfdjrdlxrpvphbyutaacay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667944.0233371-938-6060965473699/AnsiballZ_file.py
Dec 02 09:32:24 np0005541914.localdomain sudo[212840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:24 np0005541914.localdomain python3.9[212842]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:32:24 np0005541914.localdomain sudo[212840]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:25 np0005541914.localdomain sudo[212950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zwpblscytopppsmldoeqtkwskzxbjeeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667945.1232693-1007-90051172980291/AnsiballZ_file.py
Dec 02 09:32:25 np0005541914.localdomain sudo[212950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:25 np0005541914.localdomain python3.9[212952]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:25 np0005541914.localdomain sudo[212950]: pam_unix(sudo:session): session closed for user root
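The mode=420 recorded for the /etc/systemd/system-preset task is simply the decimal form of octal 0644: a YAML mode written unquoted as 0644 is parsed as the integer 420, which is what the module logs. A quick check:
    # printf '%o\n' 420
    644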
Dec 02 09:32:26 np0005541914.localdomain sudo[213060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-warptrxfwxmnwahvsqwlulimdciczrpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667945.7946186-1031-10755731782104/AnsiballZ_stat.py
Dec 02 09:32:26 np0005541914.localdomain sudo[213060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:26 np0005541914.localdomain python3.9[213062]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:32:26 np0005541914.localdomain sudo[213060]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:26 np0005541914.localdomain sudo[213117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tefwlflnzracxpbcokquzhkaspncfgvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667945.7946186-1031-10755731782104/AnsiballZ_file.py
Dec 02 09:32:26 np0005541914.localdomain sudo[213117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:26 np0005541914.localdomain python3.9[213119]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:26 np0005541914.localdomain sudo[213117]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12445 DF PROTO=TCP SPT=52094 DPT=9101 SEQ=362888855 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54547630000000001030307) 
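The kernel "DROPPING:" entries scattered through this run are netfilter LOG-rule hits on br-ex: repeated TCP SYNs from 192.168.122.10 toward ports such as 9100-9105 and 9882, retransmitted with the same SEQ numbers, which suggests they are being logged and not answered. The actual firewall rules on this node are not shown in the log; a minimal iptables sketch that would produce entries with this prefix is below, and in practice such a pair sits at the end of the chain, after the explicit ACCEPT rules, so only unmatched traffic is logged and dropped.

    # Hedged sketch, not the node's real rule set: log with the "DROPPING: " prefix, then drop.
    iptables -A INPUT -i br-ex -j LOG --log-prefix "DROPPING: "
    iptables -A INPUT -i br-ex -j DROP
    # The resulting kernel messages (IN=, SRC=, DST=, DPT=, ...) can be pulled back out with:
    journalctl -k | grep 'DROPPING:'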
Dec 02 09:32:27 np0005541914.localdomain sudo[213227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bcanabytzjalmhjsxicjzhiftvqolltg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667946.9455588-1067-21985277642907/AnsiballZ_stat.py
Dec 02 09:32:27 np0005541914.localdomain sudo[213227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:27 np0005541914.localdomain python3.9[213229]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:32:27 np0005541914.localdomain sudo[213227]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:28 np0005541914.localdomain sudo[213284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qciaeewiwlqvltlelecyrnqkwzgtupum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667946.9455588-1067-21985277642907/AnsiballZ_file.py
Dec 02 09:32:28 np0005541914.localdomain sudo[213284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:28 np0005541914.localdomain python3.9[213286]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:28 np0005541914.localdomain sudo[213284]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:28 np0005541914.localdomain sudo[213394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxfpoultaunekfncdalrjktdnshuuofs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667948.6689525-1103-194385155775639/AnsiballZ_systemd.py
Dec 02 09:32:28 np0005541914.localdomain sudo[213394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:29 np0005541914.localdomain python3.9[213396]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:32:29 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:32:29 np0005541914.localdomain systemd-rc-local-generator[213423]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:32:29 np0005541914.localdomain systemd-sysv-generator[213427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:32:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:32:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:29 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:30 np0005541914.localdomain sudo[213394]: pam_unix(sudo:session): session closed for user root
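The ansible.builtin.systemd task at 09:32:29 (daemon_reload=True, enabled=True, state=started) triggered the Reloading block above and is roughly equivalent to the commands below; the unit and preset names come from the log, everything else is illustrative.

    systemctl daemon-reload
    systemctl enable edpm-container-shutdown.service
    systemctl start edpm-container-shutdown.service
    # The preset installed above at /etc/systemd/system-preset/91-edpm-container-shutdown.preset
    # can also be applied directly, enabling or disabling the unit per the preset policy:
    systemctl preset edpm-container-shutdown.service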
Dec 02 09:32:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58431 DF PROTO=TCP SPT=33362 DPT=9102 SEQ=719211157 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54557220000000001030307) 
Dec 02 09:32:31 np0005541914.localdomain sudo[213542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jhspzdwqjjhjjjjhrlrcxqzwjfshnrep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667950.9214-1127-36508226049194/AnsiballZ_stat.py
Dec 02 09:32:31 np0005541914.localdomain sudo[213542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:31 np0005541914.localdomain python3.9[213544]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:32:31 np0005541914.localdomain sudo[213542]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:31 np0005541914.localdomain sudo[213599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-brwlprrxpdxskwefocznmusqkytzgqif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667950.9214-1127-36508226049194/AnsiballZ_file.py
Dec 02 09:32:31 np0005541914.localdomain sudo[213599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:31 np0005541914.localdomain python3.9[213601]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:31 np0005541914.localdomain sudo[213599]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:32 np0005541914.localdomain sudo[213709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdkyyxshmwjvpqviymwgopikoaxcywsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667952.1111295-1163-274558432056997/AnsiballZ_stat.py
Dec 02 09:32:32 np0005541914.localdomain sudo[213709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:32 np0005541914.localdomain python3.9[213711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:32:32 np0005541914.localdomain sudo[213709]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:32 np0005541914.localdomain sudo[213766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exnhnbgbhkxsovlndvkwzlrxmmsfqhdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667952.1111295-1163-274558432056997/AnsiballZ_file.py
Dec 02 09:32:32 np0005541914.localdomain sudo[213766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:32 np0005541914.localdomain python3.9[213768]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:33 np0005541914.localdomain sudo[213766]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:33 np0005541914.localdomain sudo[213876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pfarhogzywivplooeafmkuqunkkxnvta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667953.1932826-1199-78172741428447/AnsiballZ_systemd.py
Dec 02 09:32:33 np0005541914.localdomain sudo[213876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3443 DF PROTO=TCP SPT=59588 DPT=9882 SEQ=2604281941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54560A10000000001030307) 
Dec 02 09:32:33 np0005541914.localdomain python3.9[213878]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:32:33 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:32:33 np0005541914.localdomain systemd-rc-local-generator[213904]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:32:33 np0005541914.localdomain systemd-sysv-generator[213907]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:32:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:32:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:33 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:34 np0005541914.localdomain systemd[1]: Starting Create netns directory...
Dec 02 09:32:34 np0005541914.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 02 09:32:34 np0005541914.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 02 09:32:34 np0005541914.localdomain systemd[1]: Finished Create netns directory.
Dec 02 09:32:34 np0005541914.localdomain sudo[213876]: pam_unix(sudo:session): session closed for user root
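netns-placeholder is a oneshot unit ("Create netns directory") that ran and deactivated immediately above, together with a transient run-netns-placeholder mount. A couple of hedged follow-up checks, assuming, as the description and that mount name suggest, that the unit's job is to prepare the /run/netns directory used for network namespaces:

    systemctl status netns-placeholder.service --no-pager   # expect the last run to show as completed
    systemctl show -p Result netns-placeholder.service      # "success" if the last run finished cleanly
    findmnt /run/netns                                       # assumption: the unit prepares this path for netns mounts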
Dec 02 09:32:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3444 DF PROTO=TCP SPT=59588 DPT=9882 SEQ=2604281941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54564A20000000001030307) 
Dec 02 09:32:34 np0005541914.localdomain sudo[214027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xilztvfpnbehuncskbygwzannkygtkfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667954.6018782-1229-149270176038541/AnsiballZ_file.py
Dec 02 09:32:34 np0005541914.localdomain sudo[214027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:35 np0005541914.localdomain python3.9[214029]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:32:35 np0005541914.localdomain sudo[214027]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:35 np0005541914.localdomain sudo[214137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmbminzodhdijsqavacjeunkxnwsizsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667955.2911882-1253-207524145104927/AnsiballZ_stat.py
Dec 02 09:32:35 np0005541914.localdomain sudo[214137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:32:35 np0005541914.localdomain podman[214140]: 2025-12-02 09:32:35.709203274 +0000 UTC m=+0.088887335 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:32:35 np0005541914.localdomain podman[214140]: 2025-12-02 09:32:35.751010195 +0000 UTC m=+0.130694316 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller)
Dec 02 09:32:35 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
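The three podman entries above are one health-check cycle for the ovn_controller container: a transient systemd unit runs /usr/bin/podman healthcheck run <id>, the configured test (/openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/ovn_controller) reports healthy, and the unit deactivates. The same check can be exercised by hand; the container name is from the log, the commands are a sketch.

    podman healthcheck run ovn_controller && echo "ovn_controller: healthy"
    podman ps --filter name=ovn_controller --format '{{.Names}} {{.Status}}'   # Status includes the health state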
Dec 02 09:32:35 np0005541914.localdomain python3.9[214139]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:32:35 np0005541914.localdomain sudo[214137]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:36 np0005541914.localdomain sudo[214249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kdymeowhfvmodhyuxnakgpvwwszajfvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667955.2911882-1253-207524145104927/AnsiballZ_copy.py
Dec 02 09:32:36 np0005541914.localdomain sudo[214249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:36 np0005541914.localdomain python3.9[214251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764667955.2911882-1253-207524145104927/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:32:36 np0005541914.localdomain sudo[214249]: pam_unix(sudo:session): session closed for user root
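The copy task above drops the multipathd health-check script at /var/lib/openstack/healthchecks/multipathd/ with owner zuul, mode 0700 and SELinux type container_file_t; later in this run that directory is mounted into the multipathd container at /openstack:ro,z and the script becomes its health-check test. A quick verification of what the task enforced, using the checksum recorded in the log:

    ls -lZ /var/lib/openstack/healthchecks/multipathd/healthcheck   # expect -rwx------ zuul zuul ... container_file_t
    sha1sum /var/lib/openstack/healthchecks/multipathd/healthcheck  # should match af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f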
Dec 02 09:32:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3445 DF PROTO=TCP SPT=59588 DPT=9882 SEQ=2604281941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5456CA20000000001030307) 
Dec 02 09:32:37 np0005541914.localdomain sudo[214359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mqcdfjfvdtaqsqlrtcipyjzxeaqgwhia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667956.873707-1304-259747637021456/AnsiballZ_file.py
Dec 02 09:32:37 np0005541914.localdomain sudo[214359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:37 np0005541914.localdomain python3.9[214361]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:32:37 np0005541914.localdomain sudo[214359]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:37 np0005541914.localdomain sudo[214469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lxyefncxdnavgdqqwlzlkwmrcktjojnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667957.572149-1327-140353524971938/AnsiballZ_stat.py
Dec 02 09:32:37 np0005541914.localdomain sudo[214469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:38 np0005541914.localdomain python3.9[214471]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:32:38 np0005541914.localdomain sudo[214469]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:38 np0005541914.localdomain sudo[214557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rqyodidujbboxonstaldaixvusuzuysq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667957.572149-1327-140353524971938/AnsiballZ_copy.py
Dec 02 09:32:38 np0005541914.localdomain sudo[214557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:38 np0005541914.localdomain python3.9[214559]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667957.572149-1327-140353524971938/.source.json _original_basename=.9fxn10en follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:38 np0005541914.localdomain sudo[214557]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:39 np0005541914.localdomain sudo[214667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jpnxlctxbsqsmvomzdxtwzlxdfwvmksp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667958.7825456-1373-87943968721261/AnsiballZ_file.py
Dec 02 09:32:39 np0005541914.localdomain sudo[214667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:32:39 np0005541914.localdomain systemd[1]: tmp-crun.EvvGOp.mount: Deactivated successfully.
Dec 02 09:32:39 np0005541914.localdomain podman[214670]: 2025-12-02 09:32:39.245167822 +0000 UTC m=+0.099219191 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 02 09:32:39 np0005541914.localdomain podman[214670]: 2025-12-02 09:32:39.252779945 +0000 UTC m=+0.106831214 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Dec 02 09:32:39 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:32:39 np0005541914.localdomain python3.9[214669]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:39 np0005541914.localdomain sudo[214667]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:39 np0005541914.localdomain systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 02 09:32:39 np0005541914.localdomain sudo[214797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jnzkzbqblnuamwhczcipxifzpihnsdnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667959.604713-1396-190483048063299/AnsiballZ_stat.py
Dec 02 09:32:39 np0005541914.localdomain sudo[214797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:40 np0005541914.localdomain sudo[214797]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:40 np0005541914.localdomain sudo[214885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iascfzrvfhixxvdlmhpndnknoqnfppdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667959.604713-1396-190483048063299/AnsiballZ_copy.py
Dec 02 09:32:40 np0005541914.localdomain sudo[214885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25423 DF PROTO=TCP SPT=57064 DPT=9100 SEQ=3621728756 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5457B220000000001030307) 
Dec 02 09:32:40 np0005541914.localdomain sudo[214885]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:40 np0005541914.localdomain systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 02 09:32:41 np0005541914.localdomain sudo[214996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yejqbbjguuxltqhqdpkdmdqoyjhbfxbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667961.085097-1448-196293698189626/AnsiballZ_container_config_data.py
Dec 02 09:32:41 np0005541914.localdomain sudo[214996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:41 np0005541914.localdomain python3.9[214998]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 02 09:32:41 np0005541914.localdomain sudo[214996]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:42 np0005541914.localdomain sudo[215106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efctnjvqbpeysyiwchkpujxiqybtyuqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667962.0028632-1475-60049443050760/AnsiballZ_container_config_hash.py
Dec 02 09:32:42 np0005541914.localdomain sudo[215106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:42 np0005541914.localdomain python3.9[215108]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:32:42 np0005541914.localdomain sudo[215106]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42173 DF PROTO=TCP SPT=39690 DPT=9105 SEQ=325339395 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54585A20000000001030307) 
Dec 02 09:32:43 np0005541914.localdomain sudo[215216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpggunmycwpryrehynjzcphlmfhxluyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667963.0773828-1502-126923854120276/AnsiballZ_podman_container_info.py
Dec 02 09:32:43 np0005541914.localdomain sudo[215216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:43 np0005541914.localdomain python3.9[215218]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 02 09:32:44 np0005541914.localdomain sudo[215216]: pam_unix(sudo:session): session closed for user root
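containers.podman.podman_container_info with name=None gathers inspect data for every container on the host. A rough CLI equivalent, using only standard podman flags (the container name in the second command is just an example from earlier in this log):

    podman ps --all --format json     # one JSON record per container, running or not
    podman inspect ovn_controller     # full inspect data for a single container, similar to what the module returns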
Dec 02 09:32:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65288 DF PROTO=TCP SPT=35386 DPT=9102 SEQ=1053022010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54590FF0000000001030307) 
Dec 02 09:32:47 np0005541914.localdomain sudo[215352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ientfncwjmuxqtjpisdssaynsnxshtym ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764667967.260383-1541-35425589066028/AnsiballZ_edpm_container_manage.py
Dec 02 09:32:47 np0005541914.localdomain sudo[215352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:47 np0005541914.localdomain python3[215354]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:32:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3447 DF PROTO=TCP SPT=59588 DPT=9882 SEQ=2604281941 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5459D220000000001030307) 
Dec 02 09:32:50 np0005541914.localdomain podman[215367]: 2025-12-02 09:32:48.102437484 +0000 UTC m=+0.047375703 image pull  quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 02 09:32:50 np0005541914.localdomain podman[215415]: 
Dec 02 09:32:50 np0005541914.localdomain podman[215415]: 2025-12-02 09:32:50.325301807 +0000 UTC m=+0.074368480 container create 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 09:32:50 np0005541914.localdomain podman[215415]: 2025-12-02 09:32:50.286712554 +0000 UTC m=+0.035779287 image pull  quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 02 09:32:50 np0005541914.localdomain python3[215354]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 02 09:32:50 np0005541914.localdomain sudo[215352]: pam_unix(sudo:session): session closed for user root
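The PODMAN-CONTAINER-DEBUG line above records the exact podman create command that edpm_container_manage issued for multipathd. Note that it only creates the container; the actual start happens further down, through the edpm_multipathd.service unit installed next. A couple of hedged checks that would apply at this point in the run:

    podman ps --all --filter name=multipathd --format '{{.Names}} {{.Status}}'
    podman inspect multipathd --format '{{.State.Status}}'   # expect "created" until edpm_multipathd.service starts it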
Dec 02 09:32:51 np0005541914.localdomain sudo[215561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-npuvjusnpgpypvlexabjcdpmqilrynqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667970.7129538-1565-21474063102520/AnsiballZ_stat.py
Dec 02 09:32:51 np0005541914.localdomain sudo[215561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12447 DF PROTO=TCP SPT=52094 DPT=9101 SEQ=362888855 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545A7220000000001030307) 
Dec 02 09:32:51 np0005541914.localdomain python3.9[215563]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:32:51 np0005541914.localdomain sudo[215561]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:52 np0005541914.localdomain systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 02 09:32:52 np0005541914.localdomain systemd[1]: virtqemud.service: Deactivated successfully.
Dec 02 09:32:52 np0005541914.localdomain sudo[215675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scymrflsdwkdnlcwqyiukftusnpxxlqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667972.0630093-1593-64503761518616/AnsiballZ_file.py
Dec 02 09:32:52 np0005541914.localdomain sudo[215675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:52 np0005541914.localdomain python3.9[215677]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:52 np0005541914.localdomain sudo[215675]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:53 np0005541914.localdomain sshd[215678]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:32:53 np0005541914.localdomain sshd[215678]: Invalid user sdadmin from 45.148.10.240 port 58242
Dec 02 09:32:53 np0005541914.localdomain sudo[215732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mqvonlmrnmfeqmcvkekzmwqosyvjbnjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667972.0630093-1593-64503761518616/AnsiballZ_stat.py
Dec 02 09:32:53 np0005541914.localdomain sudo[215732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:53 np0005541914.localdomain sshd[215678]: Connection closed by invalid user sdadmin 45.148.10.240 port 58242 [preauth]
Dec 02 09:32:53 np0005541914.localdomain python3.9[215734]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:32:53 np0005541914.localdomain sudo[215732]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:54 np0005541914.localdomain sudo[215841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dqfycwoaarjspflnhjariqoguwdjsdba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667973.8296232-1593-95182312249303/AnsiballZ_copy.py
Dec 02 09:32:54 np0005541914.localdomain sudo[215841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:54 np0005541914.localdomain python3.9[215843]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764667973.8296232-1593-95182312249303/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:32:54 np0005541914.localdomain sudo[215841]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:55 np0005541914.localdomain sudo[215896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-anpmmpimyavvmkubobcstiskepmzzhyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667973.8296232-1593-95182312249303/AnsiballZ_systemd.py
Dec 02 09:32:55 np0005541914.localdomain sudo[215896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:55 np0005541914.localdomain python3.9[215898]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:32:55 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:32:55 np0005541914.localdomain systemd-rc-local-generator[215923]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:32:55 np0005541914.localdomain systemd-sysv-generator[215927]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:32:55 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:55 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:55 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:55 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:55 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:32:55 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:55 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:55 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:55 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:55 np0005541914.localdomain sudo[215896]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:55 np0005541914.localdomain sudo[215986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bwppqqkxsvzszxulpfbljobjshgrvfhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667973.8296232-1593-95182312249303/AnsiballZ_systemd.py
Dec 02 09:32:56 np0005541914.localdomain sudo[215986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:56 np0005541914.localdomain python3.9[215988]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:32:56 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:32:56 np0005541914.localdomain systemd-rc-local-generator[216017]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:32:56 np0005541914.localdomain systemd-sysv-generator[216021]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:32:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:32:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:32:56 np0005541914.localdomain systemd[1]: Starting multipathd container...
Dec 02 09:32:56 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:32:56 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2b08aa15d7ed98a99a2ca0ad0a0527b7b07dbb69bb9536db0aff80261887df/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 02 09:32:56 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2b08aa15d7ed98a99a2ca0ad0a0527b7b07dbb69bb9536db0aff80261887df/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 02 09:32:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:32:57 np0005541914.localdomain podman[216030]: 2025-12-02 09:32:57.088558235 +0000 UTC m=+0.314440526 container init 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: + sudo -E kolla_set_configs
Dec 02 09:32:57 np0005541914.localdomain sudo[216050]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 09:32:57 np0005541914.localdomain sudo[216050]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 02 09:32:57 np0005541914.localdomain sudo[216050]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 09:32:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:32:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60629 DF PROTO=TCP SPT=44204 DPT=9101 SEQ=328230410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545BCA20000000001030307) 
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: INFO:__main__:Validating config file
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: INFO:__main__:Writing out command to execute
Dec 02 09:32:57 np0005541914.localdomain sudo[216050]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: ++ cat /run_command
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: + CMD='/usr/sbin/multipathd -d'
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: + ARGS=
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: + sudo kolla_copy_cacerts
Dec 02 09:32:57 np0005541914.localdomain sudo[216066]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 02 09:32:57 np0005541914.localdomain sudo[216066]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 02 09:32:57 np0005541914.localdomain sudo[216066]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 09:32:57 np0005541914.localdomain sudo[216066]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: + [[ ! -n '' ]]
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: + . kolla_extend_start
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: Running command: '/usr/sbin/multipathd -d'
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: + umask 0022
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: + exec /usr/sbin/multipathd -d
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: 10141.453365 | --------start up--------
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: 10141.453383 | read /etc/multipath.conf
Dec 02 09:32:57 np0005541914.localdomain podman[216030]: 2025-12-02 09:32:57.21761793 +0000 UTC m=+0.443500191 container start 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:32:57 np0005541914.localdomain podman[216030]: multipathd
Dec 02 09:32:57 np0005541914.localdomain multipathd[216044]: 10141.457354 | path checkers start up
Dec 02 09:32:57 np0005541914.localdomain systemd[1]: Started multipathd container.
Dec 02 09:32:57 np0005541914.localdomain sudo[215986]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:57 np0005541914.localdomain podman[216053]: 2025-12-02 09:32:57.290830293 +0000 UTC m=+0.154821685 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 09:32:57 np0005541914.localdomain podman[216053]: 2025-12-02 09:32:57.302762108 +0000 UTC m=+0.166753470 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:32:57 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:32:57 np0005541914.localdomain python3.9[216190]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:32:58 np0005541914.localdomain sudo[216300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-muxbowecllcwpkechgnilmvhbaerkugt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667978.130761-1700-150409552299973/AnsiballZ_command.py
Dec 02 09:32:58 np0005541914.localdomain sudo[216300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:58 np0005541914.localdomain python3.9[216302]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:32:58 np0005541914.localdomain sudo[216300]: pam_unix(sudo:session): session closed for user root
Dec 02 09:32:59 np0005541914.localdomain sudo[216423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dkuuxijnmpntfnkautspdvwvbpddtmle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667978.9832797-1724-98069865174362/AnsiballZ_systemd.py
Dec 02 09:32:59 np0005541914.localdomain sudo[216423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:32:59 np0005541914.localdomain python3.9[216425]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:32:59 np0005541914.localdomain systemd[1]: Stopping multipathd container...
Dec 02 09:32:59 np0005541914.localdomain multipathd[216044]: 10143.946184 | exit (signal)
Dec 02 09:32:59 np0005541914.localdomain multipathd[216044]: 10143.946879 | --------shut down-------
Dec 02 09:32:59 np0005541914.localdomain systemd[1]: libpod-2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.scope: Deactivated successfully.
Dec 02 09:32:59 np0005541914.localdomain podman[216429]: 2025-12-02 09:32:59.749210322 +0000 UTC m=+0.109450944 container died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 09:32:59 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.timer: Deactivated successfully.
Dec 02 09:32:59 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:32:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e-userdata-shm.mount: Deactivated successfully.
Dec 02 09:32:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-fd2b08aa15d7ed98a99a2ca0ad0a0527b7b07dbb69bb9536db0aff80261887df-merged.mount: Deactivated successfully.
Dec 02 09:33:00 np0005541914.localdomain podman[216429]: 2025-12-02 09:33:00.010180439 +0000 UTC m=+0.370421031 container cleanup 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 09:33:00 np0005541914.localdomain podman[216429]: multipathd
Dec 02 09:33:00 np0005541914.localdomain podman[216457]: 2025-12-02 09:33:00.123833311 +0000 UTC m=+0.074802593 container cleanup 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 02 09:33:00 np0005541914.localdomain podman[216457]: multipathd
Dec 02 09:33:00 np0005541914.localdomain systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 02 09:33:00 np0005541914.localdomain systemd[1]: Stopped multipathd container.
Dec 02 09:33:00 np0005541914.localdomain systemd[1]: Starting multipathd container...
Dec 02 09:33:00 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:33:00 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2b08aa15d7ed98a99a2ca0ad0a0527b7b07dbb69bb9536db0aff80261887df/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 02 09:33:00 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd2b08aa15d7ed98a99a2ca0ad0a0527b7b07dbb69bb9536db0aff80261887df/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 02 09:33:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:33:00 np0005541914.localdomain podman[216468]: 2025-12-02 09:33:00.309268574 +0000 UTC m=+0.151389721 container init 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: + sudo -E kolla_set_configs
Dec 02 09:33:00 np0005541914.localdomain sudo[216489]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 09:33:00 np0005541914.localdomain sudo[216489]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 02 09:33:00 np0005541914.localdomain sudo[216489]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 09:33:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:33:00 np0005541914.localdomain podman[216468]: 2025-12-02 09:33:00.360257256 +0000 UTC m=+0.202378373 container start 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125)
Dec 02 09:33:00 np0005541914.localdomain podman[216468]: multipathd
Dec 02 09:33:00 np0005541914.localdomain systemd[1]: Started multipathd container.
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: INFO:__main__:Validating config file
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: INFO:__main__:Writing out command to execute
Dec 02 09:33:00 np0005541914.localdomain sudo[216423]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:00 np0005541914.localdomain sudo[216489]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: ++ cat /run_command
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: + CMD='/usr/sbin/multipathd -d'
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: + ARGS=
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: + sudo kolla_copy_cacerts
Dec 02 09:33:00 np0005541914.localdomain sudo[216506]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 02 09:33:00 np0005541914.localdomain sudo[216506]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 02 09:33:00 np0005541914.localdomain sudo[216506]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 02 09:33:00 np0005541914.localdomain sudo[216506]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: + [[ ! -n '' ]]
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: + . kolla_extend_start
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: Running command: '/usr/sbin/multipathd -d'
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: + umask 0022
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: + exec /usr/sbin/multipathd -d
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: 10144.700198 | --------start up--------
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: 10144.700223 | read /etc/multipath.conf
Dec 02 09:33:00 np0005541914.localdomain podman[216492]: 2025-12-02 09:33:00.46224285 +0000 UTC m=+0.099120677 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, tcib_managed=true)
Dec 02 09:33:00 np0005541914.localdomain multipathd[216483]: 10144.705723 | path checkers start up
Dec 02 09:33:00 np0005541914.localdomain podman[216492]: 2025-12-02 09:33:00.501478243 +0000 UTC m=+0.138356120 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 02 09:33:00 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:33:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65292 DF PROTO=TCP SPT=35386 DPT=9102 SEQ=1053022010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545CD220000000001030307) 
Dec 02 09:33:01 np0005541914.localdomain sudo[216629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rdskuekqvuveihbskreodeztvipcijin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667981.1727657-1748-176556183955639/AnsiballZ_file.py
Dec 02 09:33:01 np0005541914.localdomain sudo[216629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:01 np0005541914.localdomain python3.9[216631]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:01 np0005541914.localdomain sudo[216629]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:02 np0005541914.localdomain sudo[216739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hisnebfauwwnptidijyzvbdocwwuqtdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667982.7028239-1784-169421854281428/AnsiballZ_file.py
Dec 02 09:33:02 np0005541914.localdomain sudo[216739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:33:03.140 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:33:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:33:03.141 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:33:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:33:03.141 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:33:03 np0005541914.localdomain python3.9[216741]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 09:33:03 np0005541914.localdomain sudo[216739]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64203 DF PROTO=TCP SPT=33654 DPT=9882 SEQ=2476047008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545D5D10000000001030307) 
Dec 02 09:33:03 np0005541914.localdomain sudo[216849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-irqwzfpwlxebkuncxocxqflebypbrkjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667983.3882694-1807-5340431606046/AnsiballZ_modprobe.py
Dec 02 09:33:03 np0005541914.localdomain sudo[216849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:03 np0005541914.localdomain python3.9[216851]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 02 09:33:03 np0005541914.localdomain sudo[216849]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:04 np0005541914.localdomain sudo[216967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nwcshtadxexffkobmrvtzsltltgguqif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667984.2749496-1832-2724365468302/AnsiballZ_stat.py
Dec 02 09:33:04 np0005541914.localdomain sudo[216967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64204 DF PROTO=TCP SPT=33654 DPT=9882 SEQ=2476047008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545D9E20000000001030307) 
Dec 02 09:33:04 np0005541914.localdomain python3.9[216969]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:33:04 np0005541914.localdomain sudo[216967]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:05 np0005541914.localdomain sudo[217055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mkcvfpipiskdneldzvuvqrwiuohsfchk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667984.2749496-1832-2724365468302/AnsiballZ_copy.py
Dec 02 09:33:05 np0005541914.localdomain sudo[217055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:05 np0005541914.localdomain python3.9[217057]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764667984.2749496-1832-2724365468302/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:05 np0005541914.localdomain sudo[217055]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:05 np0005541914.localdomain sudo[217165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xkchffxkoeeaaojqlxfwhsdqihvynvae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667985.6133854-1880-129779877994790/AnsiballZ_lineinfile.py
Dec 02 09:33:05 np0005541914.localdomain sudo[217165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:33:06 np0005541914.localdomain systemd[1]: tmp-crun.leCWB1.mount: Deactivated successfully.
Dec 02 09:33:06 np0005541914.localdomain podman[217168]: 2025-12-02 09:33:06.016253516 +0000 UTC m=+0.104344679 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:33:06 np0005541914.localdomain podman[217168]: 2025-12-02 09:33:06.066353521 +0000 UTC m=+0.154444704 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:33:06 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:33:06 np0005541914.localdomain python3.9[217167]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:06 np0005541914.localdomain sudo[217165]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:06 np0005541914.localdomain sudo[217300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qbmczhkvrnkeefmdbbpmarfdxztydgoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667986.3321657-1904-21078327817130/AnsiballZ_systemd.py
Dec 02 09:33:06 np0005541914.localdomain sudo[217300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64205 DF PROTO=TCP SPT=33654 DPT=9882 SEQ=2476047008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545E1E20000000001030307) 
Dec 02 09:33:06 np0005541914.localdomain python3.9[217302]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:33:06 np0005541914.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 02 09:33:06 np0005541914.localdomain systemd[1]: Stopped Load Kernel Modules.
Dec 02 09:33:06 np0005541914.localdomain systemd[1]: Stopping Load Kernel Modules...
Dec 02 09:33:06 np0005541914.localdomain systemd[1]: Starting Load Kernel Modules...
Dec 02 09:33:06 np0005541914.localdomain systemd-modules-load[217306]: Module 'msr' is built in
Dec 02 09:33:06 np0005541914.localdomain systemd[1]: Finished Load Kernel Modules.
Dec 02 09:33:07 np0005541914.localdomain sudo[217300]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:07 np0005541914.localdomain sudo[217414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ybmxbnpcfsjbuvpxxmcyvmiqtyogpqqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667987.3668585-1928-2207463190828/AnsiballZ_dnf.py
Dec 02 09:33:07 np0005541914.localdomain sudo[217414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:07 np0005541914.localdomain python3.9[217416]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:33:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:33:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57601 DF PROTO=TCP SPT=38354 DPT=9100 SEQ=2652238194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545EF230000000001030307) 
Dec 02 09:33:10 np0005541914.localdomain systemd[1]: tmp-crun.1LYvht.mount: Deactivated successfully.
Dec 02 09:33:10 np0005541914.localdomain podman[217419]: 2025-12-02 09:33:10.122681505 +0000 UTC m=+0.123239178 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:33:10 np0005541914.localdomain podman[217419]: 2025-12-02 09:33:10.15323227 +0000 UTC m=+0.153789903 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 09:33:10 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:33:11 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:33:11 np0005541914.localdomain systemd-rc-local-generator[217471]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:33:11 np0005541914.localdomain systemd-sysv-generator[217475]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:33:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:33:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:11 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:33:11 np0005541914.localdomain systemd-rc-local-generator[217509]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:33:11 np0005541914.localdomain systemd-sysv-generator[217512]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd-logind[760]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 02 09:33:12 np0005541914.localdomain systemd-logind[760]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 02 09:33:12 np0005541914.localdomain lvm[217557]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 02 09:33:12 np0005541914.localdomain lvm[217558]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 02 09:33:12 np0005541914.localdomain lvm[217557]: VG ceph_vg1 finished
Dec 02 09:33:12 np0005541914.localdomain lvm[217558]: VG ceph_vg0 finished
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: Starting man-db-cache-update.service...
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:33:12 np0005541914.localdomain systemd-sysv-generator[217612]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:33:12 np0005541914.localdomain systemd-rc-local-generator[217609]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:33:12 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25424 DF PROTO=TCP SPT=57064 DPT=9100 SEQ=3621728756 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD545F9230000000001030307) 
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:12 np0005541914.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Dec 02 09:33:13 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 02 09:33:13 np0005541914.localdomain systemd[1]: Finished man-db-cache-update.service.
Dec 02 09:33:13 np0005541914.localdomain systemd[1]: man-db-cache-update.service: Consumed 1.252s CPU time.
Dec 02 09:33:13 np0005541914.localdomain systemd[1]: run-r3d2f3b322466429da446a5651b0cf14c.service: Deactivated successfully.
Dec 02 09:33:13 np0005541914.localdomain sudo[217414]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:15 np0005541914.localdomain python3.9[218851]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:33:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43103 DF PROTO=TCP SPT=35618 DPT=9102 SEQ=2384408096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546062E0000000001030307) 
Dec 02 09:33:16 np0005541914.localdomain sudo[218963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kahprvglpyjrsrqkjakkoudtpkneorqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667996.2180214-1980-232149061459449/AnsiballZ_file.py
Dec 02 09:33:16 np0005541914.localdomain sudo[218963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:16 np0005541914.localdomain python3.9[218965]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:16 np0005541914.localdomain sudo[218963]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:16 np0005541914.localdomain sudo[218966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:33:16 np0005541914.localdomain sudo[218966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:33:16 np0005541914.localdomain sudo[218966]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:16 np0005541914.localdomain sudo[219001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:33:16 np0005541914.localdomain sudo[219001]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:33:17 np0005541914.localdomain sudo[219001]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:17 np0005541914.localdomain sudo[219141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ducwpbmshlxobrjedswxlvmrmjufivnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764667997.2993023-2013-12189800670195/AnsiballZ_systemd_service.py
Dec 02 09:33:17 np0005541914.localdomain sudo[219141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:17 np0005541914.localdomain python3.9[219143]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:33:17 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:33:17 np0005541914.localdomain systemd-rc-local-generator[219170]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:33:17 np0005541914.localdomain systemd-sysv-generator[219173]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:33:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:33:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:18 np0005541914.localdomain sudo[219141]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:18 np0005541914.localdomain sudo[219195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:33:18 np0005541914.localdomain sudo[219195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:33:18 np0005541914.localdomain sudo[219195]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:18 np0005541914.localdomain python3.9[219304]: ansible-ansible.builtin.service_facts Invoked
Dec 02 09:33:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64207 DF PROTO=TCP SPT=33654 DPT=9882 SEQ=2476047008 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54611220000000001030307) 
Dec 02 09:33:18 np0005541914.localdomain network[219321]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 09:33:18 np0005541914.localdomain network[219322]: 'network-scripts' will be removed from distribution in near future.
Dec 02 09:33:18 np0005541914.localdomain network[219323]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 09:33:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:33:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60631 DF PROTO=TCP SPT=44204 DPT=9101 SEQ=328230410 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5461D220000000001030307) 
Dec 02 09:33:26 np0005541914.localdomain sudo[219556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qenghbkepzlfvdlfepwaqanqbsaxlcep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668005.758083-2070-40624714103044/AnsiballZ_systemd_service.py
Dec 02 09:33:26 np0005541914.localdomain sudo[219556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:26 np0005541914.localdomain python3.9[219558]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:33:26 np0005541914.localdomain sudo[219556]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60949 DF PROTO=TCP SPT=56422 DPT=9101 SEQ=139676450 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54631A20000000001030307) 
Dec 02 09:33:27 np0005541914.localdomain sshd[219648]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:33:27 np0005541914.localdomain sudo[219669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jcpkpfpoofrvksjpmjqxdoguvsrfogbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668006.4511547-2070-88939844475019/AnsiballZ_systemd_service.py
Dec 02 09:33:27 np0005541914.localdomain sudo[219669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:27 np0005541914.localdomain python3.9[219671]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:33:27 np0005541914.localdomain sudo[219669]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:28 np0005541914.localdomain sshd[219648]: Received disconnect from 34.78.29.97 port 34716:11: Bye Bye [preauth]
Dec 02 09:33:28 np0005541914.localdomain sshd[219648]: Disconnected from authenticating user root 34.78.29.97 port 34716 [preauth]
Dec 02 09:33:28 np0005541914.localdomain sudo[219780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dzhribjtwkoowdcskpbaldkyphkhfhyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668008.0858939-2070-178885963563588/AnsiballZ_systemd_service.py
Dec 02 09:33:28 np0005541914.localdomain sudo[219780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:28 np0005541914.localdomain python3.9[219782]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:33:28 np0005541914.localdomain sudo[219780]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:29 np0005541914.localdomain sudo[219891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yehynzjqnfwfrprtadibayfvlaekegnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668009.0205128-2070-268158836505506/AnsiballZ_systemd_service.py
Dec 02 09:33:29 np0005541914.localdomain sudo[219891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:29 np0005541914.localdomain python3.9[219893]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:33:29 np0005541914.localdomain sudo[219891]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:29 np0005541914.localdomain sudo[220002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mmmzmbsuvhqknafdvvxktnbofencpiiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668009.7108366-2070-156363522970103/AnsiballZ_systemd_service.py
Dec 02 09:33:29 np0005541914.localdomain sudo[220002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:30 np0005541914.localdomain python3.9[220004]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:33:30 np0005541914.localdomain sudo[220002]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:30 np0005541914.localdomain sudo[220113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttdlrgtmfiptqybhmytqjenofpeyqjeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668010.4132378-2070-191083318589472/AnsiballZ_systemd_service.py
Dec 02 09:33:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:33:30 np0005541914.localdomain sudo[220113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:30 np0005541914.localdomain podman[220115]: 2025-12-02 09:33:30.817013355 +0000 UTC m=+0.097731205 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:33:30 np0005541914.localdomain podman[220115]: 2025-12-02 09:33:30.85338346 +0000 UTC m=+0.134101360 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 09:33:30 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:33:31 np0005541914.localdomain python3.9[220116]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:33:31 np0005541914.localdomain sudo[220113]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:31 np0005541914.localdomain sudo[220244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-agvqblecvldelwybzdmpfhvxlavzfsoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668011.1729782-2070-244282919627378/AnsiballZ_systemd_service.py
Dec 02 09:33:31 np0005541914.localdomain sudo[220244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43107 DF PROTO=TCP SPT=35618 DPT=9102 SEQ=2384408096 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54643230000000001030307) 
Dec 02 09:33:31 np0005541914.localdomain python3.9[220246]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:33:31 np0005541914.localdomain sudo[220244]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:32 np0005541914.localdomain sudo[220355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dzplpnjowbcadihhykjxwcxqyewijqvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668011.9004948-2070-226144768068677/AnsiballZ_systemd_service.py
Dec 02 09:33:32 np0005541914.localdomain sudo[220355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:32 np0005541914.localdomain python3.9[220357]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:33:32 np0005541914.localdomain sudo[220355]: pam_unix(sudo:session): session closed for user root
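[annotation] The run of ansible.builtin.systemd_service invocations above stops and disables each tripleo_nova_* unit in turn (state=stopped, enabled=False). A stand-alone sketch of the same operation, assuming only the unit names that appear in the log:

    import subprocess

    TRIPLEO_NOVA_UNITS = [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]

    for unit in TRIPLEO_NOVA_UNITS:
        # Mirror state=stopped / enabled=False from the logged module calls;
        # check=False because a unit that is already stopped is not an error here.
        subprocess.run(["systemctl", "stop", unit], check=False)
        subprocess.run(["systemctl", "disable", unit], check=False)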
Dec 02 09:33:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16438 DF PROTO=TCP SPT=54014 DPT=9882 SEQ=551791104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5464B010000000001030307) 
Dec 02 09:33:33 np0005541914.localdomain sudo[220466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gzwdvbtwyqkkpgpxnwgznysbboxumnix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668013.4829664-2247-18287568013815/AnsiballZ_file.py
Dec 02 09:33:33 np0005541914.localdomain sudo[220466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:33 np0005541914.localdomain python3.9[220468]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:33 np0005541914.localdomain sudo[220466]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:34 np0005541914.localdomain sudo[220576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lacgeihonzccpsfmkxekcncfletlpbig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668014.0759823-2247-86448515268895/AnsiballZ_file.py
Dec 02 09:33:34 np0005541914.localdomain sudo[220576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:34 np0005541914.localdomain python3.9[220578]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:34 np0005541914.localdomain sudo[220576]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16439 DF PROTO=TCP SPT=54014 DPT=9882 SEQ=551791104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5464F230000000001030307) 
Dec 02 09:33:35 np0005541914.localdomain sudo[220686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wzteegdwzojykbupyuapiuffnjclqthv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668014.6260173-2247-268621119611875/AnsiballZ_file.py
Dec 02 09:33:35 np0005541914.localdomain sudo[220686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:35 np0005541914.localdomain python3.9[220688]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:35 np0005541914.localdomain sudo[220686]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:36 np0005541914.localdomain sudo[220796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhidgamuqzzootqjzeufsscovwiyukeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668015.7845056-2247-253800767438510/AnsiballZ_file.py
Dec 02 09:33:36 np0005541914.localdomain sudo[220796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:36 np0005541914.localdomain python3.9[220798]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:36 np0005541914.localdomain sudo[220796]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:36 np0005541914.localdomain sudo[220906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jqjiheeyjggyuwrdxctxxbkphiocyjgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668016.433952-2247-202735669560476/AnsiballZ_file.py
Dec 02 09:33:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:33:36 np0005541914.localdomain sudo[220906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16440 DF PROTO=TCP SPT=54014 DPT=9882 SEQ=551791104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54657220000000001030307) 
Dec 02 09:33:36 np0005541914.localdomain podman[220908]: 2025-12-02 09:33:36.824092683 +0000 UTC m=+0.094284591 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 02 09:33:36 np0005541914.localdomain podman[220908]: 2025-12-02 09:33:36.859713294 +0000 UTC m=+0.129905242 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 02 09:33:36 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:33:36 np0005541914.localdomain python3.9[220909]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:36 np0005541914.localdomain sudo[220906]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:37 np0005541914.localdomain sudo[221040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-acdlkaglbzzjgwewxbkucelnnnfctyhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668017.4701004-2247-174418980645509/AnsiballZ_file.py
Dec 02 09:33:37 np0005541914.localdomain sudo[221040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:37 np0005541914.localdomain python3.9[221042]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:37 np0005541914.localdomain sudo[221040]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:38 np0005541914.localdomain sudo[221150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqrsndcrsusbdhbovmfbcahknkyhepkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668018.0118546-2247-85669770645312/AnsiballZ_file.py
Dec 02 09:33:38 np0005541914.localdomain sudo[221150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:38 np0005541914.localdomain python3.9[221152]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:38 np0005541914.localdomain sudo[221150]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:39 np0005541914.localdomain sudo[221260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ovgzrjjivphalybeyepdidxkrddfvgis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668018.5666902-2247-214328199657522/AnsiballZ_file.py
Dec 02 09:33:39 np0005541914.localdomain sudo[221260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:39 np0005541914.localdomain python3.9[221262]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:39 np0005541914.localdomain sudo[221260]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:39 np0005541914.localdomain sudo[221370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ptrtmbcyxymllcbhcebplsrmrtlejpms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668019.550525-2418-107273286067607/AnsiballZ_file.py
Dec 02 09:33:39 np0005541914.localdomain sudo[221370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:39 np0005541914.localdomain python3.9[221372]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:40 np0005541914.localdomain sudo[221370]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4241 DF PROTO=TCP SPT=50620 DPT=9100 SEQ=3217129148 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54665230000000001030307) 
Dec 02 09:33:40 np0005541914.localdomain sudo[221480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efqrnbbexsledyszdcidfwhhcbjjnnxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668020.1035306-2418-25731625225873/AnsiballZ_file.py
Dec 02 09:33:40 np0005541914.localdomain sudo[221480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:33:40 np0005541914.localdomain systemd[1]: tmp-crun.2OzpLc.mount: Deactivated successfully.
Dec 02 09:33:40 np0005541914.localdomain podman[221483]: 2025-12-02 09:33:40.448177374 +0000 UTC m=+0.082514447 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 09:33:40 np0005541914.localdomain podman[221483]: 2025-12-02 09:33:40.479403285 +0000 UTC m=+0.113740418 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Dec 02 09:33:40 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:33:40 np0005541914.localdomain python3.9[221482]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:40 np0005541914.localdomain sudo[221480]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:40 np0005541914.localdomain sudo[221609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vvlzovaqfnwjiwhxdrmrlvyquzfeflut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668020.683691-2418-167231591209710/AnsiballZ_file.py
Dec 02 09:33:40 np0005541914.localdomain sudo[221609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:41 np0005541914.localdomain python3.9[221611]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:41 np0005541914.localdomain sudo[221609]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:41 np0005541914.localdomain sudo[221719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trrzldoswkwwmuyzhxuvdytnoashfzqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668021.2629461-2418-176652231315943/AnsiballZ_file.py
Dec 02 09:33:41 np0005541914.localdomain sudo[221719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:41 np0005541914.localdomain python3.9[221721]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:41 np0005541914.localdomain sudo[221719]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:42 np0005541914.localdomain sudo[221829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xhmvhzbpqpfrrabqvtdmsfxdtibifdtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668021.861411-2418-257929197352927/AnsiballZ_file.py
Dec 02 09:33:42 np0005541914.localdomain sudo[221829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:42 np0005541914.localdomain python3.9[221831]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:42 np0005541914.localdomain sudo[221829]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:42 np0005541914.localdomain sudo[221939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hevyqwhxzptxemuldxtikgmvdfvytbra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668022.4349864-2418-23595727743236/AnsiballZ_file.py
Dec 02 09:33:42 np0005541914.localdomain sudo[221939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:42 np0005541914.localdomain python3.9[221941]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:42 np0005541914.localdomain sudo[221939]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33207 DF PROTO=TCP SPT=60846 DPT=9105 SEQ=1117703903 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54670230000000001030307) 
Dec 02 09:33:43 np0005541914.localdomain sudo[222049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tqsbegswzbnszxpluygtapxkgcvhfdub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668023.0332224-2418-235688264715906/AnsiballZ_file.py
Dec 02 09:33:43 np0005541914.localdomain sudo[222049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:43 np0005541914.localdomain python3.9[222051]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:43 np0005541914.localdomain sudo[222049]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:43 np0005541914.localdomain sudo[222159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tcbgtnypliwrttjauhduzvulewlagftg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668023.6227689-2418-131386406812757/AnsiballZ_file.py
Dec 02 09:33:43 np0005541914.localdomain sudo[222159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:44 np0005541914.localdomain python3.9[222161]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:33:44 np0005541914.localdomain sudo[222159]: pam_unix(sudo:session): session closed for user root
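[annotation] The two passes of ansible.builtin.file with state=absent above delete the tripleo_nova_* unit files, first from /usr/lib/systemd/system and then from /etc/systemd/system. A hedged sketch of the equivalent clean-up, assuming those two directories and the unit list shown in the log:

    import os

    units = [
        "tripleo_nova_compute.service", "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service", "tripleo_nova_api.service",
        "tripleo_nova_conductor.service", "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service", "tripleo_nova_vnc_proxy.service",
    ]
    for unit_dir in ("/usr/lib/systemd/system", "/etc/systemd/system"):
        for unit in units:
            path = os.path.join(unit_dir, unit)
            # state=absent is idempotent: a missing file is simply skipped.
            if os.path.exists(path):
                os.remove(path)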
Dec 02 09:33:44 np0005541914.localdomain sudo[222269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zbrhtquvildggkqesxtqdqdmtonfpnnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668024.5808845-2592-181554699608954/AnsiballZ_command.py
Dec 02 09:33:44 np0005541914.localdomain sudo[222269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:45 np0005541914.localdomain python3.9[222271]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:33:45 np0005541914.localdomain sudo[222269]: pam_unix(sudo:session): session closed for user root
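[annotation] The shell snippet logged above only acts when certmonger.service is currently active: it disables and stops it, and masks it only if no unit file already exists under /etc/systemd/system, because masking places a /dev/null symlink at exactly that path and must not clobber a real file. The same logic as a Python sketch:

    import os
    import subprocess

    # Only touch certmonger if it is currently active ("is-active" exits 0 in that case).
    if subprocess.run(["systemctl", "is-active", "certmonger.service"]).returncode == 0:
        subprocess.run(["systemctl", "disable", "--now", "certmonger.service"], check=True)
        # Mask only when no unit file exists at this path; masking creates a symlink there.
        if not os.path.isfile("/etc/systemd/system/certmonger.service"):
            subprocess.run(["systemctl", "mask", "certmonger.service"], check=True)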
Dec 02 09:33:45 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42178 DF PROTO=TCP SPT=39690 DPT=9105 SEQ=325339395 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5467B230000000001030307) 
Dec 02 09:33:46 np0005541914.localdomain python3.9[222381]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 02 09:33:47 np0005541914.localdomain sudo[222489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ozfjjgusrgpaotdefnoosmojxwzyfxoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668027.2280927-2646-182136516956149/AnsiballZ_systemd_service.py
Dec 02 09:33:47 np0005541914.localdomain sudo[222489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:47 np0005541914.localdomain python3.9[222491]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:33:47 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:33:47 np0005541914.localdomain systemd-rc-local-generator[222516]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:33:47 np0005541914.localdomain systemd-sysv-generator[222521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:33:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:33:48 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:48 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:48 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:48 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:33:48 np0005541914.localdomain sudo[222489]: pam_unix(sudo:session): session closed for user root
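[annotation] The systemd_service call with daemon_reload=True corresponds to the "Reloading." line: PID 1 re-reads all unit files and re-runs its generators, which is why the same sysv-generator, virt*d "notify-reload", and MemoryLimit= warnings reappear here; they are emitted on every reload and are harmless. A one-line sketch of the equivalent manual step:

    import subprocess
    # Ask systemd to re-read unit files and re-run generators, as daemon_reload=True does.
    subprocess.run(["systemctl", "daemon-reload"], check=True)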
Dec 02 09:33:48 np0005541914.localdomain sudo[222635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pnkcovhtxthluxrltcqhcfcsljchalbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668028.4342544-2670-231627352243759/AnsiballZ_command.py
Dec 02 09:33:48 np0005541914.localdomain sudo[222635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:48 np0005541914.localdomain python3.9[222637]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:33:48 np0005541914.localdomain sudo[222635]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16442 DF PROTO=TCP SPT=54014 DPT=9882 SEQ=551791104 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54687220000000001030307) 
Dec 02 09:33:49 np0005541914.localdomain sudo[222746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbufuaqgduqtlegsgftnmiqrweliiokz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668029.0132935-2670-257253237774722/AnsiballZ_command.py
Dec 02 09:33:49 np0005541914.localdomain sudo[222746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:49 np0005541914.localdomain python3.9[222748]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:33:49 np0005541914.localdomain sudo[222746]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:49 np0005541914.localdomain sudo[222857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-amllpvbmglreujikzpjchdizypgxoaee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668029.646787-2670-86730652558498/AnsiballZ_command.py
Dec 02 09:33:49 np0005541914.localdomain sudo[222857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:50 np0005541914.localdomain python3.9[222859]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:33:50 np0005541914.localdomain sudo[222857]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:50 np0005541914.localdomain sudo[222968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxesdawymmkmmnvmvgycwnuxiljwwpez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668030.320213-2670-227699639868491/AnsiballZ_command.py
Dec 02 09:33:50 np0005541914.localdomain sudo[222968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:50 np0005541914.localdomain python3.9[222970]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:33:50 np0005541914.localdomain sudo[222968]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:51 np0005541914.localdomain sudo[223079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pgfairosneafztihqxjulgkqxviatskh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668030.929189-2670-253893345297822/AnsiballZ_command.py
Dec 02 09:33:51 np0005541914.localdomain sudo[223079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:51 np0005541914.localdomain python3.9[223081]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:33:51 np0005541914.localdomain sudo[223079]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:51 np0005541914.localdomain sudo[223190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sgsmqeycmohjxdhoitbdrxccsmrlqqqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668031.517255-2670-277472572419873/AnsiballZ_command.py
Dec 02 09:33:51 np0005541914.localdomain sudo[223190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:51 np0005541914.localdomain python3.9[223192]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:33:51 np0005541914.localdomain sudo[223190]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:52 np0005541914.localdomain sudo[223301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xidcyvkebicazuwpfdprmwjlrriwtavd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668032.0944083-2670-47332231741485/AnsiballZ_command.py
Dec 02 09:33:52 np0005541914.localdomain sudo[223301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:52 np0005541914.localdomain python3.9[223303]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:33:52 np0005541914.localdomain sudo[223301]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:52 np0005541914.localdomain sudo[223412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvryehzdxahlnoxwwsrmbwyvqqsxsuak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668032.6742063-2670-138455269854740/AnsiballZ_command.py
Dec 02 09:33:52 np0005541914.localdomain sudo[223412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:53 np0005541914.localdomain python3.9[223414]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:33:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31187 DF PROTO=TCP SPT=58044 DPT=9101 SEQ=452282954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54697230000000001030307) 
Dec 02 09:33:53 np0005541914.localdomain sudo[223412]: pam_unix(sudo:session): session closed for user root
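[annotation] Each ansible.legacy.command call above runs systemctl reset-failed against one of the removed tripleo_nova_* units, clearing any remembered "failed" state so the deleted units no longer show up in systemctl --failed after the reload. A sketch of a single step, assuming the same unit names as earlier:

    import subprocess
    # One reset-failed per removed unit; the playbook repeats this for each
    # tripleo_nova_* service listed in the log.
    subprocess.run(["/usr/bin/systemctl", "reset-failed", "tripleo_nova_compute.service"],
                   check=False)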
Dec 02 09:33:57 np0005541914.localdomain sudo[223523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tusdldyrmwkxwshbqkuxrfajthhmraui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668036.8161623-2877-160567088077247/AnsiballZ_file.py
Dec 02 09:33:57 np0005541914.localdomain sudo[223523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31188 DF PROTO=TCP SPT=58044 DPT=9101 SEQ=452282954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546A6E20000000001030307) 
Dec 02 09:33:57 np0005541914.localdomain python3.9[223525]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:33:57 np0005541914.localdomain sudo[223523]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:57 np0005541914.localdomain sudo[223633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fluzmnlnunslfedewqoqryywqgqkfuwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668037.3751042-2877-235206148742185/AnsiballZ_file.py
Dec 02 09:33:57 np0005541914.localdomain sudo[223633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33210 DF PROTO=TCP SPT=60846 DPT=9105 SEQ=1117703903 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546A9230000000001030307) 
Dec 02 09:33:57 np0005541914.localdomain python3.9[223635]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:33:57 np0005541914.localdomain sudo[223633]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:58 np0005541914.localdomain sudo[223743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cnrzmmaxmwwtqbiucgpcekwghztyozsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668037.914839-2877-100911598390844/AnsiballZ_file.py
Dec 02 09:33:58 np0005541914.localdomain sudo[223743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:59 np0005541914.localdomain python3.9[223745]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:33:59 np0005541914.localdomain sudo[223743]: pam_unix(sudo:session): session closed for user root
Dec 02 09:33:59 np0005541914.localdomain sudo[223853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-veevowtkmbydrzapljwcpbrurpdferat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668039.3861415-2943-39838129982202/AnsiballZ_file.py
Dec 02 09:33:59 np0005541914.localdomain sudo[223853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:33:59 np0005541914.localdomain python3.9[223855]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:33:59 np0005541914.localdomain sudo[223853]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:00 np0005541914.localdomain sudo[223963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nvrgsrpwtkqsyhomfivceimcgsqbevoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668039.9953833-2943-56216384505687/AnsiballZ_file.py
Dec 02 09:34:00 np0005541914.localdomain sudo[223963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:00 np0005541914.localdomain python3.9[223965]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:00 np0005541914.localdomain sudo[223963]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:00 np0005541914.localdomain sudo[224073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zovkjpzhvgppeuwwrtjfqnkwbayvcqhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668040.6743078-2943-10026452898560/AnsiballZ_file.py
Dec 02 09:34:00 np0005541914.localdomain sudo[224073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:34:01 np0005541914.localdomain systemd[1]: tmp-crun.leTjHV.mount: Deactivated successfully.
Dec 02 09:34:01 np0005541914.localdomain podman[224076]: 2025-12-02 09:34:01.011496812 +0000 UTC m=+0.075155848 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:34:01 np0005541914.localdomain podman[224076]: 2025-12-02 09:34:01.023109723 +0000 UTC m=+0.086768759 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 02 09:34:01 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:34:01 np0005541914.localdomain python3.9[224075]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:01 np0005541914.localdomain sudo[224073]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26139 DF PROTO=TCP SPT=33942 DPT=9102 SEQ=915411990 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546B7230000000001030307) 
Dec 02 09:34:01 np0005541914.localdomain sudo[224202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxdqrfsbodpurfqfgnmbonqsvhjwymwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668041.2535088-2943-99900281360178/AnsiballZ_file.py
Dec 02 09:34:01 np0005541914.localdomain sudo[224202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:01 np0005541914.localdomain python3.9[224204]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:01 np0005541914.localdomain sudo[224202]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:02 np0005541914.localdomain sudo[224312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwascuvlbjcokfgeetyirvsacwklckpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668041.8048773-2943-54831733866108/AnsiballZ_file.py
Dec 02 09:34:02 np0005541914.localdomain sudo[224312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:02 np0005541914.localdomain python3.9[224314]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:02 np0005541914.localdomain sudo[224312]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:02 np0005541914.localdomain sudo[224422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rybuwzqxlnqaomslybqdpbhlaoivbrrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668042.3331566-2943-187838193000764/AnsiballZ_file.py
Dec 02 09:34:02 np0005541914.localdomain sudo[224422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:02 np0005541914.localdomain python3.9[224424]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:02 np0005541914.localdomain sudo[224422]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:34:03.141 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:34:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:34:03.142 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:34:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:34:03.142 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:34:03 np0005541914.localdomain sudo[224532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rwoaesxrozcdukwamkhsoyuckoimlloe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668042.896979-2943-159205314743177/AnsiballZ_file.py
Dec 02 09:34:03 np0005541914.localdomain sudo[224532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:03 np0005541914.localdomain python3.9[224534]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:03 np0005541914.localdomain sudo[224532]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=273 DF PROTO=TCP SPT=32908 DPT=9882 SEQ=521459412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546C0310000000001030307) 
Dec 02 09:34:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=275 DF PROTO=TCP SPT=32908 DPT=9882 SEQ=521459412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546CC220000000001030307) 
Dec 02 09:34:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:34:07 np0005541914.localdomain systemd[1]: tmp-crun.YeanwO.mount: Deactivated successfully.
Dec 02 09:34:07 np0005541914.localdomain podman[224552]: 2025-12-02 09:34:07.0718719 +0000 UTC m=+0.075376255 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 02 09:34:07 np0005541914.localdomain podman[224552]: 2025-12-02 09:34:07.138014207 +0000 UTC m=+0.141518582 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 09:34:07 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:34:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19577 DF PROTO=TCP SPT=59234 DPT=9100 SEQ=3569422483 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546D9220000000001030307) 
Dec 02 09:34:10 np0005541914.localdomain sudo[224668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ktwavwlreozwjyublnqzhninkgakysuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668049.56203-3268-70407187022178/AnsiballZ_getent.py
Dec 02 09:34:10 np0005541914.localdomain sudo[224668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:34:10 np0005541914.localdomain podman[224671]: 2025-12-02 09:34:10.708202949 +0000 UTC m=+0.086817651 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 02 09:34:10 np0005541914.localdomain podman[224671]: 2025-12-02 09:34:10.740831644 +0000 UTC m=+0.119446316 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 09:34:10 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:34:10 np0005541914.localdomain python3.9[224670]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 02 09:34:10 np0005541914.localdomain sudo[224668]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:11 np0005541914.localdomain sudo[224799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ftpkpvgobhigivluomkruzeouzdahoqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668051.2118933-3292-54699056031072/AnsiballZ_group.py
Dec 02 09:34:11 np0005541914.localdomain sudo[224799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:11 np0005541914.localdomain python3.9[224801]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 02 09:34:11 np0005541914.localdomain groupadd[224802]: group added to /etc/group: name=nova, GID=42436
Dec 02 09:34:11 np0005541914.localdomain groupadd[224802]: group added to /etc/gshadow: name=nova
Dec 02 09:34:11 np0005541914.localdomain groupadd[224802]: new group: name=nova, GID=42436
Dec 02 09:34:11 np0005541914.localdomain sudo[224799]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:12 np0005541914.localdomain sudo[224915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jygvxubfcobijtfbxurqmbzzzqlswtfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668052.0703032-3315-103322954018909/AnsiballZ_user.py
Dec 02 09:34:12 np0005541914.localdomain sudo[224915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:12 np0005541914.localdomain python3.9[224917]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541914.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 02 09:34:12 np0005541914.localdomain useradd[224919]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 02 09:34:12 np0005541914.localdomain useradd[224919]: add 'nova' to group 'libvirt'
Dec 02 09:34:12 np0005541914.localdomain useradd[224919]: add 'nova' to shadow group 'libvirt'
Dec 02 09:34:12 np0005541914.localdomain sudo[224915]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33344 DF PROTO=TCP SPT=41786 DPT=9105 SEQ=1859910421 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546E5630000000001030307) 
Dec 02 09:34:13 np0005541914.localdomain sshd[224943]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:34:13 np0005541914.localdomain sshd[224943]: Accepted publickey for zuul from 192.168.122.30 port 36190 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:34:13 np0005541914.localdomain systemd-logind[760]: New session 54 of user zuul.
Dec 02 09:34:13 np0005541914.localdomain systemd[1]: Started Session 54 of User zuul.
Dec 02 09:34:13 np0005541914.localdomain sshd[224943]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:34:13 np0005541914.localdomain sshd[224946]: Received disconnect from 192.168.122.30 port 36190:11: disconnected by user
Dec 02 09:34:13 np0005541914.localdomain sshd[224946]: Disconnected from user zuul 192.168.122.30 port 36190
Dec 02 09:34:13 np0005541914.localdomain sshd[224943]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:34:13 np0005541914.localdomain systemd[1]: session-54.scope: Deactivated successfully.
Dec 02 09:34:13 np0005541914.localdomain systemd-logind[760]: Session 54 logged out. Waiting for processes to exit.
Dec 02 09:34:13 np0005541914.localdomain systemd-logind[760]: Removed session 54.
Dec 02 09:34:14 np0005541914.localdomain python3.9[225054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:34:15 np0005541914.localdomain python3.9[225140]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668054.120079-3391-212330499939368/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:15 np0005541914.localdomain python3.9[225248]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:34:15 np0005541914.localdomain python3.9[225303]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35276 DF PROTO=TCP SPT=45496 DPT=9102 SEQ=1369579200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546F08E0000000001030307) 
Dec 02 09:34:16 np0005541914.localdomain python3.9[225411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:34:17 np0005541914.localdomain python3.9[225497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668056.1458085-3391-212501758833158/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:17 np0005541914.localdomain python3.9[225605]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:34:18 np0005541914.localdomain python3.9[225691]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668057.1997192-3391-227233417575978/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=2618deabb92e3bb6763a4ba7147e78332a2d3a7c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:18 np0005541914.localdomain sudo[225747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:34:18 np0005541914.localdomain sudo[225747]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:34:18 np0005541914.localdomain sudo[225747]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:18 np0005541914.localdomain sudo[225798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:34:18 np0005541914.localdomain sudo[225798]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:34:18 np0005541914.localdomain python3.9[225834]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:34:19 np0005541914.localdomain sudo[225798]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35278 DF PROTO=TCP SPT=45496 DPT=9102 SEQ=1369579200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD546FCA20000000001030307) 
Dec 02 09:34:19 np0005541914.localdomain python3.9[225940]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668058.2300456-3391-89488593237671/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:19 np0005541914.localdomain python3.9[226061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:34:19 np0005541914.localdomain sudo[226062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:34:19 np0005541914.localdomain sudo[226062]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:34:19 np0005541914.localdomain sudo[226062]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:20 np0005541914.localdomain python3.9[226165]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668059.2794132-3391-134617593127501/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:20 np0005541914.localdomain sudo[226273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ssgervikzhxmwpjpreqmlojgfhcjjdqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668060.6258936-3640-15356851293617/AnsiballZ_file.py
Dec 02 09:34:20 np0005541914.localdomain sudo[226273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:21 np0005541914.localdomain python3.9[226275]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:34:21 np0005541914.localdomain sudo[226273]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:21 np0005541914.localdomain sudo[226383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gahpxcrfborbpvnwegaqjbwhgvgheljn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668061.2844408-3663-249192961909339/AnsiballZ_copy.py
Dec 02 09:34:21 np0005541914.localdomain sudo[226383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:21 np0005541914.localdomain python3.9[226385]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:34:21 np0005541914.localdomain sudo[226383]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31190 DF PROTO=TCP SPT=58044 DPT=9101 SEQ=452282954 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54707230000000001030307) 
Dec 02 09:34:22 np0005541914.localdomain sudo[226493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-svdppssnafynqgjjoizbezlyabzjwbie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668061.9239995-3688-60130816119889/AnsiballZ_stat.py
Dec 02 09:34:22 np0005541914.localdomain sudo[226493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:22 np0005541914.localdomain python3.9[226495]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:34:22 np0005541914.localdomain sudo[226493]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:22 np0005541914.localdomain sudo[226605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kuafiuzedfwjvdavkbosneyhjvoihyrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668062.6430845-3715-234347577854446/AnsiballZ_file.py
Dec 02 09:34:22 np0005541914.localdomain sudo[226605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:23 np0005541914.localdomain python3.9[226607]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:34:23 np0005541914.localdomain sudo[226605]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:23 np0005541914.localdomain python3.9[226715]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:34:24 np0005541914.localdomain python3.9[226825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:34:24 np0005541914.localdomain python3.9[226911]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668064.0266976-3767-250150558813287/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:25 np0005541914.localdomain python3.9[227019]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:34:26 np0005541914.localdomain python3.9[227105]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668065.2211702-3811-253836097410432/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:34:26 np0005541914.localdomain sudo[227213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-osqjugxkotpvlufzemduinuwmodmqsci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668066.608295-3862-39785315652477/AnsiballZ_container_config_data.py
Dec 02 09:34:26 np0005541914.localdomain sudo[227213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:27 np0005541914.localdomain python3.9[227215]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 02 09:34:27 np0005541914.localdomain sudo[227213]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22783 DF PROTO=TCP SPT=35984 DPT=9101 SEQ=1731321425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5471C220000000001030307) 
Dec 02 09:34:27 np0005541914.localdomain sudo[227323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-urtpklxixaubpsqpanpcwiyiajdktord ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668067.4078724-3889-93724691550567/AnsiballZ_container_config_hash.py
Dec 02 09:34:27 np0005541914.localdomain sudo[227323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:27 np0005541914.localdomain python3.9[227325]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:34:27 np0005541914.localdomain sudo[227323]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:28 np0005541914.localdomain sudo[227433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-berptsjzhmsqlqyjgaqtkfdkvqaurytm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764668068.3206825-3919-49268167688354/AnsiballZ_edpm_container_manage.py
Dec 02 09:34:28 np0005541914.localdomain sudo[227433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:28 np0005541914.localdomain python3[227435]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:34:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35280 DF PROTO=TCP SPT=45496 DPT=9102 SEQ=1369579200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5472D230000000001030307) 
Dec 02 09:34:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:34:32 np0005541914.localdomain podman[227462]: 2025-12-02 09:34:32.087481095 +0000 UTC m=+0.091888049 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:34:32 np0005541914.localdomain podman[227462]: 2025-12-02 09:34:32.094182633 +0000 UTC m=+0.098589617 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 02 09:34:32 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:34:32 np0005541914.localdomain sshd[227492]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:34:33 np0005541914.localdomain sshd[227492]: Received disconnect from 34.78.29.97 port 37006:11: Bye Bye [preauth]
Dec 02 09:34:33 np0005541914.localdomain sshd[227492]: Disconnected from authenticating user root 34.78.29.97 port 37006 [preauth]
Dec 02 09:34:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46519 DF PROTO=TCP SPT=34970 DPT=9882 SEQ=2390784494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54735610000000001030307) 
Dec 02 09:34:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46520 DF PROTO=TCP SPT=34970 DPT=9882 SEQ=2390784494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54739620000000001030307) 
Dec 02 09:34:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46521 DF PROTO=TCP SPT=34970 DPT=9882 SEQ=2390784494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54741630000000001030307) 
Dec 02 09:34:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:34:40 np0005541914.localdomain podman[227508]: 2025-12-02 09:34:40.100886642 +0000 UTC m=+2.101207369 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 02 09:34:40 np0005541914.localdomain podman[227449]: 2025-12-02 09:34:28.898360153 +0000 UTC m=+0.055670532 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 02 09:34:40 np0005541914.localdomain podman[227508]: 2025-12-02 09:34:40.190776457 +0000 UTC m=+2.191097174 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:34:40 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:34:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49518 DF PROTO=TCP SPT=57808 DPT=9100 SEQ=3813979202 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5474F230000000001030307) 
Dec 02 09:34:40 np0005541914.localdomain podman[227554]: 2025-12-02 09:34:40.403419111 +0000 UTC m=+0.088480173 container create 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, container_name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:34:40 np0005541914.localdomain podman[227554]: 2025-12-02 09:34:40.362944121 +0000 UTC m=+0.048005203 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 02 09:34:40 np0005541914.localdomain python3[227435]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 02 09:34:40 np0005541914.localdomain sudo[227433]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:34:41 np0005541914.localdomain systemd[1]: tmp-crun.SQCzJO.mount: Deactivated successfully.
Dec 02 09:34:41 np0005541914.localdomain podman[227610]: 2025-12-02 09:34:41.133937539 +0000 UTC m=+0.134248496 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:34:41 np0005541914.localdomain podman[227610]: 2025-12-02 09:34:41.164960914 +0000 UTC m=+0.165271881 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:34:41 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:34:42 np0005541914.localdomain sudo[227718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-stqytbpgbvpsctmaefdyluvghsiudqts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668082.6657867-3943-201293915867258/AnsiballZ_stat.py
Dec 02 09:34:42 np0005541914.localdomain sudo[227718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:43 np0005541914.localdomain python3.9[227720]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:34:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10786 DF PROTO=TCP SPT=42652 DPT=9105 SEQ=18637374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5475A620000000001030307) 
Dec 02 09:34:43 np0005541914.localdomain sudo[227718]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:44 np0005541914.localdomain sudo[227830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oinzlauzlbvwpvsqmfbyonpaxwteezor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668083.7605906-3979-243878891734181/AnsiballZ_container_config_data.py
Dec 02 09:34:44 np0005541914.localdomain sudo[227830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:44 np0005541914.localdomain python3.9[227832]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 02 09:34:44 np0005541914.localdomain sudo[227830]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:44 np0005541914.localdomain sudo[227940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjvlcstwijojlbjlcuzgllxrafcdeeps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668084.5107744-4006-226270915465573/AnsiballZ_container_config_hash.py
Dec 02 09:34:44 np0005541914.localdomain sudo[227940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:44 np0005541914.localdomain python3.9[227942]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:34:45 np0005541914.localdomain sudo[227940]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:45 np0005541914.localdomain sudo[228050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-chohbxchxzhvhmslunedtwffryjgtzfe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764668085.4355223-4036-171493649125989/AnsiballZ_edpm_container_manage.py
Dec 02 09:34:45 np0005541914.localdomain sudo[228050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:45 np0005541914.localdomain python3[228052]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:34:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51877 DF PROTO=TCP SPT=44782 DPT=9102 SEQ=1218652330 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54765BE0000000001030307) 
Dec 02 09:34:46 np0005541914.localdomain python3[228052]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",
                                                                    "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:31:10.62653219Z",
                                                                    "Config": {
                                                                         "User": "nova",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 1211779450,
                                                                    "VirtualSize": 1211779450,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",
                                                                              "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "nova",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:53.072482982Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:02.761216507Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:03.785234187Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:17.194997182Z",
                                                                              "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:24.14458279Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:30.048641643Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:09.707360362Z",
                                                                              "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.208898452Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624465805Z",
                                                                              "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624514176Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:18.661822382Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 02 09:34:46 np0005541914.localdomain podman[228102]: 2025-12-02 09:34:46.355682187 +0000 UTC m=+0.088854258 container remove 6b81f17245677f673cb3b4e1a7b4b615e0e7187fa246a297cb0ca4781eeb8c9e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'd89676d7ec0a7c13ef9894fdb26c6e3a-51230b537c6b56095225b7a0a6b952d0'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, tcib_managed=true, name=rhosp17/openstack-nova-compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.12 17.1_20251118.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1761123044, version=17.1.12, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream, managed_by=tripleo_ansible, batch=17.1_20251118.1, org.opencontainers.image.revision=d13aeaae6d02e9d9273775f1920879be7af2cf2d, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=d13aeaae6d02e9d9273775f1920879be7af2cf2d, vendor=Red Hat, Inc., container_name=nova_compute, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2025-11-19T00:36:58Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Dec 02 09:34:46 np0005541914.localdomain python3[228052]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute
Dec 02 09:34:46 np0005541914.localdomain podman[228115]: 
Dec 02 09:34:46 np0005541914.localdomain podman[228115]: 2025-12-02 09:34:46.448192261 +0000 UTC m=+0.079155580 container create e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 02 09:34:46 np0005541914.localdomain podman[228115]: 2025-12-02 09:34:46.40038795 +0000 UTC m=+0.031351299 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 02 09:34:46 np0005541914.localdomain python3[228052]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec 02 09:34:46 np0005541914.localdomain sudo[228050]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:47 np0005541914.localdomain sudo[228259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oljgmzzydeekfrgyfoknbesxuixgpbuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668086.7771866-4060-69632834283454/AnsiballZ_stat.py
Dec 02 09:34:47 np0005541914.localdomain sudo[228259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:47 np0005541914.localdomain python3.9[228261]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:34:47 np0005541914.localdomain sudo[228259]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:47 np0005541914.localdomain sudo[228371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yegrnjwbqiczqiwaciygncpnucmtzsjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668087.5700088-4086-234788342146718/AnsiballZ_file.py
Dec 02 09:34:47 np0005541914.localdomain sudo[228371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:48 np0005541914.localdomain python3.9[228373]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:34:48 np0005541914.localdomain sudo[228371]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:48 np0005541914.localdomain sudo[228480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvqdfvyeiygtdnsivlpfckcxfhqbgjvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668088.1030197-4086-222717080797742/AnsiballZ_copy.py
Dec 02 09:34:48 np0005541914.localdomain sudo[228480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:48 np0005541914.localdomain python3.9[228482]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668088.1030197-4086-222717080797742/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:34:48 np0005541914.localdomain sudo[228480]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:48 np0005541914.localdomain sudo[228535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lkuhuvpbeadotezuwdcffwncmkdqzomh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668088.1030197-4086-222717080797742/AnsiballZ_systemd.py
Dec 02 09:34:48 np0005541914.localdomain sudo[228535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46523 DF PROTO=TCP SPT=34970 DPT=9882 SEQ=2390784494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54771230000000001030307) 
Dec 02 09:34:49 np0005541914.localdomain python3.9[228537]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:34:49 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:34:49 np0005541914.localdomain systemd-rc-local-generator[228559]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:34:49 np0005541914.localdomain systemd-sysv-generator[228565]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:34:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:34:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:49 np0005541914.localdomain sudo[228535]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:49 np0005541914.localdomain sudo[228626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ddcclzdcexuxlnfjjfjfkdrujjzqxtbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668088.1030197-4086-222717080797742/AnsiballZ_systemd.py
Dec 02 09:34:49 np0005541914.localdomain sudo[228626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:50 np0005541914.localdomain python3.9[228628]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:34:50 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:34:50 np0005541914.localdomain systemd-rc-local-generator[228655]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:34:50 np0005541914.localdomain systemd-sysv-generator[228661]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:34:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:34:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:50 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:34:50 np0005541914.localdomain systemd[1]: Starting nova_compute container...
Dec 02 09:34:50 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:34:50 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 02 09:34:50 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 02 09:34:50 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 09:34:50 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 09:34:50 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 02 09:34:50 np0005541914.localdomain podman[228669]: 2025-12-02 09:34:50.706340083 +0000 UTC m=+0.105003202 container init e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 02 09:34:50 np0005541914.localdomain podman[228669]: 2025-12-02 09:34:50.71877622 +0000 UTC m=+0.117439349 container start e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 02 09:34:50 np0005541914.localdomain podman[228669]: nova_compute
Dec 02 09:34:50 np0005541914.localdomain systemd[1]: Started nova_compute container.
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: + sudo -E kolla_set_configs
Dec 02 09:34:50 np0005541914.localdomain sudo[228626]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Validating config file
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Copying service configuration files
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Deleting /etc/ceph
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Creating directory /etc/ceph
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /etc/ceph
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Writing out command to execute
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: ++ cat /run_command
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: + CMD=nova-compute
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: + ARGS=
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: + sudo kolla_copy_cacerts
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: + [[ ! -n '' ]]
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: + . kolla_extend_start
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: Running command: 'nova-compute'
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: + echo 'Running command: '\''nova-compute'\'''
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: + umask 0022
Dec 02 09:34:50 np0005541914.localdomain nova_compute[228682]: + exec nova-compute
Dec 02 09:34:51 np0005541914.localdomain python3.9[228802]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:34:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22785 DF PROTO=TCP SPT=35984 DPT=9101 SEQ=1731321425 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5477D230000000001030307) 
Dec 02 09:34:52 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:52.567 228686 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 02 09:34:52 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:52.568 228686 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 02 09:34:52 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:52.568 228686 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 02 09:34:52 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:52.568 228686 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 02 09:34:52 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:52.690 228686 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:34:52 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:52.700 228686 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:34:52 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:52.700 228686 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.091 228686 INFO nova.virt.driver [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 02 09:34:53 np0005541914.localdomain python3.9[228914]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.210 228686 INFO nova.compute.provider_config [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.218 228686 WARNING nova.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.218 228686 DEBUG oslo_concurrency.lockutils [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.218 228686 DEBUG oslo_concurrency.lockutils [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.218 228686 DEBUG oslo_concurrency.lockutils [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.219 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.219 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.219 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.219 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.219 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.219 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.220 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.221 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] console_host                   = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.222 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.223 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.223 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.223 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.223 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.223 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.223 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] host                           = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.224 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.225 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.225 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.225 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.225 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.225 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.225 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.226 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.227 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.228 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.229 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] my_block_storage_ip            = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.230 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] my_ip                          = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.231 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.232 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.233 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.234 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.235 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.236 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.237 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.238 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.239 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.240 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.240 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.240 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.240 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.240 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.240 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.241 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.242 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.243 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.244 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.244 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.244 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.244 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.244 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.244 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.245 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.246 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.247 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.247 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.247 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.247 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.247 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.247 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.248 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.248 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.248 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.248 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.248 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.249 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.249 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.249 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.249 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.249 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.250 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.251 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.252 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.253 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.254 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.255 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.256 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.257 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.258 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.258 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.258 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.258 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.258 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.258 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.259 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.260 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.261 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.262 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.263 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.264 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.265 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.266 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.267 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.268 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.269 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.269 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.269 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.269 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.269 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.269 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.270 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.271 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.272 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.273 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.274 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.275 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.276 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.277 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.278 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.279 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.280 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.281 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.282 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.282 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.282 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.282 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.282 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.282 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.283 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.284 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.285 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.286 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.287 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.288 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.288 228686 WARNING oslo_config.cfg [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: and ``live_migration_inbound_addr`` respectively.
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: ).  Its value may be silently ignored in the future.
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.288 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.288 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.288 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.289 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.289 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.289 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.289 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.289 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.289 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.290 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.290 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.290 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.290 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.290 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.290 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.291 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.291 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.291 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.291 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rbd_secret_uuid        = c7c8e171-a193-56fb-95fa-8879fcfa7074 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.291 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.291 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.292 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.292 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.292 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.292 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.292 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.293 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.293 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.293 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.293 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.293 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.294 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.295 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.296 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.297 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.297 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.297 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.297 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.297 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.297 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.298 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.299 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.300 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.301 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.302 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.303 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.304 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.305 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.306 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.306 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.306 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.306 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.306 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.306 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.307 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.307 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.307 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.307 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.307 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.308 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.308 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.308 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.308 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.308 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.309 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.309 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.309 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.309 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.309 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.310 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.311 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.312 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.312 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.312 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.312 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.312 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.313 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.314 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.315 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.316 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.317 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.318 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.319 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.320 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.320 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.320 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.320 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.320 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.320 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.321 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.322 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.323 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.324 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.325 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.325 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.325 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.325 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.325 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.325 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.326 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.327 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.328 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.329 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.329 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.329 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.329 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.329 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.329 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.330 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.331 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.332 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.333 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.334 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.334 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.334 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.334 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.334 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.334 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.334 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.335 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.336 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.337 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.338 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.338 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.338 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.338 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.338 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.338 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.338 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.339 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.340 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.341 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.342 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.343 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.343 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.343 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.343 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.343 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.343 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.344 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.345 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.346 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.347 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.348 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.349 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.349 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.349 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.349 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.349 228686 DEBUG oslo_service.service [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
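[Editor's note] The banner above closes the option dump that nova-compute writes at startup; every "option = value" line is produced by oslo.config's ConfigOpts.log_opt_values() (the cfg.py:2609 reference on each entry). A minimal sketch of how a service emits such a dump, assuming oslo.config and oslo.log are installed and borrowing a couple of option names from the [vnc] values logged above (the script itself is hypothetical, not part of nova):

    import logging as py_logging

    from oslo_config import cfg
    from oslo_log import log as logging

    CONF = cfg.CONF
    LOG = logging.getLogger(__name__)

    # Example options mirroring two of the values seen in the dump above.
    vnc_opts = [
        cfg.BoolOpt('enabled', default=True),
        cfg.PortOpt('novncproxy_port', default=6080),
    ]
    CONF.register_opts(vnc_opts, group='vnc')

    if __name__ == '__main__':
        logging.register_options(CONF)
        CONF([], project='example')
        logging.setup(CONF, 'example')
        # This call produces the "group.option = value" DEBUG lines seen above.
        CONF.log_opt_values(LOG, py_logging.DEBUG)

Values such as vmware.host_password and oslo_messaging_notifications.transport_url show up as **** because oslo.config masks options registered as secret when logging them.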
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.350 228686 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.383 228686 INFO nova.virt.node [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Determined node identity 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from /var/lib/nova/compute_id
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.384 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.385 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.385 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.385 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 02 09:34:53 np0005541914.localdomain systemd[1]: Started libvirt QEMU daemon.
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.448 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6ca50d8df0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.451 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6ca50d8df0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.451 228686 INFO nova.virt.libvirt.driver [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Connection event '1' reason 'None'
Dec 02 09:34:53 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:53.469 228686 DEBUG nova.virt.libvirt.volume.mount [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 02 09:34:53 np0005541914.localdomain python3.9[229075]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.350 228686 INFO nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Libvirt host capabilities <capabilities>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <host>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <uuid>64aa5208-7bf7-490c-857b-3c1a3cae8bb3</uuid>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <cpu>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <arch>x86_64</arch>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model>EPYC-Rome-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <vendor>AMD</vendor>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <microcode version='16777317'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <signature family='23' model='49' stepping='0'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='x2apic'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='tsc-deadline'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='osxsave'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='hypervisor'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='tsc_adjust'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='spec-ctrl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='stibp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='arch-capabilities'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='cmp_legacy'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='topoext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='virt-ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='lbrv'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='tsc-scale'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='vmcb-clean'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='pause-filter'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='pfthreshold'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='svme-addr-chk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='rdctl-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='skip-l1dfl-vmentry'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='mds-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature name='pschange-mc-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <pages unit='KiB' size='4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <pages unit='KiB' size='2048'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <pages unit='KiB' size='1048576'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </cpu>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <power_management>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <suspend_mem/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <suspend_disk/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <suspend_hybrid/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </power_management>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <iommu support='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <migration_features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <live/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <uri_transports>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <uri_transport>tcp</uri_transport>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <uri_transport>rdma</uri_transport>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </uri_transports>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </migration_features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <topology>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <cells num='1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <cell id='0'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:           <memory unit='KiB'>16116612</memory>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:           <pages unit='KiB' size='4'>4029153</pages>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:           <pages unit='KiB' size='2048'>0</pages>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:           <distances>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:             <sibling id='0' value='10'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:           </distances>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:           <cpus num='8'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:           </cpus>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         </cell>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </cells>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </topology>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <cache>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </cache>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <secmodel>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model>selinux</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <doi>0</doi>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </secmodel>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <secmodel>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model>dac</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <doi>0</doi>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </secmodel>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </host>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <guest>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <os_type>hvm</os_type>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <arch name='i686'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <wordsize>32</wordsize>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <domain type='qemu'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <domain type='kvm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </arch>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <pae/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <nonpae/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <acpi default='on' toggle='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <apic default='on' toggle='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <cpuselection/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <deviceboot/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <disksnapshot default='on' toggle='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <externalSnapshot/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </guest>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <guest>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <os_type>hvm</os_type>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <arch name='x86_64'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <wordsize>64</wordsize>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <domain type='qemu'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <domain type='kvm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </arch>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <acpi default='on' toggle='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <apic default='on' toggle='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <cpuselection/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <deviceboot/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <disksnapshot default='on' toggle='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <externalSnapshot/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </guest>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: </capabilities>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.360 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.378 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: <domainCapabilities>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <path>/usr/libexec/qemu-kvm</path>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <domain>kvm</domain>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <arch>i686</arch>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <vcpu max='240'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <iothreads supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <os supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <enum name='firmware'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <loader supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>rom</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pflash</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='readonly'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>yes</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>no</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='secure'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>no</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </loader>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </os>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <cpu>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='host-passthrough' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='hostPassthroughMigratable'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>on</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>off</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='maximum' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='maximumMigratable'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>on</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>off</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='host-model' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <vendor>AMD</vendor>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='x2apic'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='tsc-deadline'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='hypervisor'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='tsc_adjust'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='spec-ctrl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='stibp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='cmp_legacy'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='overflow-recov'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='succor'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='ibrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='amd-ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='virt-ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='lbrv'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='tsc-scale'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='vmcb-clean'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='pause-filter'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='pfthreshold'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='svme-addr-chk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='disable' name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='custom' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cooperlake'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cooperlake-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cooperlake-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Dhyana-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Genoa'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amd-psfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='auto-ibrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='no-nested-data-bp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='null-sel-clr-base'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='stibp-always-on'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Genoa-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amd-psfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='auto-ibrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='no-nested-data-bp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='null-sel-clr-base'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='stibp-always-on'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Milan'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Milan-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Milan-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amd-psfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='no-nested-data-bp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='null-sel-clr-base'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='stibp-always-on'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='GraniteRapids'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='prefetchiti'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='GraniteRapids-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='prefetchiti'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='GraniteRapids-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10-128'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10-256'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10-512'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='prefetchiti'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v6'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v7'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='KnightsMill'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4fmaps'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4vnniw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512er'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512pf'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='KnightsMill-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4fmaps'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4vnniw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512er'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512pf'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G4-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tbm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G5-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tbm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SierraForest'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ne-convert'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cmpccxadd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SierraForest-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ne-convert'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cmpccxadd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='athlon'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='athlon-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='core2duo'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='core2duo-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='coreduo'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='coreduo-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='n270'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='n270-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='phenom'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='phenom-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </cpu>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <memoryBacking supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <enum name='sourceType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>file</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>anonymous</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>memfd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </memoryBacking>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <devices>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <disk supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='diskDevice'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>disk</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>cdrom</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>floppy</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>lun</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='bus'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>ide</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>fdc</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>scsi</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>usb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>sata</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-non-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </disk>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <graphics supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vnc</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>egl-headless</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>dbus</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </graphics>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <video supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='modelType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vga</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>cirrus</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>none</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>bochs</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>ramfb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </video>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <hostdev supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='mode'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>subsystem</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='startupPolicy'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>default</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>mandatory</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>requisite</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>optional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='subsysType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>usb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pci</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>scsi</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='capsType'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='pciBackend'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </hostdev>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <rng supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-non-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendModel'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>random</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>egd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>builtin</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </rng>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <filesystem supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='driverType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>path</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>handle</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtiofs</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </filesystem>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <tpm supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tpm-tis</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tpm-crb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendModel'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>emulator</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>external</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendVersion'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>2.0</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </tpm>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <redirdev supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='bus'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>usb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </redirdev>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <channel supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pty</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>unix</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </channel>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <crypto supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>qemu</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendModel'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>builtin</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </crypto>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <interface supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>default</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>passt</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </interface>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <panic supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>isa</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>hyperv</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </panic>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <console supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>null</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vc</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pty</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>dev</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>file</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pipe</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>stdio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>udp</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tcp</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>unix</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>qemu-vdagent</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>dbus</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </console>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </devices>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <gic supported='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <vmcoreinfo supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <genid supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <backingStoreInput supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <backup supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <async-teardown supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <ps2 supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <sev supported='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <sgx supported='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <hyperv supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='features'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>relaxed</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vapic</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>spinlocks</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vpindex</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>runtime</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>synic</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>stimer</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>reset</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vendor_id</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>frequencies</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>reenlightenment</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tlbflush</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>ipi</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>avic</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>emsr_bitmap</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>xmm_input</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <defaults>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <spinlocks>4095</spinlocks>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <stimer_direct>on</stimer_direct>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <tlbflush_direct>off</tlbflush_direct>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <tlbflush_extended>off</tlbflush_extended>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </defaults>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </hyperv>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <launchSecurity supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='sectype'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tdx</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </launchSecurity>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: </domainCapabilities>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
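[editor's note] The <domainCapabilities> document logged above is what nova's _get_domain_capabilities (nova/virt/libvirt/host.py, referenced in the line above) receives from libvirt. A minimal sketch of reproducing that query with the standard libvirt Python bindings is shown below; the connection URI and the emulator/arch/machine/virttype values are illustrative examples, not taken from this log, and only libvirt.open() and getDomainCapabilities() are real API calls.

    # Hypothetical sketch: fetch the same kind of domainCapabilities XML that
    # nova-compute logs at debug level. Parameter values are example choices.
    import libvirt

    conn = libvirt.open('qemu:///system')        # connect to the local QEMU/KVM hypervisor
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',                 # emulator binary, as in the <path> element above
        'x86_64',                                # guest architecture (example value)
        'q35',                                   # machine type alias (example value)
        'kvm',                                   # virtualization type
        0)                                       # flags
    print(caps_xml)                              # prints a <domainCapabilities> document like the one above
    conn.close()

The equivalent one-off check from a shell on the compute host would be `virsh domcapabilities --virttype kvm --machine q35`, which returns the same XML that appears in the debug messages here.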
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.387 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: <domainCapabilities>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <path>/usr/libexec/qemu-kvm</path>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <domain>kvm</domain>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <arch>i686</arch>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <vcpu max='1024'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <iothreads supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <os supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <enum name='firmware'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <loader supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>rom</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pflash</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='readonly'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>yes</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>no</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='secure'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>no</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </loader>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </os>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <cpu>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='host-passthrough' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='hostPassthroughMigratable'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>on</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>off</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='maximum' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='maximumMigratable'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>on</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>off</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='host-model' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <vendor>AMD</vendor>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='x2apic'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='tsc-deadline'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='hypervisor'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='tsc_adjust'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='spec-ctrl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='stibp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='cmp_legacy'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='overflow-recov'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='succor'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='ibrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='amd-ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='virt-ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='lbrv'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='tsc-scale'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='vmcb-clean'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='pause-filter'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='pfthreshold'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='svme-addr-chk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='disable' name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='custom' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cooperlake'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cooperlake-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cooperlake-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Dhyana-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Genoa'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amd-psfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='auto-ibrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='no-nested-data-bp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='null-sel-clr-base'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='stibp-always-on'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Genoa-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amd-psfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='auto-ibrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='no-nested-data-bp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='null-sel-clr-base'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='stibp-always-on'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Milan'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Milan-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Milan-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amd-psfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='no-nested-data-bp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='null-sel-clr-base'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='stibp-always-on'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='GraniteRapids'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='prefetchiti'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='GraniteRapids-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='prefetchiti'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='GraniteRapids-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10-128'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10-256'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10-512'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='prefetchiti'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v6'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v7'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='KnightsMill'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4fmaps'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4vnniw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512er'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512pf'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='KnightsMill-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4fmaps'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4vnniw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512er'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512pf'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G4-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tbm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G5-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tbm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SierraForest'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ne-convert'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cmpccxadd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SierraForest-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ne-convert'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cmpccxadd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='athlon'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='athlon-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='core2duo'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='core2duo-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='coreduo'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='coreduo-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='n270'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='n270-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='phenom'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='phenom-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </cpu>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <memoryBacking supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <enum name='sourceType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>file</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>anonymous</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>memfd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </memoryBacking>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <devices>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <disk supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='diskDevice'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>disk</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>cdrom</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>floppy</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>lun</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='bus'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>fdc</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>scsi</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>usb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>sata</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-non-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </disk>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <graphics supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vnc</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>egl-headless</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>dbus</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </graphics>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <video supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='modelType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vga</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>cirrus</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>none</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>bochs</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>ramfb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </video>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <hostdev supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='mode'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>subsystem</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='startupPolicy'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>default</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>mandatory</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>requisite</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>optional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='subsysType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>usb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pci</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>scsi</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='capsType'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='pciBackend'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </hostdev>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <rng supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-non-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendModel'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>random</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>egd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>builtin</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </rng>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <filesystem supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='driverType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>path</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>handle</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtiofs</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </filesystem>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <tpm supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tpm-tis</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tpm-crb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendModel'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>emulator</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>external</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendVersion'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>2.0</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </tpm>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <redirdev supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='bus'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>usb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </redirdev>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <channel supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pty</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>unix</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </channel>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <crypto supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>qemu</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendModel'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>builtin</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </crypto>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <interface supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>default</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>passt</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </interface>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <panic supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>isa</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>hyperv</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </panic>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <console supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>null</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vc</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pty</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>dev</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>file</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pipe</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>stdio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>udp</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tcp</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>unix</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>qemu-vdagent</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>dbus</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </console>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </devices>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <gic supported='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <vmcoreinfo supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <genid supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <backingStoreInput supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <backup supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <async-teardown supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <ps2 supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <sev supported='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <sgx supported='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <hyperv supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='features'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>relaxed</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vapic</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>spinlocks</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vpindex</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>runtime</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>synic</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>stimer</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>reset</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vendor_id</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>frequencies</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>reenlightenment</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tlbflush</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>ipi</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>avic</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>emsr_bitmap</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>xmm_input</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <defaults>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <spinlocks>4095</spinlocks>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <stimer_direct>on</stimer_direct>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <tlbflush_direct>off</tlbflush_direct>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <tlbflush_extended>off</tlbflush_extended>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </defaults>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </hyperv>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <launchSecurity supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='sectype'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tdx</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </launchSecurity>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: </domainCapabilities>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.434 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.439 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: <domainCapabilities>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <path>/usr/libexec/qemu-kvm</path>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <domain>kvm</domain>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <arch>x86_64</arch>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <vcpu max='240'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <iothreads supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <os supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <enum name='firmware'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <loader supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>rom</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pflash</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='readonly'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>yes</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>no</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='secure'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>no</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </loader>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </os>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <cpu>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='host-passthrough' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='hostPassthroughMigratable'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>on</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>off</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='maximum' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='maximumMigratable'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>on</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>off</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='host-model' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <vendor>AMD</vendor>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='x2apic'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='tsc-deadline'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='hypervisor'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='tsc_adjust'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='spec-ctrl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='stibp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='cmp_legacy'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='overflow-recov'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='succor'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='ibrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='amd-ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='virt-ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='lbrv'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='tsc-scale'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='vmcb-clean'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='pause-filter'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='pfthreshold'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='svme-addr-chk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='disable' name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='custom' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cooperlake'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cooperlake-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cooperlake-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Dhyana-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Genoa'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amd-psfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='auto-ibrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='no-nested-data-bp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='null-sel-clr-base'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='stibp-always-on'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Genoa-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amd-psfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='auto-ibrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='no-nested-data-bp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='null-sel-clr-base'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='stibp-always-on'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Milan'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Milan-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Milan-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amd-psfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='no-nested-data-bp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='null-sel-clr-base'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='stibp-always-on'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='GraniteRapids'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='prefetchiti'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='GraniteRapids-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='prefetchiti'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='GraniteRapids-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10-128'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10-256'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10-512'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='prefetchiti'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v6'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v7'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='KnightsMill'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4fmaps'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4vnniw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512er'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512pf'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='KnightsMill-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4fmaps'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4vnniw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512er'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512pf'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G4-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tbm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G5-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tbm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SierraForest'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ne-convert'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cmpccxadd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SierraForest-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ne-convert'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cmpccxadd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='athlon'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='athlon-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='core2duo'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='core2duo-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='coreduo'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='coreduo-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='n270'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='n270-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='phenom'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='phenom-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </cpu>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <memoryBacking supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <enum name='sourceType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>file</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>anonymous</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>memfd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </memoryBacking>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <devices>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <disk supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='diskDevice'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>disk</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>cdrom</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>floppy</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>lun</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='bus'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>ide</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>fdc</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>scsi</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>usb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>sata</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-non-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </disk>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <graphics supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vnc</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>egl-headless</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>dbus</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </graphics>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <video supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='modelType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vga</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>cirrus</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>none</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>bochs</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>ramfb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </video>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <hostdev supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='mode'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>subsystem</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='startupPolicy'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>default</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>mandatory</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>requisite</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>optional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='subsysType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>usb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pci</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>scsi</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='capsType'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='pciBackend'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </hostdev>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <rng supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-non-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendModel'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>random</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>egd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>builtin</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </rng>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <filesystem supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='driverType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>path</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>handle</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtiofs</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </filesystem>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <tpm supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tpm-tis</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tpm-crb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendModel'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>emulator</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>external</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendVersion'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>2.0</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </tpm>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <redirdev supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='bus'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>usb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </redirdev>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <channel supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pty</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>unix</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </channel>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <crypto supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>qemu</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendModel'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>builtin</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </crypto>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <interface supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>default</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>passt</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </interface>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <panic supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>isa</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>hyperv</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </panic>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <console supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>null</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vc</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pty</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>dev</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>file</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pipe</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>stdio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>udp</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tcp</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>unix</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>qemu-vdagent</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>dbus</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </console>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </devices>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <gic supported='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <vmcoreinfo supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <genid supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <backingStoreInput supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <backup supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <async-teardown supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <ps2 supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <sev supported='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <sgx supported='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <hyperv supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='features'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>relaxed</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vapic</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>spinlocks</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vpindex</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>runtime</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>synic</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>stimer</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>reset</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vendor_id</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>frequencies</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>reenlightenment</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tlbflush</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>ipi</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>avic</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>emsr_bitmap</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>xmm_input</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <defaults>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <spinlocks>4095</spinlocks>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <stimer_direct>on</stimer_direct>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <tlbflush_direct>off</tlbflush_direct>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <tlbflush_extended>off</tlbflush_extended>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </defaults>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </hyperv>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <launchSecurity supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='sectype'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tdx</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </launchSecurity>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: </domainCapabilities>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
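The domainCapabilities XML dumped above is what nova's _get_domain_capabilities() (host.py:1037) fetches from libvirt and logs verbatim. As a minimal sketch, and not nova's own code, the same document can be retrieved with the libvirt Python bindings and filtered for the custom-mode CPU models the host reports as usable='yes' (e.g. the Westmere variants above); the emulator path, architecture, machine type, and virt type are taken from the <path>, <arch>, <machine>, and <domain> elements of the q35 dump that follows, and the qemu:///system URI is an assumption for a local connection.

    # Sketch: query the same domain capabilities XML that nova logs above
    # and list custom-mode CPU models reported as usable='yes'.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')        # local hypervisor (assumed URI)
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',                 # emulator binary, as in <path>
        'x86_64',                                # architecture, as in <arch>
        'pc-q35-rhel9.8.0',                      # machine type, as in <machine>
        'kvm',                                   # virt type, as in <domain>
        0)
    caps = ET.fromstring(caps_xml)

    usable_models = [m.text for m in
                     caps.findall("./cpu/mode[@name='custom']/model[@usable='yes']")]
    print(sorted(usable_models))
    conn.close()

The equivalent ad-hoc check from a shell is `virsh domcapabilities --arch x86_64 --machine pc-q35-rhel9.8.0 --virttype kvm`, which prints the same XML that appears in the debug entry below.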
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.518 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: <domainCapabilities>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <path>/usr/libexec/qemu-kvm</path>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <domain>kvm</domain>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <arch>x86_64</arch>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <vcpu max='1024'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <iothreads supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <os supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <enum name='firmware'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>efi</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <loader supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>rom</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pflash</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='readonly'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>yes</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>no</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='secure'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>yes</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>no</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </loader>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </os>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <cpu>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='host-passthrough' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='hostPassthroughMigratable'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>on</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>off</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='maximum' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='maximumMigratable'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>on</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>off</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='host-model' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <vendor>AMD</vendor>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='x2apic'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='tsc-deadline'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='hypervisor'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='tsc_adjust'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='spec-ctrl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='stibp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='cmp_legacy'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='overflow-recov'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='succor'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='ibrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='amd-ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='virt-ssbd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='lbrv'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='tsc-scale'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='vmcb-clean'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='pause-filter'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='pfthreshold'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='svme-addr-chk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <feature policy='disable' name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <mode name='custom' supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Broadwell-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cascadelake-Server-v5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cooperlake'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cooperlake-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Cooperlake-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Denverton-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Dhyana-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Genoa'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amd-psfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='auto-ibrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='no-nested-data-bp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='null-sel-clr-base'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='stibp-always-on'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Genoa-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amd-psfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='auto-ibrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='no-nested-data-bp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='null-sel-clr-base'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='stibp-always-on'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Milan'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Milan-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Milan-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amd-psfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='no-nested-data-bp'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='null-sel-clr-base'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='stibp-always-on'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-Rome-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='EPYC-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='GraniteRapids'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='prefetchiti'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='GraniteRapids-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='prefetchiti'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='GraniteRapids-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10-128'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10-256'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx10-512'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='prefetchiti'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Haswell-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-noTSX'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v6'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Icelake-Server-v7'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='IvyBridge-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='KnightsMill'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4fmaps'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4vnniw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512er'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512pf'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='KnightsMill-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4fmaps'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-4vnniw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512er'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512pf'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G4-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tbm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Opteron_G5-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fma4'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tbm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xop'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SapphireRapids-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='amx-tile'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-bf16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-fp16'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bitalg'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vbmi2'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrc'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fzrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='la57'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='taa-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='tsx-ldtrk'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xfd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SierraForest'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ne-convert'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cmpccxadd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='SierraForest-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ifma'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-ne-convert'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx-vnni-int8'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='bus-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cmpccxadd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fbsdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='fsrs'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ibrs-all'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mcdt-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pbrsb-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='psdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='serialize'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vaes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='vpclmulqdq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Client-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='hle'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='rtm'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Skylake-Server-v5'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512bw'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512cd'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512dq'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512f'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='avx512vl'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='invpcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pcid'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='pku'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='mpx'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v2'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v3'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='core-capability'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='split-lock-detect'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='Snowridge-v4'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='cldemote'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='erms'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='gfni'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdir64b'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='movdiri'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='xsaves'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='athlon'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='athlon-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='core2duo'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='core2duo-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='coreduo'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='coreduo-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='n270'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='n270-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='ss'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='phenom'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <blockers model='phenom-v1'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnow'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <feature name='3dnowext'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </blockers>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </mode>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </cpu>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <memoryBacking supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <enum name='sourceType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>file</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>anonymous</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <value>memfd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </memoryBacking>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <devices>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <disk supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='diskDevice'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>disk</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>cdrom</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>floppy</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>lun</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='bus'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>fdc</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>scsi</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>usb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>sata</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-non-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </disk>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <graphics supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vnc</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>egl-headless</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>dbus</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </graphics>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <video supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='modelType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vga</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>cirrus</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>none</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>bochs</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>ramfb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </video>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <hostdev supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='mode'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>subsystem</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='startupPolicy'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>default</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>mandatory</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>requisite</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>optional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='subsysType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>usb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pci</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>scsi</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='capsType'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='pciBackend'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </hostdev>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <rng supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtio-non-transitional</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendModel'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>random</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>egd</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>builtin</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </rng>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <filesystem supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='driverType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>path</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>handle</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>virtiofs</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </filesystem>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <tpm supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tpm-tis</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tpm-crb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendModel'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>emulator</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>external</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendVersion'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>2.0</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </tpm>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <redirdev supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='bus'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>usb</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </redirdev>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <channel supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pty</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>unix</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </channel>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <crypto supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>qemu</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendModel'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>builtin</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </crypto>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <interface supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='backendType'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>default</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>passt</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </interface>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <panic supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='model'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>isa</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>hyperv</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </panic>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <console supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='type'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>null</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vc</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pty</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>dev</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>file</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>pipe</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>stdio</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>udp</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tcp</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>unix</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>qemu-vdagent</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>dbus</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </console>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </devices>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   <features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <gic supported='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <vmcoreinfo supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <genid supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <backingStoreInput supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <backup supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <async-teardown supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <ps2 supported='yes'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <sev supported='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <sgx supported='no'/>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <hyperv supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='features'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>relaxed</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vapic</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>spinlocks</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vpindex</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>runtime</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>synic</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>stimer</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>reset</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>vendor_id</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>frequencies</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>reenlightenment</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tlbflush</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>ipi</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>avic</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>emsr_bitmap</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>xmm_input</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <defaults>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <spinlocks>4095</spinlocks>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <stimer_direct>on</stimer_direct>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <tlbflush_direct>off</tlbflush_direct>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <tlbflush_extended>off</tlbflush_extended>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </defaults>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </hyperv>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     <launchSecurity supported='yes'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       <enum name='sectype'>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:         <value>tdx</value>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:       </enum>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:     </launchSecurity>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:   </features>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: </domainCapabilities>
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
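[editor's note] The XML dumped above is the libvirt domainCapabilities document that nova-compute retrieves and logs at startup; each <model usable='no'> entry is followed by a <blockers> list naming the host CPU features whose absence makes that named model unusable. A minimal, hypothetical sketch of fetching and summarizing the same data with libvirt-python and ElementTree is shown below; the qemu:///system URI, the arch/virttype arguments, and the focus on the name='custom' CPU mode are illustrative assumptions, not Nova's actual code path.

    # Sketch only: list usable CPU models and the blockers of unusable ones
    # from libvirt's domainCapabilities XML (assumes libvirt-python is
    # installed and qemu:///system is reachable; not Nova's implementation).
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open("qemu:///system")
    caps_xml = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    root = ET.fromstring(caps_xml)

    mode_path = "./cpu/mode[@name='custom']"
    for model in root.findall(mode_path + "/model"):
        name = model.text
        if model.get("usable") == "yes":
            print("usable :", name)
        else:
            blockers = root.find(mode_path + "/blockers[@model='" + name + "']")
            missing = [f.get("name") for f in blockers.findall("feature")] if blockers is not None else []
            print("blocked:", name, "(missing:", ", ".join(missing) + ")")

    conn.close()

Run against the host that produced this log, such a sketch would report Westmere and the qemu/kvm models as usable while listing avx512*, pku, erms, etc. as the blockers for the Skylake entries, matching the dump above.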
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.586 228686 DEBUG nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.586 228686 INFO nova.virt.libvirt.host [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Secure Boot support detected
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.589 228686 INFO nova.virt.libvirt.driver [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.596 228686 DEBUG nova.virt.libvirt.driver [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.611 228686 INFO nova.virt.node [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Determined node identity 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from /var/lib/nova/compute_id
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.624 228686 DEBUG nova.compute.manager [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Verified node 9ec09c1a-d246-41d7-94f4-b482f646a9f1 matches my host np0005541914.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Dec 02 09:34:54 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:54.648 228686 INFO nova.compute.manager [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 02 09:34:55 np0005541914.localdomain sudo[229195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rrihmkioogwwrwoqhorbybyfywhzoisv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668094.5486505-4267-272619842092568/AnsiballZ_podman_container.py
Dec 02 09:34:55 np0005541914.localdomain sudo[229195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:55 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:55.059 228686 INFO nova.service [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Updating service version for nova-compute on np0005541914.localdomain from 57 to 66
Dec 02 09:34:55 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:55.098 228686 DEBUG oslo_concurrency.lockutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:34:55 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:55.098 228686 DEBUG oslo_concurrency.lockutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:34:55 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:55.099 228686 DEBUG oslo_concurrency.lockutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:34:55 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:55.099 228686 DEBUG nova.compute.resource_tracker [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:34:55 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:55.100 228686 DEBUG oslo_concurrency.processutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:34:55 np0005541914.localdomain python3.9[229197]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 02 09:34:55 np0005541914.localdomain sudo[229195]: pam_unix(sudo:session): session closed for user root
Dec 02 09:34:55 np0005541914.localdomain systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 120.1 (400 of 333 items), suggesting rotation.
Dec 02 09:34:55 np0005541914.localdomain systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 02 09:34:55 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:34:55 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:55.570 228686 DEBUG oslo_concurrency.processutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:34:55 np0005541914.localdomain systemd[1]: Started libvirt nodedev daemon.
Dec 02 09:34:55 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:55.960 228686 WARNING nova.virt.libvirt.driver [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:34:55 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:55.962 228686 DEBUG nova.compute.resource_tracker [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13614MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:34:55 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:55.963 228686 DEBUG oslo_concurrency.lockutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:34:55 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:55.963 228686 DEBUG oslo_concurrency.lockutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:34:56 np0005541914.localdomain sudo[229375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jgaqgejyiqjdpuavqxbhdpztsnqrbvkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668095.7701325-4292-182818112608982/AnsiballZ_systemd.py
Dec 02 09:34:56 np0005541914.localdomain sudo[229375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:34:56 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:56.110 228686 DEBUG nova.compute.resource_tracker [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:34:56 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:56.111 228686 DEBUG nova.compute.resource_tracker [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:34:56 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:56.133 228686 DEBUG nova.scheduler.client.report [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 02 09:34:56 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:56.190 228686 DEBUG nova.scheduler.client.report [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 02 09:34:56 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:56.191 228686 DEBUG nova.compute.provider_tree [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
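[editor's note] The inventory dictionary logged in the two entries above is what the resource tracker reports to Placement for this node; the schedulable capacity of each resource class follows from total, reserved, and allocation_ratio. A small worked example using the reported values is sketched below; the formula capacity = (total - reserved) * allocation_ratio reflects how Placement evaluates capacity limits in general, and the snippet is illustrative rather than an excerpt from Nova.

    # Sketch only: schedulable capacity implied by the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "schedulable capacity =", capacity)
    # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 41.0

This is consistent with the "Final resource view" entry above (8 vCPUs with a 16.0 overcommit ratio, 512 MB of RAM held back from 15738 MB, and 41 GB of disk with no reservation).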
Dec 02 09:34:56 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:56.214 228686 DEBUG nova.scheduler.client.report [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 02 09:34:56 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:56.288 228686 DEBUG nova.scheduler.client.report [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_LAN9118,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,HW_CPU_X86_BMI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SHA,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_F16C,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_ABM,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_TRUSTED_CERTS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 02 09:34:56 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:56.305 228686 DEBUG oslo_concurrency.processutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:34:56 np0005541914.localdomain python3.9[229377]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:34:56 np0005541914.localdomain systemd[1]: Stopping nova_compute container...
Dec 02 09:34:56 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:56.573 228686 DEBUG oslo_concurrency.lockutils [None req-80642093-62fb-492b-9433-bc788811bc2b - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:34:56 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:56.574 228686 DEBUG oslo_concurrency.lockutils [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 09:34:56 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:56.574 228686 DEBUG oslo_concurrency.lockutils [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 09:34:56 np0005541914.localdomain nova_compute[228682]: 2025-12-02 09:34:56.575 228686 DEBUG oslo_concurrency.lockutils [None req-620f62f4-03f7-4de2-be09-188dfd3e500c - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 09:34:56 np0005541914.localdomain virtqemud[228953]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 02 09:34:56 np0005541914.localdomain virtqemud[228953]: hostname: np0005541914.localdomain
Dec 02 09:34:56 np0005541914.localdomain virtqemud[228953]: End of file while reading data: Input/output error
Dec 02 09:34:56 np0005541914.localdomain systemd[1]: libpod-e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256.scope: Deactivated successfully.
Dec 02 09:34:56 np0005541914.localdomain podman[229401]: 2025-12-02 09:34:56.993784121 +0000 UTC m=+0.496911706 container died e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, config_id=edpm, container_name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 02 09:34:56 np0005541914.localdomain systemd[1]: libpod-e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256.scope: Consumed 3.645s CPU time.
Dec 02 09:34:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256-userdata-shm.mount: Deactivated successfully.
Dec 02 09:34:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18165 DF PROTO=TCP SPT=46438 DPT=9101 SEQ=2661148423 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54791630000000001030307) 
Dec 02 09:34:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168-merged.mount: Deactivated successfully.
Dec 02 09:34:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10789 DF PROTO=TCP SPT=42652 DPT=9105 SEQ=18637374 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54793220000000001030307) 
Dec 02 09:35:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51881 DF PROTO=TCP SPT=44782 DPT=9102 SEQ=1218652330 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547A1220000000001030307) 
Dec 02 09:35:01 np0005541914.localdomain podman[229401]: 2025-12-02 09:35:01.359398423 +0000 UTC m=+4.862526038 container cleanup e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0)
Dec 02 09:35:01 np0005541914.localdomain podman[229401]: nova_compute
Dec 02 09:35:01 np0005541914.localdomain podman[229569]: error opening file `/run/crun/e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256/status`: No such file or directory
Dec 02 09:35:01 np0005541914.localdomain podman[229558]: 2025-12-02 09:35:01.442897331 +0000 UTC m=+0.056182979 container cleanup e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute)
Dec 02 09:35:01 np0005541914.localdomain podman[229558]: nova_compute
Dec 02 09:35:01 np0005541914.localdomain systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 02 09:35:01 np0005541914.localdomain systemd[1]: Stopped nova_compute container.
Dec 02 09:35:01 np0005541914.localdomain systemd[1]: Starting nova_compute container...
Dec 02 09:35:01 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:35:01 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 02 09:35:01 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 02 09:35:01 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 09:35:01 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 09:35:01 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 02 09:35:01 np0005541914.localdomain podman[229571]: 2025-12-02 09:35:01.585925643 +0000 UTC m=+0.104913300 container init e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=nova_compute)
Dec 02 09:35:01 np0005541914.localdomain podman[229571]: 2025-12-02 09:35:01.591981595 +0000 UTC m=+0.110969262 container start e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute)
Dec 02 09:35:01 np0005541914.localdomain podman[229571]: nova_compute
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: + sudo -E kolla_set_configs
Dec 02 09:35:01 np0005541914.localdomain systemd[1]: Started nova_compute container.
Dec 02 09:35:01 np0005541914.localdomain sudo[229375]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Validating config file
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Copying service configuration files
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Deleting /etc/ceph
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Creating directory /etc/ceph
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /etc/ceph
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Writing out command to execute
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: ++ cat /run_command
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: + CMD=nova-compute
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: + ARGS=
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: + sudo kolla_copy_cacerts
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: + [[ ! -n '' ]]
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: + . kolla_extend_start
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: + echo 'Running command: '\''nova-compute'\'''
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: Running command: 'nova-compute'
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: + umask 0022
Dec 02 09:35:01 np0005541914.localdomain nova_compute[229585]: + exec nova-compute
Dec 02 09:35:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:35:02 np0005541914.localdomain podman[229614]: 2025-12-02 09:35:02.335702006 +0000 UTC m=+0.088190488 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Dec 02 09:35:02 np0005541914.localdomain podman[229614]: 2025-12-02 09:35:02.350244969 +0000 UTC m=+0.102733431 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Dec 02 09:35:02 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:35:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:35:03.143 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:35:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:35:03.144 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:35:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:35:03.144 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:35:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:03.480 229589 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 02 09:35:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:03.481 229589 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 02 09:35:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:03.481 229589 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 02 09:35:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:03.481 229589 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 02 09:35:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:03.609 229589 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:35:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14214 DF PROTO=TCP SPT=33630 DPT=9882 SEQ=58620176 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547AA910000000001030307) 
Dec 02 09:35:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:03.631 229589 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:35:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:03.632 229589 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 02 09:35:03 np0005541914.localdomain sudo[229729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-byhzxozuugasyyhzadrbnitzhsoikvvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668103.0144944-4318-5702758765265/AnsiballZ_podman_container.py
Dec 02 09:35:03 np0005541914.localdomain sudo[229729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:04 np0005541914.localdomain python3.9[229731]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.097 229589 INFO nova.virt.driver [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.219 229589 INFO nova.compute.provider_config [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.300 229589 WARNING nova.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.300 229589 DEBUG oslo_concurrency.lockutils [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.300 229589 DEBUG oslo_concurrency.lockutils [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.301 229589 DEBUG oslo_concurrency.lockutils [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.301 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.301 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.301 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.301 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.301 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.302 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.303 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] console_host                   = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.304 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.305 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] host                           = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.306 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.307 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.308 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.309 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.310 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.311 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] my_block_storage_ip            = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] my_ip                          = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.312 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.313 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.314 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.315 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.316 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.317 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.318 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.319 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.320 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.321 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.321 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.321 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.321 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.321 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.321 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.322 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.322 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.322 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.322 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.322 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.323 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.324 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.325 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.326 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.327 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.328 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.329 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.330 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.330 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.330 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.330 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.330 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.330 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.331 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.332 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.333 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.334 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.334 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.334 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.334 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.334 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.334 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.335 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.336 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.337 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.338 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.339 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
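Each of these DEBUG lines is emitted by oslo.config's ConfigOpts.log_opt_values() (the function and cfg.py location named at the end of every line), which nova-compute uses at startup to record its effective option values; options registered with secret=True, such as database.connection, are printed as '****'. A minimal sketch of that mechanism, using a standalone ConfigOpts with a few illustrative [database] registrations rather than Nova's real option definitions:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [
            cfg.StrOpt('backend', default='sqlalchemy'),
            # secret=True is why database.connection is printed as '****' above.
            cfg.StrOpt('connection', secret=True),
            cfg.IntOpt('connection_recycle_time', default=3600),
        ],
        group='database',
    )

    CONF([])                                  # parse nothing; defaults apply
    CONF.log_opt_values(LOG, logging.DEBUG)   # emits "database.backend = sqlalchemy", etc.
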
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.340 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.341 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.342 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.342 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.342 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.342 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.342 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.342 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.343 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.344 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.345 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.346 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
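The glance.* block above repeats the same option names already seen for cyborg (cafile, certfile, timeout, connect_retries, valid_interfaces, service_type, endpoint_override, ...) because these are the standard keystoneauth1 session and adapter options registered once per service group. A rough sketch of that registration pattern, assuming plain keystoneauth1 and oslo.config rather than Nova's own wrappers:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    for group in ('glance', 'ironic', 'keystone'):   # illustrative subset of the groups dumped here
        ks_loading.register_session_conf_options(CONF, group)   # cafile, certfile, keyfile, insecure, timeout, ...
        ks_loading.register_adapter_conf_options(CONF, group)   # service_type, region_name, valid_interfaces, ...

    CONF([])  # no config files parsed in this sketch, so only library defaults apply
    print(CONF.glance.service_type)        # None with bare defaults; 'image' in the dump above
    print(CONF.keystone.valid_interfaces)  # None with bare defaults; ['internal', 'public'] above
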
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.347 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.348 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.349 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.350 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.350 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.350 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.350 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.350 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.350 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.351 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.352 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.353 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.354 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.355 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.356 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.357 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.358 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.359 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.360 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
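key_manager.backend = barbican together with the barbican.*, barbican_service_user.* and vault.* groups above are castellan's key-manager options; the configured backend is obtained through castellan's factory. A minimal, hypothetical sketch (not Nova's exact call sites):

    from castellan import key_manager

    # CONF is assumed to be the ConfigOpts instance whose values are dumped above.
    km = key_manager.API(CONF)   # returns the barbican-backed key manager with this configuration
    # Encryption code paths (e.g. ephemeral-storage encryption) then create and
    # fetch keys through this object, e.g. km.get(context, managed_object_id).
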
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.361 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.362 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.363 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.364 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.365 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.365 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.365 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.365 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.365 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.366 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.367 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.368 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.369 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.369 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.369 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.369 229589 WARNING oslo_config.cfg [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: and ``live_migration_inbound_addr`` respectively.
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: ).  Its value may be silently ignored in the future.
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.369 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
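[editorial note] The deprecation warning above names the two options that supersede live_migration_uri. A minimal nova.conf-style sketch of that replacement follows; the values are placeholders and this is not an exact equivalent of the URI logged here (in particular, the keyfile query parameter has no direct counterpart among these two options):

    [libvirt]
    # "ssh" yields a qemu+ssh:// migration URI (placeholder choice)
    live_migration_scheme = ssh
    # address the destination compute host accepts migration traffic on (placeholder)
    live_migration_inbound_addr = <destination host address>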
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.369 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.370 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.371 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rbd_secret_uuid        = c7c8e171-a193-56fb-95fa-8879fcfa7074 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.372 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain systemd[1]: Started libpod-conmon-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e.scope.
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.373 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.374 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.375 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.376 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.376 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.376 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.376 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.376 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.376 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
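[editorial note] Each DEBUG line in this dump is emitted by a single oslo.config call, the log_opt_values reference at oslo_config/cfg.py:2609 repeated on every line. A minimal, self-contained sketch of that mechanism, assuming only that the oslo.config package is installed; the option names mirror a few of the [libvirt] values above, and the defaults here are illustrative, not nova's:

    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)

    # Illustrative subset of the [libvirt] options seen in the dump above.
    libvirt_opts = [
        cfg.StrOpt('images_type', default='default'),
        cfg.StrOpt('rbd_user'),
        cfg.IntOpt('rx_queue_size'),
    ]

    def main():
        logging.basicConfig(level=logging.DEBUG)
        conf = cfg.ConfigOpts()
        conf.register_opts(libvirt_opts, group='libvirt')
        conf([], project='demo')  # no CLI args; a real service would also load its .conf files
        # The call recorded on every DEBUG line above: dump all registered
        # options group by group, masking options marked secret with '****'
        # (as seen for placement.password and metadata_proxy_shared_secret below).
        conf.log_opt_values(LOG, logging.DEBUG)

    if __name__ == '__main__':
        main()

Running this prints an "option = value" dump in the same style as the nova_compute lines in this journal.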
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.377 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.378 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.379 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.379 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.379 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.379 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.379 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.380 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.381 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.382 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.383 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.384 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.385 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.386 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.387 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.388 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.388 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.388 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.388 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.388 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.388 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.389 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.389 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.389 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.389 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.389 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.389 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.390 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.391 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.392 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.393 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.394 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.395 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.396 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.397 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.398 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.399 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.400 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.401 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.402 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.403 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.404 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.405 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.406 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.407 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.407 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.407 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.407 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.407 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.407 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.408 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.409 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.410 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.410 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.410 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.410 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.410 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 02 09:35:04 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.410 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.411 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.412 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.413 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.414 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.415 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.416 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.417 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.418 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.419 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain podman[229754]: 2025-12-02 09:35:04.419904209 +0000 UTC m=+0.167036128 container init 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.420 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.421 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.422 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.423 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.424 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.425 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.426 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.427 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.428 229589 DEBUG oslo_service.service [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.430 229589 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 02 09:35:04 np0005541914.localdomain podman[229754]: 2025-12-02 09:35:04.430509606 +0000 UTC m=+0.177641525 container start 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 09:35:04 np0005541914.localdomain python3.9[229731]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.451 229589 INFO nova.virt.node [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Determined node identity 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from /var/lib/nova/compute_id
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.452 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.453 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.453 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.453 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.464 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f7dd424f8e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.466 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f7dd424f8e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.467 229589 INFO nova.virt.libvirt.driver [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Connection event '1' reason 'None'
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.472 229589 INFO nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Libvirt host capabilities <capabilities>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <host>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <uuid>64aa5208-7bf7-490c-857b-3c1a3cae8bb3</uuid>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <cpu>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <arch>x86_64</arch>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model>EPYC-Rome-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <vendor>AMD</vendor>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <microcode version='16777317'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <signature family='23' model='49' stepping='0'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='x2apic'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='tsc-deadline'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='osxsave'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='hypervisor'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='tsc_adjust'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='spec-ctrl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='stibp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='arch-capabilities'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='cmp_legacy'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='topoext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='virt-ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='lbrv'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='tsc-scale'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='vmcb-clean'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='pause-filter'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='pfthreshold'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='svme-addr-chk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='rdctl-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='skip-l1dfl-vmentry'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='mds-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature name='pschange-mc-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <pages unit='KiB' size='4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <pages unit='KiB' size='2048'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <pages unit='KiB' size='1048576'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </cpu>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <power_management>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <suspend_mem/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <suspend_disk/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <suspend_hybrid/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </power_management>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <iommu support='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <migration_features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <live/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <uri_transports>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <uri_transport>tcp</uri_transport>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <uri_transport>rdma</uri_transport>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </uri_transports>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </migration_features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <topology>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <cells num='1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <cell id='0'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:           <memory unit='KiB'>16116612</memory>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:           <pages unit='KiB' size='4'>4029153</pages>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:           <pages unit='KiB' size='2048'>0</pages>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:           <distances>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:             <sibling id='0' value='10'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:           </distances>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:           <cpus num='8'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:           </cpus>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         </cell>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </cells>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </topology>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <cache>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </cache>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <secmodel>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model>selinux</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <doi>0</doi>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </secmodel>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <secmodel>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model>dac</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <doi>0</doi>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </secmodel>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </host>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <guest>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <os_type>hvm</os_type>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <arch name='i686'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <wordsize>32</wordsize>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <domain type='qemu'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <domain type='kvm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </arch>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <pae/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <nonpae/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <acpi default='on' toggle='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <apic default='on' toggle='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <cpuselection/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <deviceboot/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <disksnapshot default='on' toggle='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <externalSnapshot/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </guest>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <guest>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <os_type>hvm</os_type>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <arch name='x86_64'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <wordsize>64</wordsize>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <domain type='qemu'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <domain type='kvm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </arch>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <acpi default='on' toggle='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <apic default='on' toggle='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <cpuselection/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <deviceboot/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <disksnapshot default='on' toggle='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <externalSnapshot/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </guest>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: </capabilities>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.479 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.482 229589 DEBUG nova.virt.libvirt.volume.mount [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.484 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: <domainCapabilities>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <path>/usr/libexec/qemu-kvm</path>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <domain>kvm</domain>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <arch>i686</arch>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <vcpu max='240'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <iothreads supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <os supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <enum name='firmware'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <loader supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>rom</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pflash</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='readonly'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>yes</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>no</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='secure'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>no</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </loader>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </os>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <cpu>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='host-passthrough' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='hostPassthroughMigratable'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>on</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>off</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='maximum' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='maximumMigratable'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>on</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>off</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='host-model' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <vendor>AMD</vendor>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='x2apic'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='tsc-deadline'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='hypervisor'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='tsc_adjust'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='spec-ctrl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='stibp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='cmp_legacy'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='overflow-recov'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='succor'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='ibrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='amd-ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='virt-ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='lbrv'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='tsc-scale'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='vmcb-clean'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='pause-filter'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='pfthreshold'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='svme-addr-chk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='disable' name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='custom' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cooperlake'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cooperlake-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cooperlake-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Dhyana-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Genoa'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amd-psfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='auto-ibrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='no-nested-data-bp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='null-sel-clr-base'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='stibp-always-on'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Genoa-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amd-psfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='auto-ibrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='no-nested-data-bp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='null-sel-clr-base'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='stibp-always-on'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Milan'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Milan-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Milan-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amd-psfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='no-nested-data-bp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='null-sel-clr-base'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='stibp-always-on'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='GraniteRapids'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='prefetchiti'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Applying nova statedir ownership
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='GraniteRapids-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute_init[229774]: INFO:nova_statedir:Nova statedir ownership complete
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='prefetchiti'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='GraniteRapids-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10-128'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10-256'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10-512'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='prefetchiti'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v6'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v7'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='KnightsMill'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4fmaps'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4vnniw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512er'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512pf'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='KnightsMill-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4fmaps'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4vnniw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512er'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512pf'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G4-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tbm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G5-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tbm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SierraForest'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ne-convert'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cmpccxadd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SierraForest-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ne-convert'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cmpccxadd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 02 09:35:04 np0005541914.localdomain systemd[1]: libpod-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e.scope: Deactivated successfully.
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='athlon'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='athlon-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='core2duo'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='core2duo-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='coreduo'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='coreduo-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='n270'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='n270-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='phenom'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='phenom-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </cpu>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <memoryBacking supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <enum name='sourceType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>file</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>anonymous</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>memfd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </memoryBacking>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <devices>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <disk supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='diskDevice'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>disk</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>cdrom</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>floppy</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>lun</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='bus'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>ide</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>fdc</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>scsi</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>usb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>sata</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-non-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </disk>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <graphics supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vnc</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>egl-headless</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>dbus</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </graphics>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <video supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='modelType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vga</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>cirrus</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>none</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>bochs</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>ramfb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </video>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <hostdev supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='mode'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>subsystem</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='startupPolicy'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>default</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>mandatory</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>requisite</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>optional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='subsysType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>usb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pci</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>scsi</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='capsType'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='pciBackend'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </hostdev>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <rng supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-non-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendModel'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>random</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>egd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>builtin</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </rng>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <filesystem supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='driverType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>path</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>handle</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtiofs</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </filesystem>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <tpm supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tpm-tis</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tpm-crb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendModel'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>emulator</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>external</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendVersion'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>2.0</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </tpm>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <redirdev supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='bus'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>usb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </redirdev>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <channel supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pty</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>unix</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </channel>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <crypto supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>qemu</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendModel'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>builtin</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </crypto>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <interface supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>default</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>passt</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </interface>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <panic supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>isa</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>hyperv</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </panic>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <console supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>null</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vc</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pty</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>dev</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>file</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pipe</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>stdio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>udp</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tcp</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>unix</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>qemu-vdagent</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>dbus</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </console>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </devices>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <gic supported='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <vmcoreinfo supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <genid supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <backingStoreInput supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <backup supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <async-teardown supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <ps2 supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <sev supported='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <sgx supported='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <hyperv supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='features'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>relaxed</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vapic</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>spinlocks</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vpindex</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>runtime</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>synic</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>stimer</value>
Dec 02 09:35:04 np0005541914.localdomain podman[229775]: 2025-12-02 09:35:04.518224668 +0000 UTC m=+0.067206620 container died 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251125)
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>reset</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vendor_id</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>frequencies</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>reenlightenment</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tlbflush</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>ipi</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>avic</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>emsr_bitmap</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>xmm_input</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <defaults>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <spinlocks>4095</spinlocks>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <stimer_direct>on</stimer_direct>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <tlbflush_direct>off</tlbflush_direct>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <tlbflush_extended>off</tlbflush_extended>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </defaults>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </hyperv>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <launchSecurity supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='sectype'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tdx</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </launchSecurity>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: </domainCapabilities>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
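[Editor's note, not part of the captured journal: the record above is nova's _get_domain_capabilities() dumping the libvirt domainCapabilities document for the x86_64/q35 emulator. Below is a minimal, hedged sketch of how the same document could be fetched and filtered outside nova with libvirt-python, assuming the qemu:///system URI and the /usr/libexec/qemu-kvm emulator path seen in this log; variable names are illustrative and this is not the code nova itself runs.]

    # Sketch only: reproduce the capability query logged above with libvirt-python.
    import libvirt                        # libvirt-python bindings
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')          # assumed connection URI
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',                   # emulator path, as reported in the log
        'x86_64',                                  # architecture
        'q35',                                     # machine type alias
        'kvm',                                     # virtualization type
        0)

    # Filter the custom-mode CPU models the host can actually run
    # (the <model usable='yes'> entries from the dump above).
    root = ET.fromstring(caps_xml)
    usable_models = [m.text
                     for m in root.findall(".//mode[@name='custom']/model")
                     if m.get('usable') == 'yes']
    print(usable_models)
    conn.close()

[The equivalent one-liner on the host would be roughly `virsh domcapabilities --virttype kvm --arch x86_64 --machine q35`, which returns the same XML that nova logs here.]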
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.488 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: <domainCapabilities>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <path>/usr/libexec/qemu-kvm</path>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <domain>kvm</domain>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <arch>i686</arch>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <vcpu max='1024'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <iothreads supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <os supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <enum name='firmware'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <loader supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>rom</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pflash</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='readonly'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>yes</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>no</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='secure'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>no</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </loader>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </os>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <cpu>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='host-passthrough' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='hostPassthroughMigratable'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>on</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>off</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='maximum' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='maximumMigratable'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>on</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>off</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='host-model' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <vendor>AMD</vendor>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='x2apic'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='tsc-deadline'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='hypervisor'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='tsc_adjust'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='spec-ctrl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='stibp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='cmp_legacy'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='overflow-recov'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='succor'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='ibrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='amd-ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='virt-ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='lbrv'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='tsc-scale'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='vmcb-clean'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='pause-filter'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='pfthreshold'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='svme-addr-chk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='disable' name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='custom' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cooperlake'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cooperlake-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cooperlake-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Dhyana-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Genoa'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amd-psfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='auto-ibrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='no-nested-data-bp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='null-sel-clr-base'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='stibp-always-on'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Genoa-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amd-psfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='auto-ibrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='no-nested-data-bp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='null-sel-clr-base'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='stibp-always-on'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Milan'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Milan-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Milan-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amd-psfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='no-nested-data-bp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='null-sel-clr-base'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='stibp-always-on'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='GraniteRapids'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='prefetchiti'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='GraniteRapids-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='prefetchiti'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='GraniteRapids-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10-128'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10-256'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10-512'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='prefetchiti'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v6'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v7'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='KnightsMill'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4fmaps'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4vnniw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512er'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512pf'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='KnightsMill-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4fmaps'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4vnniw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512er'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512pf'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G4-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tbm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G5-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tbm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SierraForest'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ne-convert'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cmpccxadd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SierraForest-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ne-convert'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cmpccxadd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='athlon'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='athlon-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='core2duo'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='core2duo-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='coreduo'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='coreduo-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='n270'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='n270-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='phenom'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='phenom-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </cpu>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <memoryBacking supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <enum name='sourceType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>file</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>anonymous</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>memfd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </memoryBacking>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <devices>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <disk supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='diskDevice'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>disk</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>cdrom</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>floppy</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>lun</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='bus'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>fdc</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>scsi</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>usb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>sata</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-non-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </disk>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <graphics supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vnc</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>egl-headless</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>dbus</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </graphics>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <video supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='modelType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vga</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>cirrus</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>none</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>bochs</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>ramfb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </video>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <hostdev supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='mode'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>subsystem</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='startupPolicy'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>default</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>mandatory</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>requisite</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>optional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='subsysType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>usb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pci</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>scsi</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='capsType'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='pciBackend'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </hostdev>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <rng supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-non-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendModel'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>random</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>egd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>builtin</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </rng>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <filesystem supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='driverType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>path</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>handle</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtiofs</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </filesystem>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <tpm supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tpm-tis</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tpm-crb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendModel'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>emulator</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>external</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendVersion'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>2.0</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </tpm>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <redirdev supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='bus'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>usb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </redirdev>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <channel supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pty</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>unix</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </channel>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <crypto supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>qemu</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendModel'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>builtin</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </crypto>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <interface supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>default</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>passt</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </interface>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <panic supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>isa</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>hyperv</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </panic>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <console supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>null</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vc</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pty</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>dev</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>file</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pipe</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>stdio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>udp</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tcp</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>unix</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>qemu-vdagent</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>dbus</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </console>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </devices>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <gic supported='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <vmcoreinfo supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <genid supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <backingStoreInput supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <backup supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <async-teardown supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <ps2 supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <sev supported='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <sgx supported='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <hyperv supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='features'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>relaxed</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vapic</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>spinlocks</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vpindex</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>runtime</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>synic</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>stimer</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>reset</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vendor_id</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>frequencies</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>reenlightenment</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tlbflush</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>ipi</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>avic</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>emsr_bitmap</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>xmm_input</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <defaults>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <spinlocks>4095</spinlocks>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <stimer_direct>on</stimer_direct>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <tlbflush_direct>off</tlbflush_direct>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <tlbflush_extended>off</tlbflush_extended>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </defaults>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </hyperv>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <launchSecurity supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='sectype'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tdx</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </launchSecurity>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: </domainCapabilities>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.512 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.518 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: <domainCapabilities>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <path>/usr/libexec/qemu-kvm</path>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <domain>kvm</domain>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <arch>x86_64</arch>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <vcpu max='240'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <iothreads supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <os supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <enum name='firmware'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <loader supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>rom</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pflash</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='readonly'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>yes</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>no</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='secure'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>no</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </loader>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </os>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <cpu>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='host-passthrough' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='hostPassthroughMigratable'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>on</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>off</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='maximum' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='maximumMigratable'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>on</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>off</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='host-model' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <vendor>AMD</vendor>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='x2apic'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='tsc-deadline'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='hypervisor'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='tsc_adjust'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='spec-ctrl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='stibp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='cmp_legacy'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='overflow-recov'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='succor'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='ibrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='amd-ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='virt-ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='lbrv'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='tsc-scale'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='vmcb-clean'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='pause-filter'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='pfthreshold'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='svme-addr-chk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='disable' name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='custom' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cooperlake'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cooperlake-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cooperlake-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Dhyana-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Genoa'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amd-psfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='auto-ibrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='no-nested-data-bp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='null-sel-clr-base'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='stibp-always-on'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Genoa-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amd-psfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='auto-ibrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='no-nested-data-bp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='null-sel-clr-base'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='stibp-always-on'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Milan'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Milan-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Milan-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amd-psfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='no-nested-data-bp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='null-sel-clr-base'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='stibp-always-on'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='GraniteRapids'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='prefetchiti'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='GraniteRapids-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='prefetchiti'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='GraniteRapids-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10-128'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10-256'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10-512'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='prefetchiti'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v6'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v7'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='KnightsMill'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4fmaps'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4vnniw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512er'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512pf'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='KnightsMill-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4fmaps'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4vnniw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512er'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512pf'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G4-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tbm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G5-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tbm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SierraForest'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ne-convert'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cmpccxadd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SierraForest-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ne-convert'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cmpccxadd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='athlon'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='athlon-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='core2duo'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='core2duo-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='coreduo'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='coreduo-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='n270'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='n270-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='phenom'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='phenom-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </cpu>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <memoryBacking supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <enum name='sourceType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>file</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>anonymous</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>memfd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </memoryBacking>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <devices>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <disk supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='diskDevice'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>disk</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>cdrom</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>floppy</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>lun</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='bus'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>ide</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>fdc</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>scsi</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>usb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>sata</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-non-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </disk>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <graphics supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vnc</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>egl-headless</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>dbus</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </graphics>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <video supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='modelType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vga</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>cirrus</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>none</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>bochs</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>ramfb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </video>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <hostdev supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='mode'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>subsystem</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='startupPolicy'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>default</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>mandatory</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>requisite</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>optional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='subsysType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>usb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pci</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>scsi</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='capsType'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='pciBackend'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </hostdev>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <rng supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-non-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendModel'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>random</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>egd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>builtin</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </rng>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <filesystem supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='driverType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>path</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>handle</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtiofs</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </filesystem>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <tpm supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tpm-tis</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tpm-crb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendModel'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>emulator</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>external</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendVersion'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>2.0</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </tpm>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <redirdev supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='bus'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>usb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </redirdev>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <channel supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pty</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>unix</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </channel>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <crypto supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>qemu</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendModel'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>builtin</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </crypto>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <interface supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>default</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>passt</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </interface>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <panic supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>isa</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>hyperv</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </panic>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <console supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>null</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vc</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pty</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>dev</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>file</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pipe</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>stdio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>udp</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tcp</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>unix</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>qemu-vdagent</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>dbus</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </console>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </devices>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <gic supported='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <vmcoreinfo supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <genid supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <backingStoreInput supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <backup supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <async-teardown supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <ps2 supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <sev supported='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <sgx supported='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <hyperv supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='features'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>relaxed</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vapic</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>spinlocks</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vpindex</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>runtime</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>synic</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>stimer</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>reset</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vendor_id</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>frequencies</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>reenlightenment</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tlbflush</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>ipi</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>avic</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>emsr_bitmap</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>xmm_input</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <defaults>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <spinlocks>4095</spinlocks>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <stimer_direct>on</stimer_direct>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <tlbflush_direct>off</tlbflush_direct>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <tlbflush_extended>off</tlbflush_extended>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </defaults>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </hyperv>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <launchSecurity supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='sectype'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tdx</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </launchSecurity>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: </domainCapabilities>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.564 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: <domainCapabilities>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <path>/usr/libexec/qemu-kvm</path>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <domain>kvm</domain>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <arch>x86_64</arch>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <vcpu max='1024'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <iothreads supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <os supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <enum name='firmware'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>efi</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <loader supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>rom</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pflash</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='readonly'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>yes</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>no</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='secure'>
Dec 02 09:35:04 np0005541914.localdomain sudo[229729]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>yes</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>no</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </loader>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </os>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <cpu>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='host-passthrough' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='hostPassthroughMigratable'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>on</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>off</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='maximum' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='maximumMigratable'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>on</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>off</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='host-model' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <vendor>AMD</vendor>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='x2apic'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='tsc-deadline'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='hypervisor'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='tsc_adjust'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='spec-ctrl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='stibp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='cmp_legacy'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='overflow-recov'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='succor'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='ibrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='amd-ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='virt-ssbd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='lbrv'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='tsc-scale'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='vmcb-clean'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='pause-filter'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='pfthreshold'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='svme-addr-chk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <feature policy='disable' name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <mode name='custom' supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Broadwell-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cascadelake-Server-v5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cooperlake'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cooperlake-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Cooperlake-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Denverton-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Dhyana-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Genoa'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amd-psfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='auto-ibrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='no-nested-data-bp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='null-sel-clr-base'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='stibp-always-on'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Genoa-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amd-psfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='auto-ibrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='no-nested-data-bp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='null-sel-clr-base'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='stibp-always-on'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Milan'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Milan-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Milan-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amd-psfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='no-nested-data-bp'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='null-sel-clr-base'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='stibp-always-on'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-Rome-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='EPYC-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='GraniteRapids'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='prefetchiti'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='GraniteRapids-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='prefetchiti'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='GraniteRapids-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10-128'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10-256'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx10-512'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain podman[229806]: 2025-12-02 09:35:04.670638898 +0000 UTC m=+0.150857862 container cleanup 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='prefetchiti'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain systemd[1]: libpod-conmon-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e.scope: Deactivated successfully.
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Haswell-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-noTSX'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v6'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Icelake-Server-v7'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='IvyBridge-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='KnightsMill'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4fmaps'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4vnniw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512er'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512pf'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='KnightsMill-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4fmaps'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-4vnniw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512er'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512pf'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G4-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tbm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Opteron_G5-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fma4'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tbm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xop'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SapphireRapids-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='amx-tile'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-bf16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-fp16'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bitalg'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vbmi2'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrc'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fzrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='la57'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='taa-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='tsx-ldtrk'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xfd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SierraForest'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ne-convert'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cmpccxadd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='SierraForest-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ifma'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-ne-convert'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx-vnni-int8'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='bus-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cmpccxadd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fbsdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='fsrs'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ibrs-all'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mcdt-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pbrsb-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='psdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='serialize'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vaes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='vpclmulqdq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Client-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='hle'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='rtm'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Skylake-Server-v5'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512bw'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512cd'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512dq'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512f'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='avx512vl'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='invpcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pcid'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='pku'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='mpx'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v2'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v3'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='core-capability'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='split-lock-detect'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='Snowridge-v4'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='cldemote'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='erms'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='gfni'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdir64b'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='movdiri'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='xsaves'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='athlon'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='athlon-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='core2duo'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='core2duo-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='coreduo'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='coreduo-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='n270'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='n270-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='ss'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='phenom'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <blockers model='phenom-v1'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnow'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <feature name='3dnowext'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </blockers>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </mode>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </cpu>
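(For reference only, not part of the log: a minimal sketch of how the usable/blocked CPU model data in the <cpu> block above could be extracted with Python's standard library. The file name "domcapabilities.xml" is an assumption — it stands for a saved copy of the XML fragment logged above.)

    # Minimal sketch: list usable and blocked CPU models from a saved copy of
    # the libvirt domcapabilities XML fragment shown in the log above.
    # Assumption: the fragment has been saved to "domcapabilities.xml";
    # the file name and this script are illustrative only.
    import xml.etree.ElementTree as ET

    root = ET.parse("domcapabilities.xml").getroot()

    for model in root.iter("model"):
        name = model.text
        if model.get("usable") == "yes":
            print(f"usable:  {name}")
        else:
            # Collect the blocking features reported for this model, if any.
            blockers = [
                f.get("name")
                for b in root.iter("blockers")
                if b.get("model") == name
                for f in b.findall("feature")
            ]
            print(f"blocked: {name} (missing: {', '.join(blockers) or 'unknown'})")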
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <memoryBacking supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <enum name='sourceType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>file</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>anonymous</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <value>memfd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </memoryBacking>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <devices>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <disk supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='diskDevice'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>disk</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>cdrom</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>floppy</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>lun</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='bus'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>fdc</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>scsi</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>usb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>sata</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-non-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </disk>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <graphics supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vnc</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>egl-headless</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>dbus</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </graphics>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <video supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='modelType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vga</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>cirrus</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>none</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>bochs</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>ramfb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </video>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <hostdev supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='mode'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>subsystem</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='startupPolicy'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>default</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>mandatory</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>requisite</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>optional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='subsysType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>usb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pci</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>scsi</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='capsType'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='pciBackend'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </hostdev>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <rng supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtio-non-transitional</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendModel'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>random</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>egd</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>builtin</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </rng>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <filesystem supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='driverType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>path</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>handle</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>virtiofs</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </filesystem>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <tpm supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tpm-tis</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tpm-crb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendModel'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>emulator</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>external</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendVersion'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>2.0</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </tpm>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <redirdev supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='bus'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>usb</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </redirdev>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <channel supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pty</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>unix</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </channel>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <crypto supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>qemu</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendModel'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>builtin</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </crypto>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <interface supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='backendType'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>default</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>passt</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </interface>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <panic supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='model'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>isa</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>hyperv</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </panic>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <console supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='type'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>null</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vc</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pty</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>dev</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>file</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>pipe</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>stdio</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>udp</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tcp</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>unix</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>qemu-vdagent</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>dbus</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </console>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </devices>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   <features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <gic supported='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <vmcoreinfo supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <genid supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <backingStoreInput supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <backup supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <async-teardown supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <ps2 supported='yes'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <sev supported='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <sgx supported='no'/>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <hyperv supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='features'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>relaxed</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vapic</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>spinlocks</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vpindex</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>runtime</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>synic</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>stimer</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>reset</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>vendor_id</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>frequencies</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>reenlightenment</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tlbflush</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>ipi</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>avic</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>emsr_bitmap</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>xmm_input</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <defaults>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <spinlocks>4095</spinlocks>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <stimer_direct>on</stimer_direct>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <tlbflush_direct>off</tlbflush_direct>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <tlbflush_extended>off</tlbflush_extended>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </defaults>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </hyperv>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     <launchSecurity supported='yes'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       <enum name='sectype'>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:         <value>tdx</value>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:       </enum>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:     </launchSecurity>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:   </features>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: </domainCapabilities>
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
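[editor's note] The XML block that ends above is libvirt's domainCapabilities document; nova's _get_domain_capabilities fetches it to learn which CPU models, device buses and features the hypervisor can offer on this host. A minimal sketch of pulling and inspecting the same document with the libvirt Python bindings (illustration only, assuming libvirt-python is installed and qemu:///system is reachable; this is not nova's own code path):

    import libvirt                      # assumption: libvirt-python available on the host
    import xml.etree.ElementTree as ET

    conn = libvirt.openReadOnly('qemu:///system')
    # Same document nova logs above, requested for the x86_64 KVM virt type.
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    root = ET.fromstring(caps_xml)

    # CPU models the host can actually run in custom mode.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        if model.get('usable') == 'yes':
            print('usable model:', model.text)

    # Device classes libvirt reports as supported (disk, graphics, video, ...).
    for dev in root.find('devices'):
        print(dev.tag, 'supported =', dev.get('supported'))

    conn.close()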
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.617 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.618 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.618 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.618 229589 INFO nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Secure Boot support detected
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.622 229589 INFO nova.virt.libvirt.driver [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.623 229589 INFO nova.virt.libvirt.driver [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.634 229589 DEBUG nova.virt.libvirt.driver [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.660 229589 INFO nova.virt.node [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Determined node identity 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from /var/lib/nova/compute_id
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.673 229589 DEBUG nova.compute.manager [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Verified node 9ec09c1a-d246-41d7-94f4-b482f646a9f1 matches my host np0005541914.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.711 229589 INFO nova.compute.manager [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.787 229589 DEBUG oslo_concurrency.lockutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.787 229589 DEBUG oslo_concurrency.lockutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.788 229589 DEBUG oslo_concurrency.lockutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.788 229589 DEBUG nova.compute.resource_tracker [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:35:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:04.789 229589 DEBUG oslo_concurrency.processutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:35:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:05.227 229589 DEBUG oslo_concurrency.processutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
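[editor's note] The `ceph df --format=json` call above is part of the resource audit: nova shells out to ceph and reads pool statistics from the JSON reply for its RBD image backend. A rough sketch of issuing and parsing the same query (assuming the usual ceph df JSON layout, with cluster totals under "stats" and per-pool figures under "pools"; command arguments copied from the logged invocation):

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    df = json.loads(out)

    # Cluster-wide totals.
    total_gib = df['stats']['total_bytes'] / 1024 ** 3
    avail_gib = df['stats']['total_avail_bytes'] / 1024 ** 3
    print(f'cluster: {avail_gib:.1f} GiB free of {total_gib:.1f} GiB')

    # Per-pool view; "max_avail" is what can still be written given the pool's replication.
    for pool in df['pools']:
        print(pool['name'], pool['stats']['max_avail'])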
Dec 02 09:35:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6-merged.mount: Deactivated successfully.
Dec 02 09:35:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e-userdata-shm.mount: Deactivated successfully.
Dec 02 09:35:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:05.433 229589 WARNING nova.virt.libvirt.driver [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:35:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:05.435 229589 DEBUG nova.compute.resource_tracker [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13602MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:35:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:05.435 229589 DEBUG oslo_concurrency.lockutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:35:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:05.436 229589 DEBUG oslo_concurrency.lockutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:35:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:05.768 229589 DEBUG nova.compute.resource_tracker [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:35:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:05.768 229589 DEBUG nova.compute.resource_tracker [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:35:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:05.827 229589 DEBUG nova.scheduler.client.report [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 02 09:35:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:05.883 229589 DEBUG nova.scheduler.client.report [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 02 09:35:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:05.884 229589 DEBUG nova.compute.provider_tree [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
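[editor's note] The inventory dicts logged here are what the scheduler ultimately sees: Placement derives the schedulable capacity of each resource class as (total - reserved) * allocation_ratio, so the 8 host vCPUs with a 16.0 ratio become 128 schedulable VCPU units while memory and disk stay close to their physical sizes. A small worked check of those numbers (values copied from the log record above):

    # Inventory reported for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1.
    inventory = {
        'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 41,    'reserved': 0,   'allocation_ratio': 1.0},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 41.0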
Dec 02 09:35:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:05.900 229589 DEBUG nova.scheduler.client.report [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 02 09:35:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:05.928 229589 DEBUG nova.scheduler.client.report [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,COMPUTE_VOLUME_EXTEND,COMPUTE_NODE,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_RESCUE_BFV,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_LAN9118,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 02 09:35:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:05.946 229589 DEBUG oslo_concurrency.processutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:35:06 np0005541914.localdomain sshd[207918]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:35:06 np0005541914.localdomain systemd[1]: session-53.scope: Deactivated successfully.
Dec 02 09:35:06 np0005541914.localdomain systemd[1]: session-53.scope: Consumed 2min 15.864s CPU time.
Dec 02 09:35:06 np0005541914.localdomain systemd-logind[760]: Session 53 logged out. Waiting for processes to exit.
Dec 02 09:35:06 np0005541914.localdomain systemd-logind[760]: Removed session 53.
Dec 02 09:35:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:06.423 229589 DEBUG oslo_concurrency.processutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:35:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:06.429 229589 DEBUG nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 02 09:35:06 np0005541914.localdomain nova_compute[229585]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 02 09:35:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:06.429 229589 INFO nova.virt.libvirt.host [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] kernel doesn't support AMD SEV
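[editor's note] The two records above show nova's AMD SEV probe: it reads /sys/module/kvm_amd/parameters/sev, finds "N", and concludes the kernel does not support SEV on this host. A minimal sketch of the same kind of check (illustrative only, not nova's exact code):

    from pathlib import Path

    # kvm_amd exposes a "sev" module parameter; "Y"/"1" means SEV is enabled.
    param = Path('/sys/module/kvm_amd/parameters/sev')
    enabled = param.exists() and param.read_text().strip().lower() in ('y', '1')
    print('AMD SEV available:', enabled)   # reads "N" on this host, hence the log line above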
Dec 02 09:35:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:06.431 229589 DEBUG nova.compute.provider_tree [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:35:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:06.431 229589 DEBUG nova.virt.libvirt.driver [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 02 09:35:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:06.459 229589 DEBUG nova.scheduler.client.report [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:35:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:06.577 229589 DEBUG nova.compute.provider_tree [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Updating resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 generation from 2 to 3 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 02 09:35:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14216 DF PROTO=TCP SPT=33630 DPT=9882 SEQ=58620176 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547B6A30000000001030307) 
Dec 02 09:35:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:06.826 229589 DEBUG nova.compute.resource_tracker [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:35:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:06.827 229589 DEBUG oslo_concurrency.lockutils [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:35:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:06.827 229589 DEBUG nova.service [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 02 09:35:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:06.924 229589 DEBUG nova.service [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 02 09:35:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:06.925 229589 DEBUG nova.servicegroup.drivers.db [None req-03d32758-e3b3-45f7-ba50-efc6fab12914 - - - - - -] DB_Driver: join new ServiceGroup member np0005541914.localdomain to the compute group, service = <Service: host=np0005541914.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 02 09:35:08 np0005541914.localdomain sshd[229893]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:35:09 np0005541914.localdomain sshd[229893]: Invalid user admin from 45.148.10.240 port 59164
Dec 02 09:35:09 np0005541914.localdomain sshd[229893]: Connection closed by invalid user admin 45.148.10.240 port 59164 [preauth]
Dec 02 09:35:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46586 DF PROTO=TCP SPT=48348 DPT=9100 SEQ=2276887532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547C5220000000001030307) 
Dec 02 09:35:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:35:11 np0005541914.localdomain systemd[1]: tmp-crun.cZ3qbc.mount: Deactivated successfully.
Dec 02 09:35:11 np0005541914.localdomain podman[229895]: 2025-12-02 09:35:11.100990944 +0000 UTC m=+0.099023673 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:35:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:35:11 np0005541914.localdomain podman[229895]: 2025-12-02 09:35:11.210257412 +0000 UTC m=+0.208290151 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 02 09:35:11 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:35:11 np0005541914.localdomain podman[229920]: 2025-12-02 09:35:11.281512779 +0000 UTC m=+0.066926591 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 09:35:11 np0005541914.localdomain podman[229920]: 2025-12-02 09:35:11.290931479 +0000 UTC m=+0.076345271 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 02 09:35:11 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:35:11 np0005541914.localdomain sshd[229939]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:35:11 np0005541914.localdomain sshd[229941]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:35:12 np0005541914.localdomain sshd[229939]: Accepted publickey for zuul from 192.168.122.30 port 47538 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:35:12 np0005541914.localdomain systemd-logind[760]: New session 55 of user zuul.
Dec 02 09:35:12 np0005541914.localdomain systemd[1]: Started Session 55 of User zuul.
Dec 02 09:35:12 np0005541914.localdomain sshd[229939]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:35:12 np0005541914.localdomain python3.9[230052]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:35:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47414 DF PROTO=TCP SPT=53308 DPT=9105 SEQ=4237463935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547CFA20000000001030307) 
Dec 02 09:35:13 np0005541914.localdomain sshd[229941]: Received disconnect from 43.225.159.111 port 55296:11:  [preauth]
Dec 02 09:35:13 np0005541914.localdomain sshd[229941]: Disconnected from authenticating user root 43.225.159.111 port 55296 [preauth]
Dec 02 09:35:14 np0005541914.localdomain sudo[230164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rrnfrhmnjwgnvfixlvugsuhvjkykxngh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668113.8325274-71-209199501409586/AnsiballZ_systemd_service.py
Dec 02 09:35:14 np0005541914.localdomain sudo[230164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:14 np0005541914.localdomain python3.9[230166]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:35:14 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:35:14 np0005541914.localdomain systemd-sysv-generator[230192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:35:14 np0005541914.localdomain systemd-rc-local-generator[230189]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:35:14 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:14 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:14 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:14 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:14 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:35:14 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:14 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:14 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:14 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:15 np0005541914.localdomain sudo[230164]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:15 np0005541914.localdomain python3.9[230310]: ansible-ansible.builtin.service_facts Invoked
Dec 02 09:35:15 np0005541914.localdomain network[230327]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 09:35:15 np0005541914.localdomain network[230328]: 'network-scripts' will be removed from distribution in near future.
Dec 02 09:35:15 np0005541914.localdomain network[230329]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 09:35:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49660 DF PROTO=TCP SPT=55914 DPT=9102 SEQ=4108157995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547DAEE0000000001030307) 
Dec 02 09:35:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49662 DF PROTO=TCP SPT=55914 DPT=9102 SEQ=4108157995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547E6E20000000001030307) 
Dec 02 09:35:19 np0005541914.localdomain sudo[230360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:35:19 np0005541914.localdomain sudo[230360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:35:19 np0005541914.localdomain sudo[230360]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:19 np0005541914.localdomain sudo[230378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:35:19 np0005541914.localdomain sudo[230378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:35:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:35:20 np0005541914.localdomain sshd[230420]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:35:20 np0005541914.localdomain sudo[230378]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:21 np0005541914.localdomain sshd[230420]: Received disconnect from 217.170.199.90 port 55442:11:  [preauth]
Dec 02 09:35:21 np0005541914.localdomain sshd[230420]: Disconnected from authenticating user root 217.170.199.90 port 55442 [preauth]
Dec 02 09:35:21 np0005541914.localdomain sudo[230448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:35:21 np0005541914.localdomain sudo[230448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:35:21 np0005541914.localdomain sudo[230448]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18167 DF PROTO=TCP SPT=46438 DPT=9101 SEQ=2661148423 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD547F1220000000001030307) 
Dec 02 09:35:24 np0005541914.localdomain sudo[230650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dekizpwbpbmbjukmsvpjaozerjvwwheg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668124.2487004-128-256386933961498/AnsiballZ_systemd_service.py
Dec 02 09:35:24 np0005541914.localdomain sudo[230650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:24 np0005541914.localdomain python3.9[230652]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:35:24 np0005541914.localdomain sudo[230650]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:25 np0005541914.localdomain sudo[230761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qadnvetegtsjikmhvyyjkcbafvfxkwcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668125.1981773-158-177980398751013/AnsiballZ_file.py
Dec 02 09:35:25 np0005541914.localdomain sudo[230761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:26 np0005541914.localdomain python3.9[230763]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:26 np0005541914.localdomain sudo[230761]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:26 np0005541914.localdomain systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 76.3 (254 of 333 items), suggesting rotation.
Dec 02 09:35:26 np0005541914.localdomain systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 02 09:35:26 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:35:26 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:35:26 np0005541914.localdomain sudo[230872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bvstzllfkvpvbbwrkcdtozumvspzpqst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668126.3675945-182-137908912725634/AnsiballZ_file.py
Dec 02 09:35:26 np0005541914.localdomain sudo[230872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:26 np0005541914.localdomain python3.9[230874]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:26 np0005541914.localdomain sudo[230872]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4590 DF PROTO=TCP SPT=42676 DPT=9101 SEQ=2818963414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54806620000000001030307) 
Dec 02 09:35:27 np0005541914.localdomain sudo[230982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oihxqjyebzdfuvzpelgkigsegrcvdwqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668127.2394195-209-148654394503959/AnsiballZ_command.py
Dec 02 09:35:27 np0005541914.localdomain sudo[230982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:27 np0005541914.localdomain python3.9[230984]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:35:27 np0005541914.localdomain sudo[230982]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:28 np0005541914.localdomain python3.9[231094]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 02 09:35:29 np0005541914.localdomain sudo[231202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xdvwkvwavtzelsuxyjnyeikvzjputhej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668129.4839556-263-67294873007433/AnsiballZ_systemd_service.py
Dec 02 09:35:29 np0005541914.localdomain sudo[231202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:30 np0005541914.localdomain python3.9[231204]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:35:30 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:35:30 np0005541914.localdomain systemd-sysv-generator[231233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:35:30 np0005541914.localdomain systemd-rc-local-generator[231228]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:35:30 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:30 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:30 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:30 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:30 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:35:30 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:30 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:30 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:30 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:35:30 np0005541914.localdomain sudo[231202]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:31 np0005541914.localdomain sudo[231348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wyizfpvjplevjcieuhnpfnchhyqfcdev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668130.963406-287-152808354735308/AnsiballZ_command.py
Dec 02 09:35:31 np0005541914.localdomain sudo[231348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:31 np0005541914.localdomain python3.9[231350]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:35:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49664 DF PROTO=TCP SPT=55914 DPT=9102 SEQ=4108157995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54817220000000001030307) 
Dec 02 09:35:31 np0005541914.localdomain sudo[231348]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:32 np0005541914.localdomain sudo[231459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uljhuznncovdxlvztyllnnajosdlrajb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668132.675667-314-20493359323771/AnsiballZ_file.py
Dec 02 09:35:32 np0005541914.localdomain sudo[231459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:35:33 np0005541914.localdomain systemd[1]: tmp-crun.IvMKa8.mount: Deactivated successfully.
Dec 02 09:35:33 np0005541914.localdomain podman[231462]: 2025-12-02 09:35:33.091043809 +0000 UTC m=+0.108007088 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:35:33 np0005541914.localdomain podman[231462]: 2025-12-02 09:35:33.100411948 +0000 UTC m=+0.117375107 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:35:33 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:35:33 np0005541914.localdomain python3.9[231461]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:35:33 np0005541914.localdomain sudo[231459]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6224 DF PROTO=TCP SPT=54898 DPT=9882 SEQ=2354412042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5481FC00000000001030307) 
Dec 02 09:35:34 np0005541914.localdomain python3.9[231588]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:35:34 np0005541914.localdomain python3.9[231698]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6225 DF PROTO=TCP SPT=54898 DPT=9882 SEQ=2354412042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54823E30000000001030307) 
Dec 02 09:35:35 np0005541914.localdomain python3.9[231784]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668134.2179036-362-20581371179412/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=6bcdd3baf62a4327544f9fc7c77a2d84b60d8110 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:35:35 np0005541914.localdomain sudo[231892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zxrblwubnwwgctyymqxwgkdgffbqgzmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668135.4079797-407-228581421205973/AnsiballZ_group.py
Dec 02 09:35:35 np0005541914.localdomain sudo[231892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:36 np0005541914.localdomain python3.9[231894]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec 02 09:35:36 np0005541914.localdomain sudo[231892]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6226 DF PROTO=TCP SPT=54898 DPT=9882 SEQ=2354412042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5482BE20000000001030307) 
Dec 02 09:35:36 np0005541914.localdomain sudo[232002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sfzwghgbmcqszssytxvqtfbwdrmixqkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668136.450696-440-91100353200926/AnsiballZ_getent.py
Dec 02 09:35:36 np0005541914.localdomain sudo[232002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:37 np0005541914.localdomain python3.9[232004]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec 02 09:35:37 np0005541914.localdomain sudo[232002]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:37 np0005541914.localdomain sudo[232113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aypacjvscgubzryozwpkqclxseazftqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668137.281227-464-3657462633506/AnsiballZ_group.py
Dec 02 09:35:37 np0005541914.localdomain sudo[232113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:37 np0005541914.localdomain sshd[232116]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:35:37 np0005541914.localdomain python3.9[232115]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 02 09:35:37 np0005541914.localdomain groupadd[232118]: group added to /etc/group: name=ceilometer, GID=42405
Dec 02 09:35:37 np0005541914.localdomain groupadd[232118]: group added to /etc/gshadow: name=ceilometer
Dec 02 09:35:37 np0005541914.localdomain groupadd[232118]: new group: name=ceilometer, GID=42405
Dec 02 09:35:37 np0005541914.localdomain sudo[232113]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:38 np0005541914.localdomain sshd[232116]: Received disconnect from 34.78.29.97 port 55634:11: Bye Bye [preauth]
Dec 02 09:35:38 np0005541914.localdomain sshd[232116]: Disconnected from authenticating user root 34.78.29.97 port 55634 [preauth]
Dec 02 09:35:38 np0005541914.localdomain sudo[232231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qpmpvcjqjpfmdsyzqyguvbexvsdyjhho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668138.033399-487-21963552205284/AnsiballZ_user.py
Dec 02 09:35:38 np0005541914.localdomain sudo[232231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:38 np0005541914.localdomain python3.9[232233]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005541914.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 02 09:35:38 np0005541914.localdomain useradd[232235]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Dec 02 09:35:38 np0005541914.localdomain useradd[232235]: add 'ceilometer' to group 'libvirt'
Dec 02 09:35:38 np0005541914.localdomain useradd[232235]: add 'ceilometer' to shadow group 'libvirt'
Dec 02 09:35:38 np0005541914.localdomain sudo[232231]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22111 DF PROTO=TCP SPT=47092 DPT=9100 SEQ=51859181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54839220000000001030307) 
Dec 02 09:35:40 np0005541914.localdomain python3.9[232349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:41 np0005541914.localdomain python3.9[232435]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764668140.0778627-566-268474356238982/.source.conf _original_basename=ceilometer.conf follow=False checksum=9b40aa523dc31738ea523cc852832670ccea382a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:35:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:35:42 np0005541914.localdomain podman[232527]: 2025-12-02 09:35:42.097298009 +0000 UTC m=+0.092485365 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Dec 02 09:35:42 np0005541914.localdomain podman[232527]: 2025-12-02 09:35:42.106864414 +0000 UTC m=+0.102051770 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 09:35:42 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:35:42 np0005541914.localdomain systemd[1]: tmp-crun.uqpZO1.mount: Deactivated successfully.
Dec 02 09:35:42 np0005541914.localdomain podman[232530]: 2025-12-02 09:35:42.205598386 +0000 UTC m=+0.198332884 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, container_name=ovn_controller)
Dec 02 09:35:42 np0005541914.localdomain python3.9[232557]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:42 np0005541914.localdomain podman[232530]: 2025-12-02 09:35:42.312285641 +0000 UTC m=+0.305020099 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 09:35:42 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:35:42 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46587 DF PROTO=TCP SPT=48348 DPT=9100 SEQ=2276887532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54843230000000001030307) 
Dec 02 09:35:42 np0005541914.localdomain python3.9[232671]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764668141.7454681-566-90961809143074/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:43 np0005541914.localdomain python3.9[232779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:43 np0005541914.localdomain python3.9[232865]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764668142.9077556-566-148277575007804/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:44 np0005541914.localdomain python3.9[232973]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:35:45 np0005541914.localdomain python3.9[233081]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:35:45 np0005541914.localdomain python3.9[233189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:45 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:45.926 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:35:45 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:35:45.958 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:35:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29516 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=3925383142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548501E0000000001030307) 
Dec 02 09:35:46 np0005541914.localdomain python3.9[233275]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668145.4036455-743-176029571531069/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:46 np0005541914.localdomain python3.9[233383]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:47 np0005541914.localdomain python3.9[233438]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:47 np0005541914.localdomain python3.9[233546]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:48 np0005541914.localdomain python3.9[233632]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668147.4501677-743-101728219918554/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=d15068604cf730dd6e7b88a19d62f57d3a39f94f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6228 DF PROTO=TCP SPT=54898 DPT=9882 SEQ=2354412042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5485B220000000001030307) 
Dec 02 09:35:48 np0005541914.localdomain python3.9[233740]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:49 np0005541914.localdomain python3.9[233826]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668148.5240872-743-103922710603031/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:49 np0005541914.localdomain python3.9[233934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:50 np0005541914.localdomain python3.9[234020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668149.5444264-743-23224108120926/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:50 np0005541914.localdomain python3.9[234128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:51 np0005541914.localdomain python3.9[234214]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668150.5383701-743-216745527289083/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=7e5ab36b7368c1d4a00810e02af11a7f7d7c84e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4592 DF PROTO=TCP SPT=42676 DPT=9101 SEQ=2818963414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54867220000000001030307) 
Dec 02 09:35:52 np0005541914.localdomain python3.9[234322]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:52 np0005541914.localdomain python3.9[234408]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668151.5860927-743-44243444731744/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:53 np0005541914.localdomain python3.9[234516]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:53 np0005541914.localdomain python3.9[234602]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668152.7256374-743-178796731201777/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=0e4ea521b0035bea70b7a804346a5c89364dcbc3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:54 np0005541914.localdomain python3.9[234710]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:55 np0005541914.localdomain python3.9[234796]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668153.8120742-743-51626548515279/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=b056dcaaba7624b93826bb95ee9e82f81bde6c72 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:56 np0005541914.localdomain python3.9[234904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:56 np0005541914.localdomain python3.9[234990]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668155.6411343-743-172783995893710/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=885ccc6f5edd8803cb385bdda5648d0b3017b4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58328 DF PROTO=TCP SPT=55876 DPT=9101 SEQ=2613760775 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5487BA20000000001030307) 
Dec 02 09:35:57 np0005541914.localdomain python3.9[235098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:35:58 np0005541914.localdomain python3.9[235184]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668156.6742268-743-151556739770795/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:35:58 np0005541914.localdomain sudo[235292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvphpwofwoctdqsrfpgqtqigkkhkvwtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668158.6124513-1208-124982843289016/AnsiballZ_file.py
Dec 02 09:35:58 np0005541914.localdomain sudo[235292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:59 np0005541914.localdomain python3.9[235294]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:35:59 np0005541914.localdomain sudo[235292]: pam_unix(sudo:session): session closed for user root
Dec 02 09:35:59 np0005541914.localdomain sudo[235402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zsptyndyvbhsdggnvalbcmmagtfcjqaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668159.3109212-1232-204432604984228/AnsiballZ_systemd_service.py
Dec 02 09:35:59 np0005541914.localdomain sudo[235402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:35:59 np0005541914.localdomain python3.9[235404]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:35:59 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:36:00 np0005541914.localdomain systemd-rc-local-generator[235428]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:36:00 np0005541914.localdomain systemd-sysv-generator[235432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:36:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:36:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:00 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:00 np0005541914.localdomain systemd[1]: Listening on Podman API Socket.
Dec 02 09:36:00 np0005541914.localdomain sudo[235402]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:00 np0005541914.localdomain sudo[235552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jryohfznvevlvuvcnejbtjfmfovlvlhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668160.6335156-1259-98420025090222/AnsiballZ_stat.py
Dec 02 09:36:00 np0005541914.localdomain sudo[235552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:01 np0005541914.localdomain python3.9[235554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:36:01 np0005541914.localdomain sudo[235552]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:01 np0005541914.localdomain sudo[235640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sbbycnwdxysvnxfbvfaezeseqguzeoxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668160.6335156-1259-98420025090222/AnsiballZ_copy.py
Dec 02 09:36:01 np0005541914.localdomain sudo[235640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29520 DF PROTO=TCP SPT=38034 DPT=9102 SEQ=3925383142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5488D220000000001030307) 
Dec 02 09:36:01 np0005541914.localdomain python3.9[235642]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668160.6335156-1259-98420025090222/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:36:01 np0005541914.localdomain sudo[235640]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:01 np0005541914.localdomain sudo[235695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymcpikjqxzsljsfjkijcqjwlnxjtnvlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668160.6335156-1259-98420025090222/AnsiballZ_stat.py
Dec 02 09:36:01 np0005541914.localdomain sudo[235695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:02 np0005541914.localdomain python3.9[235697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:36:02 np0005541914.localdomain sudo[235695]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:02 np0005541914.localdomain sudo[235783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qicajfzemmhyxjrnyacrrlfkolhsydqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668160.6335156-1259-98420025090222/AnsiballZ_copy.py
Dec 02 09:36:02 np0005541914.localdomain sudo[235783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:02 np0005541914.localdomain python3.9[235785]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668160.6335156-1259-98420025090222/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:36:02 np0005541914.localdomain sudo[235783]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:36:03.144 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:36:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:36:03.146 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:36:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:36:03.147 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:36:03 np0005541914.localdomain sudo[235893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewuztzkutctcwcgakdfuctlwubgzwhff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668163.1091068-1343-74336279440281/AnsiballZ_container_config_data.py
Dec 02 09:36:03 np0005541914.localdomain sudo[235893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:36:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50877 DF PROTO=TCP SPT=53216 DPT=9882 SEQ=87131067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54894F10000000001030307) 
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.644 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.644 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.645 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.645 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.659 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.659 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.660 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.660 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.660 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.660 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.661 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.661 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.661 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:36:03 np0005541914.localdomain podman[235896]: 2025-12-02 09:36:03.666672856 +0000 UTC m=+0.103753550 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.678 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.678 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.679 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.679 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:36:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:03.679 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:36:03 np0005541914.localdomain podman[235896]: 2025-12-02 09:36:03.682779404 +0000 UTC m=+0.119860078 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 09:36:03 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:36:03 np0005541914.localdomain python3.9[235895]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec 02 09:36:03 np0005541914.localdomain sudo[235893]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:04.139 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:36:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:04.364 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:36:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:04.367 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13598MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:36:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:04.367 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:36:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:04.367 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:36:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:04.425 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:36:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:04.426 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:36:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:04.447 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:36:04 np0005541914.localdomain sudo[236044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jgcsebbzwxzupzjuqnessyzlmuciekyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668164.054244-1370-56475672969409/AnsiballZ_container_config_hash.py
Dec 02 09:36:04 np0005541914.localdomain sudo[236044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50878 DF PROTO=TCP SPT=53216 DPT=9882 SEQ=87131067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54898E20000000001030307) 
Dec 02 09:36:04 np0005541914.localdomain python3.9[236046]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:36:04 np0005541914.localdomain sudo[236044]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:04.902 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:36:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:04.909 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:36:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:04.927 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:36:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:04.929 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:36:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:36:04.930 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:36:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50879 DF PROTO=TCP SPT=53216 DPT=9882 SEQ=87131067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548A0E20000000001030307) 
Dec 02 09:36:06 np0005541914.localdomain sudo[236175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tsnyqthcoclloxuxgmhnozaucffbfxle ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764668165.341297-1399-230534486116631/AnsiballZ_edpm_container_manage.py
Dec 02 09:36:06 np0005541914.localdomain sudo[236175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:07 np0005541914.localdomain python3[236177]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:36:07 np0005541914.localdomain python3[236177]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "343ba269c9fe0a56d7572c8ca328dbce002017c4dd4986f43667971dd03085c2",
                                                                    "Digest": "sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:667029e1ec7e63fffa1a096f432f6160b441ba36df1bddc9066cbd1129b82009"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:21:53.58682213Z",
                                                                    "Config": {
                                                                         "User": "root",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 505175293,
                                                                    "VirtualSize": 505175293,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:a47016624274f5ebad76019f5a2e465c1737f96caa539b36f90ab8e33592f415",
                                                                              "sha256:38a03f5e96658211fb28e2f87c11ffad531281d1797368f48e6cd4af7ac97c0e"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "root",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:14:56.244673147Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:14:56.960273159Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage ceilometer",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:15:37.588899909Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:15:41.197123864Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:21:19.680010224Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ceilometer-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:21:53.584924649Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-compute && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:21:56.278821402Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Dec 02 09:36:07 np0005541914.localdomain podman[236228]: 2025-12-02 09:36:07.507566399 +0000 UTC m=+0.148951819 container remove 814af8db360b2d0b2332586abd412d0c81d6c73cdd91f55a96f6d160d50ed3ae (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, build-date=2025-11-19T00:11:48Z, konflux.additional-tags=17.1.12 17.1_20251118.1, tcib_managed=true, batch=17.1_20251118.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:dd3e22348293588538689be8c51c23472fd4ca53650b3898401947ef9c7e1a05, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, vcs-ref=073ea4b06e5aa460399b0c251f416da40b228676, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp17/openstack-ceilometer-compute, release=1761123044, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '885e9e62222ac12bce952717b40ccfc4'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=073ea4b06e5aa460399b0c251f416da40b228676, distribution-scope=public, io.buildah.version=1.41.4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., version=17.1.12, architecture=x86_64, cpe=cpe:/a:redhat:rhel_e4s:9.2::appstream)
Dec 02 09:36:07 np0005541914.localdomain python3[236177]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute
Dec 02 09:36:07 np0005541914.localdomain podman[236242]: 
Dec 02 09:36:07 np0005541914.localdomain podman[236242]: 2025-12-02 09:36:07.623632979 +0000 UTC m=+0.093866555 container create a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:36:07 np0005541914.localdomain podman[236242]: 2025-12-02 09:36:07.579442361 +0000 UTC m=+0.049702528 image pull  quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Dec 02 09:36:07 np0005541914.localdomain python3[236177]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start
Dec 02 09:36:07 np0005541914.localdomain sudo[236175]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:08 np0005541914.localdomain sudo[236388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lvtjrffyqkysroxugprhpwmhwjaomagp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668168.457263-1424-152855215749215/AnsiballZ_stat.py
Dec 02 09:36:08 np0005541914.localdomain sudo[236388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:08 np0005541914.localdomain python3.9[236390]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:36:08 np0005541914.localdomain sudo[236388]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:09 np0005541914.localdomain sudo[236500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scwjyljrbpcuoyuorcpphjijxzezmrwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668169.2078984-1450-76677055585918/AnsiballZ_file.py
Dec 02 09:36:09 np0005541914.localdomain sudo[236500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:09 np0005541914.localdomain python3.9[236502]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:36:09 np0005541914.localdomain sudo[236500]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24097 DF PROTO=TCP SPT=35220 DPT=9100 SEQ=2092379517 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548AF230000000001030307) 
Dec 02 09:36:10 np0005541914.localdomain sudo[236609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ytnkqfnidjzahskqjrqehjhqiavcmrvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668169.7661154-1450-193599814280817/AnsiballZ_copy.py
Dec 02 09:36:10 np0005541914.localdomain sudo[236609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:10 np0005541914.localdomain python3.9[236611]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668169.7661154-1450-193599814280817/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:36:10 np0005541914.localdomain sudo[236609]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:10 np0005541914.localdomain sudo[236664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-actotfbgqbbdtmccaaebzsnmxlhzpumd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668169.7661154-1450-193599814280817/AnsiballZ_systemd.py
Dec 02 09:36:10 np0005541914.localdomain sudo[236664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:11 np0005541914.localdomain python3.9[236666]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:36:11 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:36:11 np0005541914.localdomain systemd-rc-local-generator[236694]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:36:11 np0005541914.localdomain systemd-sysv-generator[236697]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:36:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:36:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:11 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:11 np0005541914.localdomain sudo[236664]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:11 np0005541914.localdomain sudo[236755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tmexxtqdxmmlmznhmepqwsmyhyxmsbpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668169.7661154-1450-193599814280817/AnsiballZ_systemd.py
Dec 02 09:36:11 np0005541914.localdomain sudo[236755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:12 np0005541914.localdomain python3.9[236757]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:36:12 np0005541914.localdomain systemd-sysv-generator[236803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:36:12 np0005541914.localdomain systemd-rc-local-generator[236797]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:36:12 np0005541914.localdomain podman[236759]: 2025-12-02 09:36:12.274515889 +0000 UTC m=+0.111634465 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 02 09:36:12 np0005541914.localdomain podman[236759]: 2025-12-02 09:36:12.286049845 +0000 UTC m=+0.123168431 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: Starting ceilometer_agent_compute container...
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:36:12 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4498335cbf3e241b11d64a5f10bf301f1a8b589a19155db4d4e0636308a7a555/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 02 09:36:12 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4498335cbf3e241b11d64a5f10bf301f1a8b589a19155db4d4e0636308a7a555/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 02 09:36:12 np0005541914.localdomain podman[236816]: 2025-12-02 09:36:12.693674485 +0000 UTC m=+0.148066311 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller)
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:36:12 np0005541914.localdomain podman[236817]: 2025-12-02 09:36:12.715264593 +0000 UTC m=+0.165745808 container init a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: + sudo -E kolla_set_configs
Dec 02 09:36:12 np0005541914.localdomain podman[236816]: 2025-12-02 09:36:12.741923657 +0000 UTC m=+0.196315473 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: sudo: unable to send audit message: Operation not permitted
Dec 02 09:36:12 np0005541914.localdomain podman[236817]: ceilometer_agent_compute
Dec 02 09:36:12 np0005541914.localdomain sudo[236863]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 09:36:12 np0005541914.localdomain podman[236817]: 2025-12-02 09:36:12.756191799 +0000 UTC m=+0.206673054 container start a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 02 09:36:12 np0005541914.localdomain sudo[236863]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:36:12 np0005541914.localdomain sudo[236863]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: Started ceilometer_agent_compute container.
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Validating config file
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Copying service configuration files
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 02 09:36:12 np0005541914.localdomain sudo[236755]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: INFO:__main__:Writing out command to execute
Dec 02 09:36:12 np0005541914.localdomain sudo[236863]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: ++ cat /run_command
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: + ARGS=
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: + sudo kolla_copy_cacerts
Dec 02 09:36:12 np0005541914.localdomain sudo[236880]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: sudo: unable to send audit message: Operation not permitted
Dec 02 09:36:12 np0005541914.localdomain sudo[236880]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 02 09:36:12 np0005541914.localdomain sudo[236880]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 02 09:36:12 np0005541914.localdomain sudo[236880]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: + [[ ! -n '' ]]
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: + . kolla_extend_start
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: + umask 0022
Dec 02 09:36:12 np0005541914.localdomain ceilometer_agent_compute[236841]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 02 09:36:12 np0005541914.localdomain podman[236864]: 2025-12-02 09:36:12.857403279 +0000 UTC m=+0.099718285 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 02 09:36:12 np0005541914.localdomain podman[236864]: 2025-12-02 09:36:12.887065957 +0000 UTC m=+0.129380933 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 09:36:12 np0005541914.localdomain podman[236864]: unhealthy
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:36:12 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 02 09:36:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9455 DF PROTO=TCP SPT=41292 DPT=9105 SEQ=4090967844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548BA220000000001030307) 
Dec 02 09:36:13 np0005541914.localdomain sudo[236995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ooxqznukgsfvqfxxsggrfrrkamjigmen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668173.024246-1523-138888766473804/AnsiballZ_systemd.py
Dec 02 09:36:13 np0005541914.localdomain sudo[236995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:13 np0005541914.localdomain systemd[1]: tmp-crun.8jVGNC.mount: Deactivated successfully.
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.564 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.565 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.565 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.565 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.566 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.566 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.566 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.566 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.566 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.567 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.568 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.569 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.570 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.571 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.572 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.573 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.574 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.575 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.576 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.577 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.578 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.579 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.580 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.581 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.581 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.581 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.581 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.581 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.596 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.597 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.598 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec 02 09:36:13 np0005541914.localdomain python3.9[236997]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:36:13 np0005541914.localdomain systemd[1]: Stopping ceilometer_agent_compute container...
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.674 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.739 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.739 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.739 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.739 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.740 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.741 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.742 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.743 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.744 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.745 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.746 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.747 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.748 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.749 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.750 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.751 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.752 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.753 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.754 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.755 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.756 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.757 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.758 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.762 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.770 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.773 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.773 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.774 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.775 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.776 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.777 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.777 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.845 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.846 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.846 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec 02 09:36:13 np0005541914.localdomain ceilometer_agent_compute[236841]: 2025-12-02 09:36:13.851 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec 02 09:36:13 np0005541914.localdomain virtqemud[228953]: End of file while reading data: Input/output error
Dec 02 09:36:13 np0005541914.localdomain virtqemud[228953]: End of file while reading data: Input/output error
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: libpod-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.scope: Deactivated successfully.
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: libpod-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.scope: Consumed 1.225s CPU time.
Dec 02 09:36:14 np0005541914.localdomain podman[237004]: 2025-12-02 09:36:14.029083875 +0000 UTC m=+0.355780987 container died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.timer: Deactivated successfully.
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b-userdata-shm.mount: Deactivated successfully.
Dec 02 09:36:14 np0005541914.localdomain podman[237004]: 2025-12-02 09:36:14.140901684 +0000 UTC m=+0.467598706 container cleanup a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:36:14 np0005541914.localdomain podman[237004]: ceilometer_agent_compute
Dec 02 09:36:14 np0005541914.localdomain podman[237035]: 2025-12-02 09:36:14.236082958 +0000 UTC m=+0.056785198 container cleanup a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 02 09:36:14 np0005541914.localdomain podman[237035]: ceilometer_agent_compute
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: Stopped ceilometer_agent_compute container.
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: Starting ceilometer_agent_compute container...
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:36:14 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4498335cbf3e241b11d64a5f10bf301f1a8b589a19155db4d4e0636308a7a555/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec 02 09:36:14 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4498335cbf3e241b11d64a5f10bf301f1a8b589a19155db4d4e0636308a7a555/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:36:14 np0005541914.localdomain podman[237046]: 2025-12-02 09:36:14.376842762 +0000 UTC m=+0.109660893 container init a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: + sudo -E kolla_set_configs
Dec 02 09:36:14 np0005541914.localdomain sudo[237067]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 02 09:36:14 np0005541914.localdomain sudo[237067]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: sudo: unable to send audit message: Operation not permitted
Dec 02 09:36:14 np0005541914.localdomain sudo[237067]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:36:14 np0005541914.localdomain podman[237046]: 2025-12-02 09:36:14.419415229 +0000 UTC m=+0.152233270 container start a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:36:14 np0005541914.localdomain podman[237046]: ceilometer_agent_compute
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: Started ceilometer_agent_compute container.
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Validating config file
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Copying service configuration files
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: INFO:__main__:Writing out command to execute
Dec 02 09:36:14 np0005541914.localdomain sudo[237067]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: ++ cat /run_command
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: + ARGS=
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: + sudo kolla_copy_cacerts
Dec 02 09:36:14 np0005541914.localdomain sudo[237082]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: sudo: unable to send audit message: Operation not permitted
Dec 02 09:36:14 np0005541914.localdomain sudo[237082]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 02 09:36:14 np0005541914.localdomain sudo[237082]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Dec 02 09:36:14 np0005541914.localdomain sudo[237082]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: + [[ ! -n '' ]]
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: + . kolla_extend_start
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: + umask 0022
Dec 02 09:36:14 np0005541914.localdomain ceilometer_agent_compute[237061]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec 02 09:36:14 np0005541914.localdomain sudo[236995]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:14 np0005541914.localdomain podman[237070]: 2025-12-02 09:36:14.480671804 +0000 UTC m=+0.055685414 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 02 09:36:14 np0005541914.localdomain podman[237070]: 2025-12-02 09:36:14.513805649 +0000 UTC m=+0.088819179 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 09:36:14 np0005541914.localdomain podman[237070]: unhealthy
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:36:14 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.200 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.201 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.202 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.203 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.204 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.205 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.206 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.207 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.208 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.209 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.210 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.211 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.212 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.213 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.214 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
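[editor note] The block ending at the asterisk separator above is the standard oslo.config startup dump: cotyledon's oslo_config_glue hook calls ConfigOpts.log_opt_values() for each spawned worker, which logs every registered option at DEBUG level and masks options registered with secret=True (transport_url, coordination.backend_url, publisher.telemetry_secret, and so on) as ****. A minimal standalone sketch of that call, assuming a bare oslo.config setup rather than Ceilometer's actual wiring:

    # minimal sketch (assumed standalone usage, not Ceilometer's own startup path):
    # dump every registered option at DEBUG level, the way the lines above were produced.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    OPTS = [
        cfg.StrOpt('pipeline_cfg_file', default='pipeline.yaml'),
        cfg.StrOpt('transport_url', secret=True),  # secret=True values are printed as ****
    ]
    CONF = cfg.ConfigOpts()
    CONF.register_opts(OPTS)
    CONF([], project='demo')                      # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)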
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.232 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.233 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.235 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
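[editor note] The two INFO lines above are the dynamic-pollster discovery pass: the polling manager walks each directory in pollsters_definitions_dirs and loads any YAML definitions it finds; here /etc/ceilometer/pollsters.d is empty, so only the built-in compute pollsters are used. A rough sketch of that scan, assuming the directory layout logged above (hypothetical helper, not Ceilometer's actual code):

    import glob
    import os

    def find_dynamic_pollster_files(dirs=('/etc/ceilometer/pollsters.d',)):
        """Return the YAML definition files found in the configured directories."""
        found = []
        for d in dirs:
            if not os.path.isdir(d):
                continue
            found.extend(sorted(glob.glob(os.path.join(d, '*.yaml'))))
        return found

    # With an empty /etc/ceilometer/pollsters.d this returns [], which matches the
    # "No dynamic pollsters found in folder" message above.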
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.253 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
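[editor note] The connection to qemu:///system goes through the libvirt Python bindings; with compute.instance_discovery_method = libvirt_metadata (logged above), instances are discovered by reading the Nova metadata embedded in each domain's XML rather than by querying the Nova API. A minimal read-only sketch using the libvirt bindings directly, not Ceilometer's wrapper in compute/virt/libvirt/utils.py:

    import libvirt

    # Open a read-only connection to the same URI the agent logs above.
    conn = libvirt.openReadOnly('qemu:///system')
    try:
        for dom in conn.listAllDomains():
            # Each running instance appears as a libvirt domain; Ceilometer reads the
            # Nova metadata in the domain XML to map it back to an OpenStack instance.
            print(dom.UUIDString(), dom.name(), dom.isActive())
    finally:
        conn.close()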
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.397 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.397 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.397 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.397 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.397 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.397 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.398 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.399 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.400 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.401 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.402 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.403 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.404 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.405 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.406 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.407 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.408 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.409 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.410 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.411 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.412 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.413 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.414 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.415 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.416 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.417 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.418 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.419 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.420 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.421 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.422 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.423 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.423 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.423 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.426 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.431 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:36:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:36:15 np0005541914.localdomain sudo[237203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vfagmrwdfcnyqestwhlohamlfdmsneea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668175.5734465-1547-62652876652314/AnsiballZ_stat.py
Dec 02 09:36:15 np0005541914.localdomain sudo[237203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:15 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47419 DF PROTO=TCP SPT=53308 DPT=9105 SEQ=4237463935 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548C5220000000001030307) 
Dec 02 09:36:16 np0005541914.localdomain python3.9[237205]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:36:16 np0005541914.localdomain sudo[237203]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:16 np0005541914.localdomain sudo[237291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ezlvnglotpeagqvdixnjwlmonkqwjqen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668175.5734465-1547-62652876652314/AnsiballZ_copy.py
Dec 02 09:36:16 np0005541914.localdomain sudo[237291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:16 np0005541914.localdomain python3.9[237293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668175.5734465-1547-62652876652314/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:36:16 np0005541914.localdomain sudo[237291]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:17 np0005541914.localdomain sudo[237401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nmdcuimdtrvqtzdmnfpxiriysybewgss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668176.962986-1598-6847384065038/AnsiballZ_container_config_data.py
Dec 02 09:36:17 np0005541914.localdomain sudo[237401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:17 np0005541914.localdomain python3.9[237403]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec 02 09:36:17 np0005541914.localdomain sudo[237401]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:17 np0005541914.localdomain sudo[237511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ppzokxhcollripkibeeasaybsmgxqrsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668177.7301886-1625-142587158139554/AnsiballZ_container_config_hash.py
Dec 02 09:36:17 np0005541914.localdomain sudo[237511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:18 np0005541914.localdomain python3.9[237513]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:36:18 np0005541914.localdomain sudo[237511]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:18 np0005541914.localdomain sudo[237621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhknthneovskmzvtbzyagxtvhugnlirh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764668178.5562894-1654-227523348148533/AnsiballZ_edpm_container_manage.py
Dec 02 09:36:18 np0005541914.localdomain sudo[237621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50881 DF PROTO=TCP SPT=53216 DPT=9882 SEQ=87131067 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548D1220000000001030307) 
Dec 02 09:36:19 np0005541914.localdomain python3[237623]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:36:19 np0005541914.localdomain podman[237660]: 
Dec 02 09:36:19 np0005541914.localdomain podman[237660]: 2025-12-02 09:36:19.395353461 +0000 UTC m=+0.088727665 container create 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:36:19 np0005541914.localdomain podman[237660]: 2025-12-02 09:36:19.352124194 +0000 UTC m=+0.045498468 image pull  quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Dec 02 09:36:19 np0005541914.localdomain python3[237623]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Dec 02 09:36:19 np0005541914.localdomain sudo[237621]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:20 np0005541914.localdomain sudo[237806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjjqbcljmlbmvypimqdmvnwjlclzmajn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668179.9016657-1678-242700321636726/AnsiballZ_stat.py
Dec 02 09:36:20 np0005541914.localdomain sudo[237806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:20 np0005541914.localdomain python3.9[237808]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:36:20 np0005541914.localdomain sudo[237806]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:21 np0005541914.localdomain sudo[237918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uvjglrmfsqyfbnaymgxznuhjlyevnyib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668180.7215996-1705-119877105744039/AnsiballZ_file.py
Dec 02 09:36:21 np0005541914.localdomain sudo[237918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:21 np0005541914.localdomain sudo[237920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:36:21 np0005541914.localdomain sudo[237920]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:36:21 np0005541914.localdomain sudo[237920]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:21 np0005541914.localdomain sudo[237939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:36:21 np0005541914.localdomain sudo[237939]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:36:21 np0005541914.localdomain python3.9[237928]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:36:21 np0005541914.localdomain sudo[237918]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:22 np0005541914.localdomain sudo[237939]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:22 np0005541914.localdomain sudo[238095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ncnfvyygdzdlkevuoszvibkoybaoovzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668181.7171264-1705-229647947346802/AnsiballZ_copy.py
Dec 02 09:36:22 np0005541914.localdomain sudo[238095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:22 np0005541914.localdomain python3.9[238097]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668181.7171264-1705-229647947346802/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:36:22 np0005541914.localdomain sudo[238095]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:22 np0005541914.localdomain sudo[238150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lvbzdwtfjvhqmzvjshpuutfkeowyflcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668181.7171264-1705-229647947346802/AnsiballZ_systemd.py
Dec 02 09:36:22 np0005541914.localdomain sudo[238150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:22 np0005541914.localdomain python3.9[238152]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:36:22 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:36:23 np0005541914.localdomain systemd-rc-local-generator[238178]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:36:23 np0005541914.localdomain systemd-sysv-generator[238182]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:36:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:36:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:23 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:23 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47557 DF PROTO=TCP SPT=59252 DPT=9101 SEQ=335826367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548E1220000000001030307) 
Dec 02 09:36:23 np0005541914.localdomain sudo[238188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:36:23 np0005541914.localdomain sudo[238188]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:36:23 np0005541914.localdomain sudo[238188]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:23 np0005541914.localdomain sudo[238150]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:23 np0005541914.localdomain sudo[238258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-clkxvorajynhnprakddxasjwpqzaeoqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668181.7171264-1705-229647947346802/AnsiballZ_systemd.py
Dec 02 09:36:23 np0005541914.localdomain sudo[238258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:23 np0005541914.localdomain python3.9[238260]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:36:23 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:36:24 np0005541914.localdomain systemd-rc-local-generator[238286]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:36:24 np0005541914.localdomain systemd-sysv-generator[238290]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: Starting node_exporter container...
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:36:24 np0005541914.localdomain podman[238300]: 2025-12-02 09:36:24.375480755 +0000 UTC m=+0.140365943 container init 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.396Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.396Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.396Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.397Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.397Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.397Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.397Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=arp
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=bcache
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=bonding
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=cpu
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=edac
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=filefd
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=netclass
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=netdev
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=netstat
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=nfs
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=nvme
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=softnet
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=systemd
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=xfs
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.398Z caller=node_exporter.go:117 level=info collector=zfs
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.399Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 02 09:36:24 np0005541914.localdomain node_exporter[238314]: ts=2025-12-02T09:36:24.399Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:36:24 np0005541914.localdomain podman[238300]: 2025-12-02 09:36:24.420417495 +0000 UTC m=+0.185302643 container start 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:36:24 np0005541914.localdomain podman[238300]: node_exporter
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: Started node_exporter container.
Dec 02 09:36:24 np0005541914.localdomain sudo[238258]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:24 np0005541914.localdomain podman[238323]: 2025-12-02 09:36:24.520694437 +0000 UTC m=+0.094705171 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:36:24 np0005541914.localdomain podman[238323]: 2025-12-02 09:36:24.554766321 +0000 UTC m=+0.128777075 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:36:24 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:36:24 np0005541914.localdomain sudo[238451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-myarphbhwqxuzhbuagesnwneubqabbyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668184.6578097-1778-102528481262762/AnsiballZ_systemd.py
Dec 02 09:36:24 np0005541914.localdomain sudo[238451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:25 np0005541914.localdomain python3.9[238453]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: Stopping node_exporter container...
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: tmp-crun.5lhpMO.mount: Deactivated successfully.
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: libpod-3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.scope: Deactivated successfully.
Dec 02 09:36:25 np0005541914.localdomain podman[238457]: 2025-12-02 09:36:25.340748304 +0000 UTC m=+0.075379993 container died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.timer: Deactivated successfully.
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6-userdata-shm.mount: Deactivated successfully.
Dec 02 09:36:25 np0005541914.localdomain podman[238457]: 2025-12-02 09:36:25.393168205 +0000 UTC m=+0.127799864 container cleanup 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:36:25 np0005541914.localdomain podman[238457]: node_exporter
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 02 09:36:25 np0005541914.localdomain podman[238484]: 2025-12-02 09:36:25.476318267 +0000 UTC m=+0.054180896 container cleanup 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:36:25 np0005541914.localdomain podman[238484]: node_exporter
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: Stopped node_exporter container.
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: Starting node_exporter container...
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:36:25 np0005541914.localdomain podman[238496]: 2025-12-02 09:36:25.636150111 +0000 UTC m=+0.129840927 container init 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.646Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.646Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.646Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.646Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.647Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.647Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.647Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.647Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.647Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=arp
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=bcache
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=bonding
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=btrfs
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=conntrack
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=cpu
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=diskstats
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=edac
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.648Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=filefd
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=filesystem
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=infiniband
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=ipvs
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=loadavg
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=mdadm
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=meminfo
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=netclass
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=netdev
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=netstat
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=nfs
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=nfsd
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=nvme
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=schedstat
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=sockstat
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.649Z caller=node_exporter.go:117 level=info collector=softnet
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.650Z caller=node_exporter.go:117 level=info collector=systemd
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.650Z caller=node_exporter.go:117 level=info collector=tapestats
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.650Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.650Z caller=node_exporter.go:117 level=info collector=vmstat
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.650Z caller=node_exporter.go:117 level=info collector=xfs
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.650Z caller=node_exporter.go:117 level=info collector=zfs
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.651Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec 02 09:36:25 np0005541914.localdomain node_exporter[238511]: ts=2025-12-02T09:36:25.651Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:36:25 np0005541914.localdomain podman[238496]: 2025-12-02 09:36:25.671630049 +0000 UTC m=+0.165320835 container start 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:36:25 np0005541914.localdomain podman[238496]: node_exporter
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: Started node_exporter container.
Dec 02 09:36:25 np0005541914.localdomain sudo[238451]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:25 np0005541914.localdomain podman[238521]: 2025-12-02 09:36:25.734548205 +0000 UTC m=+0.061514204 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:36:25 np0005541914.localdomain podman[238521]: 2025-12-02 09:36:25.768369951 +0000 UTC m=+0.095335980 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:36:25 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:36:26 np0005541914.localdomain sudo[238650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kajpkhisibacmgmdcvhlmiggkulbuhnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668185.9031549-1802-181690434975877/AnsiballZ_stat.py
Dec 02 09:36:26 np0005541914.localdomain sudo[238650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:26 np0005541914.localdomain python3.9[238652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:36:26 np0005541914.localdomain sudo[238650]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:26 np0005541914.localdomain sudo[238738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ydscphsakvebtlkmbcceuhoxxecucrsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668185.9031549-1802-181690434975877/AnsiballZ_copy.py
Dec 02 09:36:26 np0005541914.localdomain sudo[238738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:26 np0005541914.localdomain python3.9[238740]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668185.9031549-1802-181690434975877/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:36:26 np0005541914.localdomain sudo[238738]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47558 DF PROTO=TCP SPT=59252 DPT=9101 SEQ=335826367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548F0E30000000001030307) 
Dec 02 09:36:27 np0005541914.localdomain sudo[238848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-csweenfojlmwiwtguljznrkzfgxzdfbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668187.2806787-1853-36509464146105/AnsiballZ_container_config_data.py
Dec 02 09:36:27 np0005541914.localdomain sudo[238848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9458 DF PROTO=TCP SPT=41292 DPT=9105 SEQ=4090967844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD548F3230000000001030307) 
Dec 02 09:36:27 np0005541914.localdomain python3.9[238850]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec 02 09:36:27 np0005541914.localdomain sudo[238848]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:28 np0005541914.localdomain sudo[238958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-svubvqupituntntvmpcegpdguvlcrdwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668188.0180073-1879-195678225754270/AnsiballZ_container_config_hash.py
Dec 02 09:36:28 np0005541914.localdomain sudo[238958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:28 np0005541914.localdomain python3.9[238960]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:36:28 np0005541914.localdomain sudo[238958]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:29 np0005541914.localdomain sudo[239068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ozkrxllwvrwwbodgmxbrauqdsrsabczo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764668188.8456857-1910-2117311603926/AnsiballZ_edpm_container_manage.py
Dec 02 09:36:29 np0005541914.localdomain sudo[239068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:29 np0005541914.localdomain python3[239070]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:36:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44083 DF PROTO=TCP SPT=33812 DPT=9102 SEQ=3103970790 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54901230000000001030307) 
Dec 02 09:36:32 np0005541914.localdomain podman[239084]: 2025-12-02 09:36:29.590020268 +0000 UTC m=+0.032738663 image pull  quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Dec 02 09:36:32 np0005541914.localdomain podman[239157]: 
Dec 02 09:36:32 np0005541914.localdomain podman[239157]: 2025-12-02 09:36:32.289421712 +0000 UTC m=+0.083897157 container create 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 09:36:32 np0005541914.localdomain podman[239157]: 2025-12-02 09:36:32.251874541 +0000 UTC m=+0.046350006 image pull  quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Dec 02 09:36:32 np0005541914.localdomain python3[239070]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Dec 02 09:36:32 np0005541914.localdomain sudo[239068]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:33 np0005541914.localdomain sudo[239303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izeeopifegmqvmlcjfavqiwtifpmyssk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668193.3857064-1935-210151817388650/AnsiballZ_stat.py
Dec 02 09:36:33 np0005541914.localdomain sudo[239303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15796 DF PROTO=TCP SPT=41480 DPT=9882 SEQ=3523525307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5490A210000000001030307) 
Dec 02 09:36:33 np0005541914.localdomain python3.9[239305]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:36:33 np0005541914.localdomain sudo[239303]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:36:34 np0005541914.localdomain podman[239325]: 2025-12-02 09:36:34.079872856 +0000 UTC m=+0.081306756 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 09:36:34 np0005541914.localdomain podman[239325]: 2025-12-02 09:36:34.090350021 +0000 UTC m=+0.091783921 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:36:34 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:36:34 np0005541914.localdomain sudo[239433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nvimqqyprelkxvgzgupcbnrdqyovsndi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668194.1460118-1960-182968193687495/AnsiballZ_file.py
Dec 02 09:36:34 np0005541914.localdomain sudo[239433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:34 np0005541914.localdomain python3.9[239435]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:36:34 np0005541914.localdomain sudo[239433]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:35 np0005541914.localdomain sudo[239542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-imkurppqxmffwxrfmpnlmicqwgygnhvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668194.7149575-1960-175626787906577/AnsiballZ_copy.py
Dec 02 09:36:35 np0005541914.localdomain sudo[239542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:35 np0005541914.localdomain python3.9[239544]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668194.7149575-1960-175626787906577/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:36:35 np0005541914.localdomain sudo[239542]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:35 np0005541914.localdomain sudo[239597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xkivvihfcfxgdduxretsfiwrnjykeauf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668194.7149575-1960-175626787906577/AnsiballZ_systemd.py
Dec 02 09:36:35 np0005541914.localdomain sudo[239597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:35 np0005541914.localdomain python3.9[239599]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:36:35 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:36:36 np0005541914.localdomain systemd-rc-local-generator[239624]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:36:36 np0005541914.localdomain systemd-sysv-generator[239627]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:36:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:36:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:36 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:36 np0005541914.localdomain sudo[239597]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:36 np0005541914.localdomain sudo[239688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cyfumaurdipgaughjfnddkbcjuukumcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668194.7149575-1960-175626787906577/AnsiballZ_systemd.py
Dec 02 09:36:36 np0005541914.localdomain sudo[239688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15798 DF PROTO=TCP SPT=41480 DPT=9882 SEQ=3523525307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54916220000000001030307) 
Dec 02 09:36:36 np0005541914.localdomain python3.9[239690]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:36:36 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:36:37 np0005541914.localdomain systemd-sysv-generator[239719]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:36:37 np0005541914.localdomain systemd-rc-local-generator[239715]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: Starting podman_exporter container...
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:36:37 np0005541914.localdomain podman[239731]: 2025-12-02 09:36:37.419721059 +0000 UTC m=+0.146922005 container init 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:36:37 np0005541914.localdomain podman_exporter[239746]: ts=2025-12-02T09:36:37.440Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 02 09:36:37 np0005541914.localdomain podman_exporter[239746]: ts=2025-12-02T09:36:37.440Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 02 09:36:37 np0005541914.localdomain podman_exporter[239746]: ts=2025-12-02T09:36:37.440Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 02 09:36:37 np0005541914.localdomain podman_exporter[239746]: ts=2025-12-02T09:36:37.440Z caller=handler.go:105 level=info collector=container
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:36:37 np0005541914.localdomain podman[239731]: 2025-12-02 09:36:37.451308317 +0000 UTC m=+0.178509273 container start 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:36:37 np0005541914.localdomain podman[239731]: podman_exporter
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: Starting Podman API Service...
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: Started podman_exporter container.
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: Started Podman API Service.
Dec 02 09:36:37 np0005541914.localdomain podman[239757]: time="2025-12-02T09:36:37Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec 02 09:36:37 np0005541914.localdomain podman[239757]: time="2025-12-02T09:36:37Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec 02 09:36:37 np0005541914.localdomain sudo[239688]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:37 np0005541914.localdomain podman[239757]: time="2025-12-02T09:36:37Z" level=info msg="Setting parallel job count to 25"
Dec 02 09:36:37 np0005541914.localdomain podman[239757]: time="2025-12-02T09:36:37Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec 02 09:36:37 np0005541914.localdomain podman[239757]: time="2025-12-02T09:36:37Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"/run/podman/podman.sock\""
Dec 02 09:36:37 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:36:37 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 02 09:36:37 np0005541914.localdomain podman[239757]: time="2025-12-02T09:36:37Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:36:37 np0005541914.localdomain podman[239756]: 2025-12-02 09:36:37.527671889 +0000 UTC m=+0.069803940 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:36:37 np0005541914.localdomain podman[239756]: 2025-12-02 09:36:37.537235675 +0000 UTC m=+0.079367736 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:36:37 np0005541914.localdomain podman[239756]: unhealthy
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:36:37 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 02 09:36:37 np0005541914.localdomain sudo[239898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cfynzasygqxyzydnkwifspurmsvkesyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668197.6819527-2033-55962073790515/AnsiballZ_systemd.py
Dec 02 09:36:37 np0005541914.localdomain sudo[239898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:38 np0005541914.localdomain python3.9[239900]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:36:38 np0005541914.localdomain systemd[1]: Stopping podman_exporter container...
Dec 02 09:36:38 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:36:37 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 2790 "" "Go-http-client/1.1"
Dec 02 09:36:38 np0005541914.localdomain systemd[1]: libpod-8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.scope: Deactivated successfully.
Dec 02 09:36:38 np0005541914.localdomain podman[239904]: 2025-12-02 09:36:38.283949004 +0000 UTC m=+0.054961922 container died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 09:36:38 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.timer: Deactivated successfully.
Dec 02 09:36:38 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:36:38 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0-userdata-shm.mount: Deactivated successfully.
Dec 02 09:36:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:36:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:36:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27297 DF PROTO=TCP SPT=36326 DPT=9100 SEQ=1793506217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54923220000000001030307) 
Dec 02 09:36:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:36:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-b567875599537fe8ddd2e294c4bc2f350557061ef7c40059b2f561379ea2a798-merged.mount: Deactivated successfully.
Dec 02 09:36:40 np0005541914.localdomain podman[239904]: 2025-12-02 09:36:40.587275984 +0000 UTC m=+2.358288892 container cleanup 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 09:36:40 np0005541914.localdomain podman[239904]: podman_exporter
Dec 02 09:36:40 np0005541914.localdomain podman[239918]: 2025-12-02 09:36:40.599077049 +0000 UTC m=+2.307819821 container cleanup 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:36:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:36:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:36:42 np0005541914.localdomain sshd[239931]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:36:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:36:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:36:42 np0005541914.localdomain systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 02 09:36:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:36:42 np0005541914.localdomain podman[239939]: 2025-12-02 09:36:42.862143294 +0000 UTC m=+0.077530849 container cleanup 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:36:42 np0005541914.localdomain podman[239939]: podman_exporter
Dec 02 09:36:42 np0005541914.localdomain podman[239933]: 2025-12-02 09:36:42.842640021 +0000 UTC m=+0.091820712 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 02 09:36:42 np0005541914.localdomain podman[239933]: 2025-12-02 09:36:42.922796621 +0000 UTC m=+0.171977262 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 02 09:36:43 np0005541914.localdomain sshd[239931]: Invalid user prueba from 34.78.29.97 port 58554
Dec 02 09:36:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61735 DF PROTO=TCP SPT=46410 DPT=9105 SEQ=1950222307 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5492F230000000001030307) 
Dec 02 09:36:43 np0005541914.localdomain sshd[239931]: Received disconnect from 34.78.29.97 port 58554:11: Bye Bye [preauth]
Dec 02 09:36:43 np0005541914.localdomain sshd[239931]: Disconnected from invalid user prueba 34.78.29.97 port 58554 [preauth]
Dec 02 09:36:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:36:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:36:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:36:44 np0005541914.localdomain systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec 02 09:36:44 np0005541914.localdomain systemd[1]: Stopped podman_exporter container.
Dec 02 09:36:44 np0005541914.localdomain systemd[1]: Starting podman_exporter container...
Dec 02 09:36:44 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:36:44 np0005541914.localdomain podman[239942]: 2025-12-02 09:36:44.071722641 +0000 UTC m=+1.281932916 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:36:44 np0005541914.localdomain podman[239942]: 2025-12-02 09:36:44.110724578 +0000 UTC m=+1.320934843 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:36:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:36:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:36:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:36:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:36:44 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:36:44 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:36:44 np0005541914.localdomain podman[239999]: 2025-12-02 09:36:44.76255347 +0000 UTC m=+0.142692925 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 09:36:44 np0005541914.localdomain podman[239999]: 2025-12-02 09:36:44.797722089 +0000 UTC m=+0.177861484 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 02 09:36:44 np0005541914.localdomain podman[239999]: unhealthy
Dec 02 09:36:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:36:44 np0005541914.localdomain podman[239974]: 2025-12-02 09:36:44.844099954 +0000 UTC m=+0.800519324 container init 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 09:36:44 np0005541914.localdomain podman_exporter[240012]: ts=2025-12-02T09:36:44.859Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec 02 09:36:44 np0005541914.localdomain podman_exporter[240012]: ts=2025-12-02T09:36:44.859Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec 02 09:36:44 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:36:44 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec 02 09:36:44 np0005541914.localdomain podman[239757]: time="2025-12-02T09:36:44Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:36:44 np0005541914.localdomain podman_exporter[240012]: ts=2025-12-02T09:36:44.860Z caller=handler.go:94 level=info msg="enabled collectors"
Dec 02 09:36:44 np0005541914.localdomain podman_exporter[240012]: ts=2025-12-02T09:36:44.860Z caller=handler.go:105 level=info collector=container
Dec 02 09:36:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:36:44 np0005541914.localdomain podman[239974]: 2025-12-02 09:36:44.925914014 +0000 UTC m=+0.882333394 container start 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:36:44 np0005541914.localdomain podman[239974]: podman_exporter
Dec 02 09:36:45 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:36:45 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 02 09:36:45 np0005541914.localdomain systemd[1]: Started podman_exporter container.
Dec 02 09:36:45 np0005541914.localdomain podman[240029]: 2025-12-02 09:36:45.098899185 +0000 UTC m=+0.217562991 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:36:45 np0005541914.localdomain sudo[239898]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:45 np0005541914.localdomain podman[240029]: 2025-12-02 09:36:45.108789651 +0000 UTC m=+0.227453467 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:36:45 np0005541914.localdomain podman[240029]: unhealthy
Dec 02 09:36:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24324 DF PROTO=TCP SPT=43650 DPT=9102 SEQ=863891639 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5493A7E0000000001030307) 
Dec 02 09:36:47 np0005541914.localdomain sudo[240160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pptvzlckcttcigmnvvemrhxberpcqoim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668206.748281-2057-170889118739154/AnsiballZ_stat.py
Dec 02 09:36:47 np0005541914.localdomain sudo[240160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:47 np0005541914.localdomain python3.9[240162]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:36:47 np0005541914.localdomain sudo[240160]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:36:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78-merged.mount: Deactivated successfully.
Dec 02 09:36:47 np0005541914.localdomain sudo[240248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-erbiispqpfbszhcdingflnvyggbuhpvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668206.748281-2057-170889118739154/AnsiballZ_copy.py
Dec 02 09:36:47 np0005541914.localdomain sudo[240248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78-merged.mount: Deactivated successfully.
Dec 02 09:36:47 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:36:47 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 02 09:36:47 np0005541914.localdomain python3.9[240250]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668206.748281-2057-170889118739154/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:36:47 np0005541914.localdomain sudo[240248]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:48 np0005541914.localdomain sudo[240358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uvtpfcdkbywzhdfjfgspxlkbubsfiftw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668208.2454555-2108-191384783579407/AnsiballZ_container_config_data.py
Dec 02 09:36:48 np0005541914.localdomain sudo[240358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:48 np0005541914.localdomain python3.9[240360]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec 02 09:36:48 np0005541914.localdomain sudo[240358]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24326 DF PROTO=TCP SPT=43650 DPT=9102 SEQ=863891639 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54946A20000000001030307) 
Dec 02 09:36:49 np0005541914.localdomain sudo[240468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxitfnlpqkmfqkhgsiuzzvwblawcveuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668209.0004535-2135-26815650837560/AnsiballZ_container_config_hash.py
Dec 02 09:36:49 np0005541914.localdomain sudo[240468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:49 np0005541914.localdomain python3.9[240470]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:36:49 np0005541914.localdomain sudo[240468]: pam_unix(sudo:session): session closed for user root
Dec 02 09:36:50 np0005541914.localdomain sudo[240578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-osdnzxiejkdooyhfvfbsfgebkvojylwb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764668209.8313231-2164-63705934219450/AnsiballZ_edpm_container_manage.py
Dec 02 09:36:50 np0005541914.localdomain sudo[240578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:36:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:36:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:36:50 np0005541914.localdomain python3[240580]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:36:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:36:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:36:50 np0005541914.localdomain podman[239757]: time="2025-12-02T09:36:50Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged: invalid argument"
Dec 02 09:36:50 np0005541914.localdomain podman[239757]: time="2025-12-02T09:36:50Z" level=error msg="Getting root fs size for \"04715a69146858c8339bc8101e67a39c455c4d6a76b51ebad6e24f8a290e5fbf\": getting diffsize of layer \"f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958\" and its parent \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\": creating overlay mount to /var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/RTTUT65UCTAX2SKDGZLIU47BZC:/var/lib/containers/storage/overlay/l/RHRLV6JCIALNITZ4KR55MMCPF5:/var/lib/containers/storage/overlay/l/LK2RBR3EFG2ZZO4YQKJAOD6X6T,upperdir=/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/diff,workdir=/var/lib/containers/storage/overlay/f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958/work,nodev,metacopy=on\": no such file or directory"
Dec 02 09:36:50 np0005541914.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:36:50 np0005541914.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:36:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47560 DF PROTO=TCP SPT=59252 DPT=9101 SEQ=335826367 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54951220000000001030307) 
Dec 02 09:36:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:36:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:36:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:36:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:36:53 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:36:53 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:36:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-fe6b5bb5c3faac5bc7b25f16619728c4e2a2d4a71d222c2e5e52b063609b5512-merged.mount: Deactivated successfully.
Dec 02 09:36:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:36:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:36:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:36:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:36:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:36:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:36:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:36:56 np0005541914.localdomain podman[240609]: 2025-12-02 09:36:56.072084387 +0000 UTC m=+0.073413625 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:36:56 np0005541914.localdomain podman[240609]: 2025-12-02 09:36:56.083422493 +0000 UTC m=+0.084751731 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:36:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:36:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:36:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35409 DF PROTO=TCP SPT=41556 DPT=9101 SEQ=2170108911 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54966230000000001030307) 
Dec 02 09:36:57 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:36:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:36:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:36:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:36:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-6dfa5ad77b1341d3196a32aea0408575f7ecd87125bb33cfdce442fdca4faf78-merged.mount: Deactivated successfully.
Dec 02 09:36:59 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:36:59 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:00 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:00 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:01 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24328 DF PROTO=TCP SPT=43650 DPT=9102 SEQ=863891639 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54977220000000001030307) 
Dec 02 09:37:01 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:37:01 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:02 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:37:02 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:02 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:02 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:37:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:37:03.145 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:37:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:37:03.146 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:37:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:37:03.146 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:37:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14596 DF PROTO=TCP SPT=48604 DPT=9882 SEQ=1361442821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5497F510000000001030307) 
Dec 02 09:37:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:37:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:37:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:04 np0005541914.localdomain podman[240645]: 2025-12-02 09:37:04.360243986 +0000 UTC m=+0.076530579 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true)
Dec 02 09:37:04 np0005541914.localdomain podman[240645]: 2025-12-02 09:37:04.373972795 +0000 UTC m=+0.090259418 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 02 09:37:04 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14597 DF PROTO=TCP SPT=48604 DPT=9882 SEQ=1361442821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54983620000000001030307) 
Dec 02 09:37:04 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:04 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:04 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:37:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:04.923 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:37:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:04.923 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:37:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:04.943 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:37:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:04.944 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:37:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:04.944 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:37:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:04.945 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:37:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:04.945 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:37:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:04.945 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:37:05 np0005541914.localdomain systemd[1]: tmp-crun.NrHf4q.mount: Deactivated successfully.
Dec 02 09:37:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-b31a729f52d6f9ece82ff86db83ec0c0420ae47f49a38ed5b1f2bb83a229399e-merged.mount: Deactivated successfully.
Dec 02 09:37:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:05.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:37:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:05.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:37:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:05.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:37:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:05.671 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:37:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:05.671 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:37:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:05.672 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:37:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:05.707 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:37:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:05.708 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:37:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:05.708 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:37:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:05.708 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:37:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:05.709 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:37:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:06.184 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:37:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:06.391 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:37:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:06.392 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13212MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:37:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:06.392 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:37:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:06.392 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:37:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:06.468 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:37:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:06.469 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:37:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:06.485 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:37:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14598 DF PROTO=TCP SPT=48604 DPT=9882 SEQ=1361442821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5498B620000000001030307) 
Dec 02 09:37:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:37:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:06.938 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:37:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:06.943 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:37:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:06.959 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:37:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:06.962 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:37:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:37:06.962 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:37:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:07 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:37:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:09 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:09 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:09 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-fe6b5bb5c3faac5bc7b25f16619728c4e2a2d4a71d222c2e5e52b063609b5512-merged.mount: Deactivated successfully.
Dec 02 09:37:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19249 DF PROTO=TCP SPT=42758 DPT=9100 SEQ=1761696839 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54999220000000001030307) 
Dec 02 09:37:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:37:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:37:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6600.1 total, 600.0 interval
                                                          Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 09:37:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:37:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:37:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30359 DF PROTO=TCP SPT=39692 DPT=9105 SEQ=824406069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549A4620000000001030307) 
Dec 02 09:37:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:37:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-54218a875306d5e9e02be164dfc59f569c03cec4fa589e4979e72cb65e05c169-merged.mount: Deactivated successfully.
Dec 02 09:37:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:37:14 np0005541914.localdomain podman[240719]: 2025-12-02 09:37:14.212057246 +0000 UTC m=+0.077096887 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 02 09:37:14 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:14 np0005541914.localdomain podman[240719]: 2025-12-02 09:37:14.250829501 +0000 UTC m=+0.115869132 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 02 09:37:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:37:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:37:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:37:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6600.2 total, 600.0 interval
                                                          Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 09:37:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2810 DF PROTO=TCP SPT=52684 DPT=9102 SEQ=1994394866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549AFAE0000000001030307) 
Dec 02 09:37:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 02 09:37:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 02 09:37:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 02 09:37:16 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:37:16 np0005541914.localdomain podman[240736]: 2025-12-02 09:37:16.553587282 +0000 UTC m=+1.801506999 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Dec 02 09:37:16 np0005541914.localdomain podman[240747]: 2025-12-02 09:37:16.620330131 +0000 UTC m=+1.495154579 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 09:37:16 np0005541914.localdomain podman[240736]: 2025-12-02 09:37:16.652861794 +0000 UTC m=+1.900781521 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 02 09:37:16 np0005541914.localdomain podman[240747]: 2025-12-02 09:37:16.703441079 +0000 UTC m=+1.578265517 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 09:37:16 np0005541914.localdomain podman[240747]: unhealthy
Dec 02 09:37:17 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:37:17 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:17 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:37:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:37:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 02 09:37:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 02 09:37:18 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:37:18 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:37:18 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 02 09:37:18 np0005541914.localdomain podman[240595]: 2025-12-02 09:36:50.801435865 +0000 UTC m=+0.028708429 image pull  quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Dec 02 09:37:18 np0005541914.localdomain podman[240787]: 2025-12-02 09:37:18.380444693 +0000 UTC m=+0.382881738 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:37:18 np0005541914.localdomain podman[240787]: 2025-12-02 09:37:18.419811776 +0000 UTC m=+0.422248761 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:37:18 np0005541914.localdomain podman[240787]: unhealthy
Dec 02 09:37:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14600 DF PROTO=TCP SPT=48604 DPT=9882 SEQ=1361442821 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549BB220000000001030307) 
Dec 02 09:37:19 np0005541914.localdomain sshd[240810]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:37:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:37:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:37:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:37:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:37:19 np0005541914.localdomain sshd[240810]: Invalid user admin from 45.148.10.240 port 44018
Dec 02 09:37:19 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:37:19 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 02 09:37:19 np0005541914.localdomain sshd[240810]: Connection closed by invalid user admin 45.148.10.240 port 44018 [preauth]
Dec 02 09:37:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:37:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-b31a729f52d6f9ece82ff86db83ec0c0420ae47f49a38ed5b1f2bb83a229399e-merged.mount: Deactivated successfully.
Dec 02 09:37:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-b31a729f52d6f9ece82ff86db83ec0c0420ae47f49a38ed5b1f2bb83a229399e-merged.mount: Deactivated successfully.
Dec 02 09:37:22 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35411 DF PROTO=TCP SPT=41556 DPT=9101 SEQ=2170108911 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549C7220000000001030307) 
Dec 02 09:37:22 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:37:23 np0005541914.localdomain sudo[240848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:37:23 np0005541914.localdomain sudo[240848]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:37:23 np0005541914.localdomain sudo[240848]: pam_unix(sudo:session): session closed for user root
Dec 02 09:37:23 np0005541914.localdomain sudo[240866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:37:23 np0005541914.localdomain sudo[240866]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:37:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:37:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:37:24 np0005541914.localdomain podman[240836]: 2025-12-02 09:37:22.109528469 +0000 UTC m=+0.044936473 image pull  quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Dec 02 09:37:24 np0005541914.localdomain sudo[240866]: pam_unix(sudo:session): session closed for user root
Dec 02 09:37:26 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:26 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:26 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:26 np0005541914.localdomain sudo[240915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:37:26 np0005541914.localdomain sudo[240915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:37:26 np0005541914.localdomain sudo[240915]: pam_unix(sudo:session): session closed for user root
Dec 02 09:37:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4292 DF PROTO=TCP SPT=45518 DPT=9101 SEQ=3058026625 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549DB230000000001030307) 
Dec 02 09:37:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 02 09:37:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6-merged.mount: Deactivated successfully.
Dec 02 09:37:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:37:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:27 np0005541914.localdomain podman[240933]: 2025-12-02 09:37:27.610699724 +0000 UTC m=+0.086575996 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:37:27 np0005541914.localdomain podman[240933]: 2025-12-02 09:37:27.626942241 +0000 UTC m=+0.102818583 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:37:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30362 DF PROTO=TCP SPT=39692 DPT=9105 SEQ=824406069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549DD220000000001030307) 
Dec 02 09:37:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 02 09:37:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 02 09:37:28 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:37:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:37:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 02 09:37:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 02 09:37:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 02 09:37:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2814 DF PROTO=TCP SPT=52684 DPT=9102 SEQ=1994394866 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549EB220000000001030307) 
Dec 02 09:37:32 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:37:32 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-54218a875306d5e9e02be164dfc59f569c03cec4fa589e4979e72cb65e05c169-merged.mount: Deactivated successfully.
Dec 02 09:37:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 02 09:37:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4612 DF PROTO=TCP SPT=37402 DPT=9882 SEQ=4210985897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD549F4810000000001030307) 
Dec 02 09:37:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:37:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:37:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:37:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 02 09:37:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:37:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 02 09:37:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:37:35 np0005541914.localdomain podman[240954]: 2025-12-02 09:37:35.091083535 +0000 UTC m=+0.099536403 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:37:35 np0005541914.localdomain podman[240954]: 2025-12-02 09:37:35.105879227 +0000 UTC m=+0.114332055 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 02 09:37:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:37:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:37:36 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 02 09:37:36 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 02 09:37:36 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 02 09:37:36 np0005541914.localdomain podman[240836]: 
Dec 02 09:37:36 np0005541914.localdomain podman[240836]: 2025-12-02 09:37:36.50460889 +0000 UTC m=+14.440016874 container create bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, name=ubi9-minimal, maintainer=Red Hat, Inc., release=1755695350)
Dec 02 09:37:36 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:37:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4614 DF PROTO=TCP SPT=37402 DPT=9882 SEQ=4210985897 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A00A20000000001030307) 
Dec 02 09:37:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:37:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:37:37 np0005541914.localdomain python3[240580]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7
Dec 02 09:37:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 02 09:37:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:37:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:37:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:37:38 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:37:38 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:38 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:37:38 np0005541914.localdomain sudo[240578]: pam_unix(sudo:session): session closed for user root
Dec 02 09:37:38 np0005541914.localdomain sudo[241105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vywwucyfaxmgeksjaqzakjbgpbnnfsnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668258.7291503-2188-208274323469594/AnsiballZ_stat.py
Dec 02 09:37:38 np0005541914.localdomain sudo[241105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:37:39 np0005541914.localdomain python3.9[241107]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:37:39 np0005541914.localdomain sudo[241105]: pam_unix(sudo:session): session closed for user root
Dec 02 09:37:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 02 09:37:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c13e199db7335dd51d53d563216fcc1a3ed75eba14190a583a84b8f73b6c9d42-merged.mount: Deactivated successfully.
Dec 02 09:37:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:37:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 02 09:37:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 02 09:37:40 np0005541914.localdomain sudo[241217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jzrnmoxkvolcorffmfonjstardtzjixm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668260.0237772-2215-204210319539006/AnsiballZ_file.py
Dec 02 09:37:40 np0005541914.localdomain sudo[241217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:37:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 02 09:37:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6-merged.mount: Deactivated successfully.
Dec 02 09:37:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:37:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52399 DF PROTO=TCP SPT=45552 DPT=9100 SEQ=4105249519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A0F220000000001030307) 
Dec 02 09:37:40 np0005541914.localdomain python3.9[241219]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:37:40 np0005541914.localdomain sudo[241217]: pam_unix(sudo:session): session closed for user root
Dec 02 09:37:40 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:40 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:40 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:40 np0005541914.localdomain sudo[241326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aryqhaglmcqbfemeqaycoqkqpafrjniw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668260.571447-2215-121249282091281/AnsiballZ_copy.py
Dec 02 09:37:40 np0005541914.localdomain sudo[241326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:37:41 np0005541914.localdomain python3.9[241328]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668260.571447-2215-121249282091281/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:37:41 np0005541914.localdomain sudo[241326]: pam_unix(sudo:session): session closed for user root
Dec 02 09:37:41 np0005541914.localdomain sudo[241381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ithjzludpnnvwxvqirtgflbygxkrrfsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668260.571447-2215-121249282091281/AnsiballZ_systemd.py
Dec 02 09:37:41 np0005541914.localdomain sudo[241381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:37:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:37:41 np0005541914.localdomain python3.9[241383]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:37:41 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:37:41 np0005541914.localdomain systemd-rc-local-generator[241407]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:37:41 np0005541914.localdomain systemd-sysv-generator[241410]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:37:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:37:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 02 09:37:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 02 09:37:42 np0005541914.localdomain sudo[241381]: pam_unix(sudo:session): session closed for user root
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 02 09:37:42 np0005541914.localdomain sudo[241473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dcurbijqrnruxhmnyolcbmovknegkwhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668260.571447-2215-121249282091281/AnsiballZ_systemd.py
Dec 02 09:37:42 np0005541914.localdomain sudo[241473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 02 09:37:42 np0005541914.localdomain python3.9[241475]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:37:42 np0005541914.localdomain systemd-sysv-generator[241504]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:37:42 np0005541914.localdomain systemd-rc-local-generator[241500]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:37:42 np0005541914.localdomain systemd[1]: Starting openstack_network_exporter container...
Dec 02 09:37:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53188 DF PROTO=TCP SPT=55894 DPT=9105 SEQ=311588103 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A19A20000000001030307) 
Dec 02 09:37:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 02 09:37:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 02 09:37:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 02 09:37:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c229f79c70cf5be9a27371d03399d655b2b0280f5e9159c8f223d964c49a7e53-merged.mount: Deactivated successfully.
Dec 02 09:37:44 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:37:44 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7a62300b732c32f6efb3e00bec43152396765f8f0add798fb8ed1cb89b7154/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 02 09:37:44 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7a62300b732c32f6efb3e00bec43152396765f8f0add798fb8ed1cb89b7154/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 02 09:37:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:37:44 np0005541914.localdomain podman[241516]: 2025-12-02 09:37:44.689959987 +0000 UTC m=+1.733630404 container init bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9-minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 09:37:44 np0005541914.localdomain openstack_network_exporter[241530]: INFO    09:37:44 main.go:48: registering *bridge.Collector
Dec 02 09:37:44 np0005541914.localdomain openstack_network_exporter[241530]: INFO    09:37:44 main.go:48: registering *coverage.Collector
Dec 02 09:37:44 np0005541914.localdomain openstack_network_exporter[241530]: INFO    09:37:44 main.go:48: registering *datapath.Collector
Dec 02 09:37:44 np0005541914.localdomain openstack_network_exporter[241530]: INFO    09:37:44 main.go:48: registering *iface.Collector
Dec 02 09:37:44 np0005541914.localdomain openstack_network_exporter[241530]: INFO    09:37:44 main.go:48: registering *memory.Collector
Dec 02 09:37:44 np0005541914.localdomain openstack_network_exporter[241530]: INFO    09:37:44 main.go:48: registering *ovnnorthd.Collector
Dec 02 09:37:44 np0005541914.localdomain openstack_network_exporter[241530]: INFO    09:37:44 main.go:48: registering *ovn.Collector
Dec 02 09:37:44 np0005541914.localdomain openstack_network_exporter[241530]: INFO    09:37:44 main.go:48: registering *ovsdbserver.Collector
Dec 02 09:37:44 np0005541914.localdomain openstack_network_exporter[241530]: INFO    09:37:44 main.go:48: registering *pmd_perf.Collector
Dec 02 09:37:44 np0005541914.localdomain openstack_network_exporter[241530]: INFO    09:37:44 main.go:48: registering *pmd_rxq.Collector
Dec 02 09:37:44 np0005541914.localdomain openstack_network_exporter[241530]: INFO    09:37:44 main.go:48: registering *vswitch.Collector
Dec 02 09:37:44 np0005541914.localdomain openstack_network_exporter[241530]: NOTICE  09:37:44 main.go:82: listening on http://:9105/metrics
Dec 02 09:37:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:37:44 np0005541914.localdomain podman[241516]: 2025-12-02 09:37:44.727865065 +0000 UTC m=+1.771535482 container start bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, config_id=edpm, name=ubi9-minimal, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible)
Dec 02 09:37:44 np0005541914.localdomain podman[241516]: openstack_network_exporter
Dec 02 09:37:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17613 DF PROTO=TCP SPT=39276 DPT=9102 SEQ=4230449605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A24DE0000000001030307) 
Dec 02 09:37:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:37:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 02 09:37:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-2bd01f86bd06174222a9d55fe041ff06edb278c28aedc59c96738054f88e995d-merged.mount: Deactivated successfully.
Dec 02 09:37:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:37:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:37:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:37:47 np0005541914.localdomain systemd[1]: Started openstack_network_exporter container.
Dec 02 09:37:47 np0005541914.localdomain podman[241540]: 2025-12-02 09:37:47.172855522 +0000 UTC m=+2.438736477 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:37:47 np0005541914.localdomain sudo[241473]: pam_unix(sudo:session): session closed for user root
Dec 02 09:37:47 np0005541914.localdomain podman[241540]: 2025-12-02 09:37:47.25332922 +0000 UTC m=+2.519210145 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, managed_by=edpm_ansible)
Dec 02 09:37:47 np0005541914.localdomain sudo[241681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vugblefahxftqqgrfklcwkhegjauwrdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668267.3971002-2288-60496788312366/AnsiballZ_systemd.py
Dec 02 09:37:47 np0005541914.localdomain sudo[241681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:37:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:37:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:37:47 np0005541914.localdomain python3.9[241683]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:37:48 np0005541914.localdomain systemd[1]: Stopping openstack_network_exporter container...
Dec 02 09:37:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:37:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:37:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:37:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:48 np0005541914.localdomain sshd[241720]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:37:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17615 DF PROTO=TCP SPT=39276 DPT=9102 SEQ=4230449605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A30E20000000001030307) 
Dec 02 09:37:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:37:49 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:37:49 np0005541914.localdomain podman[241698]: 2025-12-02 09:37:49.245370999 +0000 UTC m=+0.410008708 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm)
Dec 02 09:37:49 np0005541914.localdomain systemd[1]: libpod-bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.scope: Deactivated successfully.
Dec 02 09:37:49 np0005541914.localdomain podman[241698]: 2025-12-02 09:37:49.282370759 +0000 UTC m=+0.447008418 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:37:49 np0005541914.localdomain podman[241698]: unhealthy
Dec 02 09:37:49 np0005541914.localdomain podman[241699]: 2025-12-02 09:37:49.338995919 +0000 UTC m=+0.500687378 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:37:49 np0005541914.localdomain podman[241687]: 2025-12-02 09:37:49.342609199 +0000 UTC m=+1.313691605 container died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=)
Dec 02 09:37:49 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.timer: Deactivated successfully.
Dec 02 09:37:49 np0005541914.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:37:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:37:49 np0005541914.localdomain podman[241699]: 2025-12-02 09:37:49.397812686 +0000 UTC m=+0.559504175 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 02 09:37:49 np0005541914.localdomain sshd[241720]: Invalid user rancher from 34.78.29.97 port 33504
Dec 02 09:37:49 np0005541914.localdomain sshd[241720]: Received disconnect from 34.78.29.97 port 33504:11: Bye Bye [preauth]
Dec 02 09:37:49 np0005541914.localdomain sshd[241720]: Disconnected from invalid user rancher 34.78.29.97 port 33504 [preauth]
Dec 02 09:37:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:37:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be-userdata-shm.mount: Deactivated successfully.
Dec 02 09:37:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:37:50 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:37:50 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 02 09:37:50 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:37:50 np0005541914.localdomain podman[241754]: 2025-12-02 09:37:50.460765459 +0000 UTC m=+1.087213456 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:37:50 np0005541914.localdomain podman[241687]: 2025-12-02 09:37:50.46896268 +0000 UTC m=+2.440045106 container cleanup bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7)
Dec 02 09:37:50 np0005541914.localdomain podman[241687]: openstack_network_exporter
Dec 02 09:37:50 np0005541914.localdomain podman[241732]: 2025-12-02 09:37:50.479056359 +0000 UTC m=+1.186297944 container cleanup bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 09:37:50 np0005541914.localdomain podman[241754]: 2025-12-02 09:37:50.493023185 +0000 UTC m=+1.119471202 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:37:50 np0005541914.localdomain podman[241754]: unhealthy
Dec 02 09:37:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-9f7a62300b732c32f6efb3e00bec43152396765f8f0add798fb8ed1cb89b7154-merged.mount: Deactivated successfully.
Dec 02 09:37:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:37:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:51 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:51 np0005541914.localdomain systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 02 09:37:51 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:37:51 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 02 09:37:51 np0005541914.localdomain podman[241783]: 2025-12-02 09:37:51.156848246 +0000 UTC m=+0.057724675 container cleanup bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, io.buildah.version=1.33.7, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350)
Dec 02 09:37:51 np0005541914.localdomain podman[241783]: openstack_network_exporter
Dec 02 09:37:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4294 DF PROTO=TCP SPT=45518 DPT=9101 SEQ=3058026625 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A3B220000000001030307) 
Dec 02 09:37:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-70249a3a7715ea2081744d13dd83fad2e62b9b24ab69f2af1c4f45ccd311c7a7-merged.mount: Deactivated successfully.
Dec 02 09:37:52 np0005541914.localdomain systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec 02 09:37:52 np0005541914.localdomain systemd[1]: Stopped openstack_network_exporter container.
Dec 02 09:37:52 np0005541914.localdomain systemd[1]: Starting openstack_network_exporter container...
Dec 02 09:37:52 np0005541914.localdomain podman[241553]: 2025-12-02 09:37:52.777083945 +0000 UTC m=+6.025152594 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3)
Dec 02 09:37:52 np0005541914.localdomain podman[241553]: 2025-12-02 09:37:52.815796108 +0000 UTC m=+6.063864787 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent)
Dec 02 09:37:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 02 09:37:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 02 09:37:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:37:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a-merged.mount: Deactivated successfully.
Dec 02 09:37:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a-merged.mount: Deactivated successfully.
Dec 02 09:37:55 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:55 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:55 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:37:55 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:37:55 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7a62300b732c32f6efb3e00bec43152396765f8f0add798fb8ed1cb89b7154/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 02 09:37:55 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f7a62300b732c32f6efb3e00bec43152396765f8f0add798fb8ed1cb89b7154/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec 02 09:37:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:37:55 np0005541914.localdomain podman[241797]: 2025-12-02 09:37:55.340624843 +0000 UTC m=+2.570477421 container init bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Dec 02 09:37:55 np0005541914.localdomain openstack_network_exporter[241816]: INFO    09:37:55 main.go:48: registering *bridge.Collector
Dec 02 09:37:55 np0005541914.localdomain openstack_network_exporter[241816]: INFO    09:37:55 main.go:48: registering *coverage.Collector
Dec 02 09:37:55 np0005541914.localdomain openstack_network_exporter[241816]: INFO    09:37:55 main.go:48: registering *datapath.Collector
Dec 02 09:37:55 np0005541914.localdomain openstack_network_exporter[241816]: INFO    09:37:55 main.go:48: registering *iface.Collector
Dec 02 09:37:55 np0005541914.localdomain openstack_network_exporter[241816]: INFO    09:37:55 main.go:48: registering *memory.Collector
Dec 02 09:37:55 np0005541914.localdomain openstack_network_exporter[241816]: INFO    09:37:55 main.go:48: registering *ovnnorthd.Collector
Dec 02 09:37:55 np0005541914.localdomain openstack_network_exporter[241816]: INFO    09:37:55 main.go:48: registering *ovn.Collector
Dec 02 09:37:55 np0005541914.localdomain openstack_network_exporter[241816]: INFO    09:37:55 main.go:48: registering *ovsdbserver.Collector
Dec 02 09:37:55 np0005541914.localdomain openstack_network_exporter[241816]: INFO    09:37:55 main.go:48: registering *pmd_perf.Collector
Dec 02 09:37:55 np0005541914.localdomain openstack_network_exporter[241816]: INFO    09:37:55 main.go:48: registering *pmd_rxq.Collector
Dec 02 09:37:55 np0005541914.localdomain openstack_network_exporter[241816]: INFO    09:37:55 main.go:48: registering *vswitch.Collector
Dec 02 09:37:55 np0005541914.localdomain openstack_network_exporter[241816]: NOTICE  09:37:55 main.go:82: listening on http://:9105/metrics
Dec 02 09:37:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:37:55 np0005541914.localdomain podman[241797]: 2025-12-02 09:37:55.374554049 +0000 UTC m=+2.604406637 container start bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-type=git, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Dec 02 09:37:55 np0005541914.localdomain podman[241797]: openstack_network_exporter
Dec 02 09:37:55 np0005541914.localdomain podman[239757]: time="2025-12-02T09:37:55Z" level=error msg="Getting root fs size for \"2412a810b4535cda8993bf8c7a954b3e0996d36e1a8a6596d7e2636ed241549c\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy"
Dec 02 09:37:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:37:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 02 09:37:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 02 09:37:56 np0005541914.localdomain systemd[1]: Started openstack_network_exporter container.
Dec 02 09:37:56 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:56 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:56 np0005541914.localdomain podman[241826]: 2025-12-02 09:37:56.139335004 +0000 UTC m=+0.759460783 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=starting, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, version=9.6, config_id=edpm, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git)
Dec 02 09:37:56 np0005541914.localdomain sudo[241681]: pam_unix(sudo:session): session closed for user root
Dec 02 09:37:56 np0005541914.localdomain podman[241826]: 2025-12-02 09:37:56.184803074 +0000 UTC m=+0.804928793 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 02 09:37:57 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:37:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7545 DF PROTO=TCP SPT=40806 DPT=9101 SEQ=3362580343 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A50630000000001030307) 
Dec 02 09:37:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:37:57 np0005541914.localdomain sudo[241951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bevvrdrmzgbfmyqficiarvvcciabcdzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668277.101998-2312-43701379627882/AnsiballZ_find.py
Dec 02 09:37:57 np0005541914.localdomain sudo[241951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:37:57 np0005541914.localdomain python3.9[241953]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 02 09:37:57 np0005541914.localdomain sudo[241951]: pam_unix(sudo:session): session closed for user root
Dec 02 09:37:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:37:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc-merged.mount: Deactivated successfully.
Dec 02 09:37:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:37:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 02 09:37:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-400c7ba0962a9736ae4730e3c3204c67b2bad8d9266c2a49e5c729fb35c892ee-merged.mount: Deactivated successfully.
Dec 02 09:37:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:37:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:37:59 np0005541914.localdomain systemd[1]: tmp-crun.M69nHR.mount: Deactivated successfully.
Dec 02 09:37:59 np0005541914.localdomain podman[241971]: 2025-12-02 09:37:59.102658732 +0000 UTC m=+0.104763676 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:37:59 np0005541914.localdomain podman[241971]: 2025-12-02 09:37:59.1400229 +0000 UTC m=+0.142127804 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:37:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 02 09:37:59 np0005541914.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:59 np0005541914.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:37:59 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:37:59 np0005541914.localdomain podman[239757]: time="2025-12-02T09:37:59Z" level=error msg="Getting root fs size for \"306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c\": unmounting layer 4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36: replacing mount point \"/var/lib/containers/storage/overlay/4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36/merged\": device or resource busy"
Dec 02 09:37:59 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:01 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 02 09:38:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17617 DF PROTO=TCP SPT=39276 DPT=9102 SEQ=4230449605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A61220000000001030307) 
Dec 02 09:38:02 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:38:02 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 02 09:38:02 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 02 09:38:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:38:03.146 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:38:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:38:03.146 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:38:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:38:03.147 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:38:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:38:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34743 DF PROTO=TCP SPT=58138 DPT=9882 SEQ=1213125736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A69B10000000001030307) 
Dec 02 09:38:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:38:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:38:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34744 DF PROTO=TCP SPT=58138 DPT=9882 SEQ=1213125736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A6DA30000000001030307) 
Dec 02 09:38:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:04.931 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:38:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:04.932 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:38:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:04.932 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:38:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:04.932 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:38:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:04.933 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:38:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:04.933 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:38:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:04.933 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:38:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:05 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:05.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:38:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:05.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:38:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:05.663 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:38:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:05.664 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:38:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:05.664 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:38:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:05.664 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:38:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:05.665 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:38:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:06.088 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:38:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:06.247 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:38:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:06.248 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13206MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:38:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:06.248 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:38:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:06.249 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:38:06 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:06 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:06 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:06 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:06.334 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:38:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:06.334 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:38:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:06.354 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:38:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:38:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:06 np0005541914.localdomain podman[242036]: 2025-12-02 09:38:06.630875159 +0000 UTC m=+0.080516983 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 09:38:06 np0005541914.localdomain podman[242036]: 2025-12-02 09:38:06.664880752 +0000 UTC m=+0.114522576 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 02 09:38:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34745 DF PROTO=TCP SPT=58138 DPT=9882 SEQ=1213125736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A75A30000000001030307) 
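The kernel "DROPPING:" entries are netfilter LOG output; the prefix comes from whatever rule the local firewall installs, so reading them as dropped connection attempts is an assumption about that rule. A minimal sketch of pulling the useful fields out of such a line:

    # Extract source, destination and ports from a netfilter LOG line like the
    # "DROPPING:" entry above (SRC=/DST=/PROTO=/SPT=/DPT= are standard LOG fields).
    import re

    line = ("DROPPING: IN=br-ex OUT= SRC=192.168.122.10 DST=192.168.122.108 "
            "LEN=60 TTL=62 PROTO=TCP SPT=58138 DPT=9882 SYN")

    m = re.search(r"SRC=(\S+) DST=(\S+).*?PROTO=(\S+) SPT=(\d+) DPT=(\d+)", line)
    if m:
        src, dst, proto, spt, dpt = m.groups()
        print(f"{proto} {src}:{spt} -> {dst}:{dpt}")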
Dec 02 09:38:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:06.814 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
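The two nova_compute lines above show the resource tracker shelling out to ceph to size its RBD-backed disk; a minimal sketch of the same query, assuming the openstack keyring is usable from this host and that the JSON carries the usual top-level stats block (total_bytes / total_avail_bytes):

    # Run the same "ceph df" query the resource tracker runs above and print
    # cluster capacity. The "stats" keys below are the usual ones in ceph's
    # JSON output; treat them as an assumption if the ceph release differs.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f"total {stats['total_bytes'] / gib:.1f} GiB, "
          f"avail {stats['total_avail_bytes'] / gib:.1f} GiB")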
Dec 02 09:38:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:06.821 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:38:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:06.838 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
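The inventory in the line above is what placement sizes this provider from; usable capacity per resource class is (total - reserved) * allocation_ratio, which for these numbers works out as follows:

    # Capacity implied by the inventory reported above:
    # usable = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {usable:g}")  # VCPU: 128, MEMORY_MB: 15226, DISK_GB: 41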
Dec 02 09:38:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:06.840 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:38:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:06.841 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
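The lockutils lines bracketing this update report how long the "compute_resources" lock was waited for and held (0.000 s and 0.592 s here); a minimal standard-library sketch of the same measure-around-a-critical-section pattern, not the oslo API itself:

    # Time how long a lock was waited for and held, mirroring what
    # oslo_concurrency.lockutils logs above (standard library only).
    import threading
    import time

    lock = threading.Lock()

    def update_available_resource():
        t0 = time.monotonic()
        with lock:
            waited = time.monotonic() - t0
            t1 = time.monotonic()
            # ... critical section: refresh resource tracker state ...
            held = time.monotonic() - t1
        print(f'Lock "compute_resources" :: waited {waited:.3f}s held {held:.3f}s')

    update_available_resource()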
Dec 02 09:38:07 np0005541914.localdomain systemd[1]: tmp-crun.JJCrz5.mount: Deactivated successfully.
Dec 02 09:38:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 02 09:38:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a9f966c4c02ca72bf571aaf0656247c88b73268323ddd77e58521b9ea3db73d1-merged.mount: Deactivated successfully.
Dec 02 09:38:07 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:07 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:07 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:38:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:07.841 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:38:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:07.842 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:38:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:07.842 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:38:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:38:07.855 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:38:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:38:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23105 DF PROTO=TCP SPT=53472 DPT=9100 SEQ=2106322161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A83230000000001030307) 
Dec 02 09:38:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:38:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:38:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d3c0368aac3df7a24e1cc908793cb027783f4fd6a7c0af2cb89163a01527dd3a-merged.mount: Deactivated successfully.
Dec 02 09:38:11 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 02 09:38:12 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52400 DF PROTO=TCP SPT=45552 DPT=9100 SEQ=4105249519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A8D220000000001030307) 
Dec 02 09:38:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:13 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:13 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:13 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:13 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:13 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:13 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:14 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 02 09:38:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-400c7ba0962a9736ae4730e3c3204c67b2bad8d9266c2a49e5c729fb35c892ee-merged.mount: Deactivated successfully.
Dec 02 09:38:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a1185e7325783fe8cba63270bc6e59299386d7c73e4bc34c560a1fbc9e6d7e2c-merged.mount: Deactivated successfully.
Dec 02 09:38:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully.
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.432 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:38:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:38:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 02 09:38:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 02 09:38:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 02 09:38:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42430 DF PROTO=TCP SPT=50318 DPT=9102 SEQ=3794331324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54A9A110000000001030307) 
Dec 02 09:38:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:38:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-47f9fab5806f96664fad9b3e3421bfde63bb6a7412470abd2bfea5e9a57acc82-merged.mount: Deactivated successfully.
Dec 02 09:38:17 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 02 09:38:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:38:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 02 09:38:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 02 09:38:18 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34747 DF PROTO=TCP SPT=58138 DPT=9882 SEQ=1213125736 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AA5220000000001030307) 
Dec 02 09:38:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 02 09:38:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 02 09:38:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:38:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:38:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:38:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:38:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 02 09:38:21 np0005541914.localdomain systemd[1]: tmp-crun.hNaSSy.mount: Deactivated successfully.
Dec 02 09:38:21 np0005541914.localdomain podman[242057]: 2025-12-02 09:38:21.072933145 +0000 UTC m=+0.072985601 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec 02 09:38:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:38:21 np0005541914.localdomain podman[242056]: 2025-12-02 09:38:21.141689334 +0000 UTC m=+0.145641091 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 02 09:38:21 np0005541914.localdomain podman[242057]: 2025-12-02 09:38:21.143018046 +0000 UTC m=+0.143070522 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:38:21 np0005541914.localdomain podman[242056]: 2025-12-02 09:38:21.225843447 +0000 UTC m=+0.229795194 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute)
Dec 02 09:38:21 np0005541914.localdomain podman[242056]: unhealthy
Dec 02 09:38:21 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:38:21 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:38:21 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
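The ceilometer_agent_compute healthcheck above exited non-zero, so the transient unit fails and podman prints "unhealthy". A minimal sketch of re-running that check by hand and acting on the exit code (the container name is taken from the health_status line above):

    # Re-run the container healthcheck that just failed above.
    # "podman healthcheck run" exits 0 when the check passes, non-zero otherwise.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True,
    )
    status = "healthy" if result.returncode == 0 else "unhealthy"
    print(f"ceilometer_agent_compute: {status} (rc={result.returncode})")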
Dec 02 09:38:21 np0005541914.localdomain podman[242090]: 2025-12-02 09:38:21.404300055 +0000 UTC m=+0.275901479 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:38:21 np0005541914.localdomain podman[242090]: 2025-12-02 09:38:21.437823764 +0000 UTC m=+0.309425228 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:38:21 np0005541914.localdomain podman[242090]: unhealthy
Dec 02 09:38:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7547 DF PROTO=TCP SPT=40806 DPT=9101 SEQ=3362580343 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AB1220000000001030307) 
Dec 02 09:38:22 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 02 09:38:22 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:22 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:22 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:22 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:22 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:38:22 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 02 09:38:22 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:22 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:22 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:22 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 02 09:38:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a9f966c4c02ca72bf571aaf0656247c88b73268323ddd77e58521b9ea3db73d1-merged.mount: Deactivated successfully.
Dec 02 09:38:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 02 09:38:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:38:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a1e958529aaf3ea18edfde977fa21cc545be3514f2ed0637a72be1cc0091549c-merged.mount: Deactivated successfully.
Dec 02 09:38:25 np0005541914.localdomain podman[242120]: 2025-12-02 09:38:25.594682606 +0000 UTC m=+0.099389332 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 09:38:25 np0005541914.localdomain podman[242120]: 2025-12-02 09:38:25.623885592 +0000 UTC m=+0.128592288 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 09:38:26 np0005541914.localdomain systemd[1]: tmp-crun.UYcgep.mount: Deactivated successfully.
Dec 02 09:38:26 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:26 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:38:26 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:38:26 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:38:27 np0005541914.localdomain sudo[242139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:38:27 np0005541914.localdomain sudo[242139]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:38:27 np0005541914.localdomain sudo[242139]: pam_unix(sudo:session): session closed for user root
Dec 02 09:38:27 np0005541914.localdomain sudo[242157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:38:27 np0005541914.localdomain sudo[242157]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:38:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:38:27 np0005541914.localdomain systemd[1]: tmp-crun.cdbb8c.mount: Deactivated successfully.
Dec 02 09:38:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12055 DF PROTO=TCP SPT=55838 DPT=9101 SEQ=938634073 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AC5A20000000001030307) 
Dec 02 09:38:27 np0005541914.localdomain podman[242175]: 2025-12-02 09:38:27.163817635 +0000 UTC m=+0.061953262 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, version=9.6, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Dec 02 09:38:27 np0005541914.localdomain podman[242175]: 2025-12-02 09:38:27.200756619 +0000 UTC m=+0.098892206 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, release=1755695350, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc.)
Dec 02 09:38:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 02 09:38:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:28 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:38:28 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:28 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:29 np0005541914.localdomain sudo[242157]: pam_unix(sudo:session): session closed for user root
Dec 02 09:38:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:38:29 np0005541914.localdomain podman[239757]: time="2025-12-02T09:38:29Z" level=error msg="Getting root fs size for \"64316efbac2c8f0c0f408a553249de7f4ed5edff37903335d3a7fdd0eb442c60\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": unmounting layer efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf: replacing mount point \"/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged\": device or resource busy"
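The podman error above ("replacing mount point ... device or resource busy") and the repeated overlayfs lowerdir warnings both point at overlay layers that are still mounted while another consumer tries to reuse or unmount them. A minimal sketch that lists which container-storage overlay mounts are currently active, assuming the default /var/lib/containers/storage root:

    # List active container-storage overlay mounts, to see which "merged"
    # layers (like the busy one in the podman error above) are still held.
    STORAGE_ROOT = "/var/lib/containers/storage/overlay"

    with open("/proc/self/mounts") as mounts:
        for entry in mounts:
            device, mountpoint, fstype = entry.split()[:3]
            if fstype == "overlay" and mountpoint.startswith(STORAGE_ROOT):
                print(mountpoint)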
Dec 02 09:38:29 np0005541914.localdomain podman[242226]: 2025-12-02 09:38:29.775173533 +0000 UTC m=+0.088214118 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:38:29 np0005541914.localdomain podman[242226]: 2025-12-02 09:38:29.816549873 +0000 UTC m=+0.129590408 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:38:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 02 09:38:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf-merged.mount: Deactivated successfully.
Dec 02 09:38:30 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:30 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:38:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:38:31 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:31 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:31 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:31 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:31 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:31 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42434 DF PROTO=TCP SPT=50318 DPT=9102 SEQ=3794331324 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AD7230000000001030307) 
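The DROPPING: lines here and below are kernel firewall LOG records on br-ex: TCP SYNs from 192.168.122.10 toward the exporter ports (9100, 9101, 9102, 9105, 9882) are being dropped and retried. A minimal sketch, assuming the journal text is piped in on stdin (for example from journalctl -k or a saved log file), that tallies these drops per source address and destination port using only the SRC/DST/PROTO/SPT/DPT fields present in the records themselves:

    import re
    import sys
    from collections import Counter

    # Kernel LOG records carry "SRC=... DST=... PROTO=... SPT=... DPT=..." fields;
    # the leading spaces in the pattern keep MACSRC=/MACPROTO= from matching.
    DROP_RE = re.compile(
        r"DROPPING: .*? SRC=(?P<src>\S+) DST=(?P<dst>\S+)"
        r".*? PROTO=(?P<proto>\S+) SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)"
    )

    def tally_drops(lines):
        """Count dropped packets per (source address, destination port)."""
        counts = Counter()
        for line in lines:
            m = DROP_RE.search(line)
            if m:
                counts[(m.group("src"), m.group("dpt"))] += 1
        return counts

    if __name__ == "__main__":
        for (src, dpt), n in tally_drops(sys.stdin).most_common():
            print(f"{src} -> port {dpt}: {n} dropped")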
Dec 02 09:38:32 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:32 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:32 np0005541914.localdomain sudo[242249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:38:32 np0005541914.localdomain sudo[242249]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:38:32 np0005541914.localdomain sudo[242249]: pam_unix(sudo:session): session closed for user root
Dec 02 09:38:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:38:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-46d22fb86a8cbaa2935fad3e910e4610328c0a9c2837bb75cb2a0cd28ff52849-merged.mount: Deactivated successfully.
Dec 02 09:38:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47766 DF PROTO=TCP SPT=45816 DPT=9882 SEQ=1537288816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54ADEE10000000001030307) 
Dec 02 09:38:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:38:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-47f9fab5806f96664fad9b3e3421bfde63bb6a7412470abd2bfea5e9a57acc82-merged.mount: Deactivated successfully.
Dec 02 09:38:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47767 DF PROTO=TCP SPT=45816 DPT=9882 SEQ=1537288816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AE2E20000000001030307) 
Dec 02 09:38:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:38:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:38:36 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 02 09:38:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47768 DF PROTO=TCP SPT=45816 DPT=9882 SEQ=1537288816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AEAE20000000001030307) 
Dec 02 09:38:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:38:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 02 09:38:37 np0005541914.localdomain podman[242267]: 2025-12-02 09:38:37.947596551 +0000 UTC m=+0.088564219 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Dec 02 09:38:37 np0005541914.localdomain podman[242267]: 2025-12-02 09:38:37.958068162 +0000 UTC m=+0.099035790 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 09:38:38 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:38:38 np0005541914.localdomain systemd[1]: tmp-crun.NVESwl.mount: Deactivated successfully.
Dec 02 09:38:38 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42583 DF PROTO=TCP SPT=47492 DPT=9100 SEQ=3262698782 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54AF9230000000001030307) 
Dec 02 09:38:40 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:40 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 02 09:38:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a1e958529aaf3ea18edfde977fa21cc545be3514f2ed0637a72be1cc0091549c-merged.mount: Deactivated successfully.
Dec 02 09:38:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:38:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64106 DF PROTO=TCP SPT=54532 DPT=9105 SEQ=2084957773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B03E20000000001030307) 
Dec 02 09:38:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-9bcbe901bb45e8070f2f315648c2b8d8a4260ab9ddef9da25ac029ee28a25fc8-merged.mount: Deactivated successfully.
Dec 02 09:38:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a1e958529aaf3ea18edfde977fa21cc545be3514f2ed0637a72be1cc0091549c-merged.mount: Deactivated successfully.
Dec 02 09:38:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 02 09:38:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 02 09:38:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 02 09:38:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 02 09:38:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-0fd78fb44465760df7c4be9cb01e48acc01a9b6623f14c40fffd8cb0fbb72ecf-merged.mount: Deactivated successfully.
Dec 02 09:38:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 02 09:38:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d4bf0a50fd432b1e17b5b60f382aa20fe21251bda35e0089667eec28efb9c70f-merged.mount: Deactivated successfully.
Dec 02 09:38:45 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53193 DF PROTO=TCP SPT=55894 DPT=9105 SEQ=311588103 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B0F220000000001030307) 
Dec 02 09:38:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:38:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:38:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-46d22fb86a8cbaa2935fad3e910e4610328c0a9c2837bb75cb2a0cd28ff52849-merged.mount: Deactivated successfully.
Dec 02 09:38:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47770 DF PROTO=TCP SPT=45816 DPT=9882 SEQ=1537288816 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B1B220000000001030307) 
Dec 02 09:38:49 np0005541914.localdomain sshd[242286]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:38:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:50 np0005541914.localdomain sshd[242286]: Received disconnect from 34.78.29.97 port 49776:11: Bye Bye [preauth]
Dec 02 09:38:50 np0005541914.localdomain sshd[242286]: Disconnected from authenticating user root 34.78.29.97 port 49776 [preauth]
Dec 02 09:38:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:38:50 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:50 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:51 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:38:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:38:51 np0005541914.localdomain podman[242288]: 2025-12-02 09:38:51.473444986 +0000 UTC m=+0.080010577 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 02 09:38:51 np0005541914.localdomain podman[242289]: 2025-12-02 09:38:51.540632997 +0000 UTC m=+0.143384351 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 02 09:38:51 np0005541914.localdomain podman[242288]: 2025-12-02 09:38:51.563545551 +0000 UTC m=+0.170111242 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 02 09:38:51 np0005541914.localdomain podman[242288]: unhealthy
Dec 02 09:38:51 np0005541914.localdomain podman[242289]: 2025-12-02 09:38:51.600042251 +0000 UTC m=+0.202793575 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec 02 09:38:52 np0005541914.localdomain systemd[1]: tmp-crun.3JBfZv.mount: Deactivated successfully.
Dec 02 09:38:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:38:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:38:52 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:38:52 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 02 09:38:52 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:38:52 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:52 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:52 np0005541914.localdomain podman[242330]: 2025-12-02 09:38:52.85051816 +0000 UTC m=+0.383278984 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:38:52 np0005541914.localdomain podman[242330]: 2025-12-02 09:38:52.883018098 +0000 UTC m=+0.415778852 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:38:52 np0005541914.localdomain podman[242330]: unhealthy
Dec 02 09:38:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43163 DF PROTO=TCP SPT=36102 DPT=9102 SEQ=1319628034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B2B220000000001030307) 
Dec 02 09:38:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:53 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:38:53 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
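Two of the healthchecks in this window come back unhealthy: ceilometer_agent_compute (podman[242288], whose transient unit a419d0a6... then fails with status=1) and podman_exporter (podman[242330], unit 8b4b3be0... likewise). A minimal sketch, assuming the journal text is available on stdin, that pulls the unhealthy health_status events out of lines like these via their health_status= and container_name= labels:

    import re
    import sys

    # podman health_status events carry both the result and the container_name label.
    EVENT_RE = re.compile(
        r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}) .*container health_status "
        r".*health_status=(?P<state>\w+).*container_name=(?P<name>[\w-]+)"
    )

    def unhealthy_events(lines):
        """Yield (timestamp, container name) for every unhealthy health_status event."""
        for line in lines:
            m = EVENT_RE.search(line)
            if m and m.group("state") == "unhealthy":
                yield m.group("ts"), m.group("name")

    if __name__ == "__main__":
        for ts, name in unhealthy_events(sys.stdin):
            print(f"{ts}  {name} reported unhealthy")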
Dec 02 09:38:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:38:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-297413164ba634cc6890ee6589cadf094aa7e1bc60468b5e2b171a73d85ccd70-merged.mount: Deactivated successfully.
Dec 02 09:38:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-297413164ba634cc6890ee6589cadf094aa7e1bc60468b5e2b171a73d85ccd70-merged.mount: Deactivated successfully.
Dec 02 09:38:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:38:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:38:57 np0005541914.localdomain podman[242355]: 2025-12-02 09:38:57.084952793 +0000 UTC m=+0.089235150 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 02 09:38:57 np0005541914.localdomain podman[242355]: 2025-12-02 09:38:57.094878497 +0000 UTC m=+0.099160934 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 02 09:38:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16474 DF PROTO=TCP SPT=58868 DPT=9101 SEQ=1055772396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B3AE20000000001030307) 
Dec 02 09:38:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 02 09:38:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 02 09:38:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-9bcbe901bb45e8070f2f315648c2b8d8a4260ab9ddef9da25ac029ee28a25fc8-merged.mount: Deactivated successfully.
Dec 02 09:38:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 02 09:38:57 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:38:58 np0005541914.localdomain podman[239757]: time="2025-12-02T09:38:58Z" level=error msg="Unmounting /var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged: invalid argument"
Dec 02 09:38:58 np0005541914.localdomain podman[239757]: time="2025-12-02T09:38:58Z" level=error msg="Getting root fs size for \"7f052286f4e335d8d24dc834e47a500ce9df94f9e0c9499a5327ee5cef14ee4e\": getting diffsize of layer \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\" and its parent \"c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6\": creating overlay mount to /var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/merged, mount_data=\"lowerdir=/var/lib/containers/storage/overlay/l/LK2RBR3EFG2ZZO4YQKJAOD6X6T,upperdir=/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/diff,workdir=/var/lib/containers/storage/overlay/efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf/work,nodev,metacopy=on\": no such file or directory"
Dec 02 09:38:58 np0005541914.localdomain kernel: overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:58 np0005541914.localdomain kernel: overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
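podman[239757] has been trying to size containers built on layer efd486ab...: first the layer's merged mount cannot be replaced because it is busy, then the unmount returns "invalid argument" and the re-mount attempt fails with "no such file or directory", while the kernel warns that the layer's upperdir and workdir are already in use by another overlay mount. A minimal sketch, assuming /proc/self/mountinfo is readable, that reports overlay upperdirs referenced by more than one active mount, which is one of the reuse patterns these warnings describe:

    import re
    from collections import defaultdict

    def overlay_upperdirs(mountinfo_path="/proc/self/mountinfo"):
        """Map each overlay upperdir to the mount points currently using it."""
        users = defaultdict(list)
        with open(mountinfo_path) as f:
            for line in f:
                # Fields after " - " are: fstype, source, super options.
                pre, _, post = line.partition(" - ")
                if not post.startswith("overlay "):
                    continue
                mount_point = pre.split()[4]
                m = re.search(r"upperdir=([^,]+)", post)
                if m:
                    users[m.group(1)].append(mount_point)
        return users

    if __name__ == "__main__":
        for upper, mounts in overlay_upperdirs().items():
            if len(mounts) > 1:
                print(f"upperdir {upper} used by {len(mounts)} mounts: {mounts}")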
Dec 02 09:38:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 02 09:38:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 02 09:38:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:38:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:38:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:38:59 np0005541914.localdomain podman[242372]: 2025-12-02 09:38:59.053174371 +0000 UTC m=+0.061844649 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, release=1755695350)
Dec 02 09:38:59 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:38:59 np0005541914.localdomain podman[242372]: 2025-12-02 09:38:59.063483838 +0000 UTC m=+0.072154116 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Dec 02 09:38:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 02 09:38:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d40ebd622fb49c1d984ae69be39f1f1d5d9bbd0185c9e75888b797dd6f2afb7e-merged.mount: Deactivated successfully.
Dec 02 09:38:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 02 09:38:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 02 09:39:00 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 02 09:39:00 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:39:00 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d8311fd89fa9ff9a4d8824219b7d14d00721d421cc1a51c3601cb914a56f4bfc-merged.mount: Deactivated successfully.
Dec 02 09:39:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:39:00 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d4bf0a50fd432b1e17b5b60f382aa20fe21251bda35e0089667eec28efb9c70f-merged.mount: Deactivated successfully.
Dec 02 09:39:00 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d4bf0a50fd432b1e17b5b60f382aa20fe21251bda35e0089667eec28efb9c70f-merged.mount: Deactivated successfully.
Dec 02 09:39:00 np0005541914.localdomain podman[242393]: 2025-12-02 09:39:00.843889953 +0000 UTC m=+0.066852613 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:39:00 np0005541914.localdomain podman[242393]: 2025-12-02 09:39:00.874544773 +0000 UTC m=+0.097507473 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:39:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43164 DF PROTO=TCP SPT=36102 DPT=9102 SEQ=1319628034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B4B220000000001030307) 
Dec 02 09:39:01 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 02 09:39:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:39:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:39:03.147 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:39:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:39:03.147 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:39:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:39:03.147 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
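These three ovn_metadata_agent DEBUG records are the standard acquire/acquired/released trace that oslo.concurrency emits around a named lock; "waited 0.000s" and "held 0.000s" are the time spent waiting for and holding it. A minimal sketch of the pattern that produces such records, assuming the oslo.concurrency library is installed (the class and method names are placeholders mirroring the log, not the actual neutron source):

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            # Runs with the in-process lock "_check_child_processes" held; the
            # decorator logs the Acquiring/acquired/released DEBUG lines seen above.
            pass

    if __name__ == "__main__":
        ProcessMonitor()._check_child_processes()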
Dec 02 09:39:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:39:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:39:03 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:39:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51481 DF PROTO=TCP SPT=40582 DPT=9882 SEQ=3845157538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B54110000000001030307) 
Dec 02 09:39:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:03.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:39:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 02 09:39:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 02 09:39:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:04.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:39:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:04.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:39:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:04.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:39:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:04.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:39:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51482 DF PROTO=TCP SPT=40582 DPT=9882 SEQ=3845157538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B58220000000001030307) 
Dec 02 09:39:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:39:05 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:05.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:39:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:05.637 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:39:05 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:05.658 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:39:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 02 09:39:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-1f51912cd7ca4d93a076413ed4727a62a427f09f722d7bf72e350182571c8db0-merged.mount: Deactivated successfully.
Dec 02 09:39:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-1f51912cd7ca4d93a076413ed4727a62a427f09f722d7bf72e350182571c8db0-merged.mount: Deactivated successfully.
Dec 02 09:39:06 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:06 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:06.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:39:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:06.660 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:39:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:06.661 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:39:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:06.661 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:39:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:06.661 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:39:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:06.661 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
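The resource audit shells out to `ceph df --format=json` through oslo.concurrency's processutils, as the "Running cmd (subprocess)" line above and its "returned: 0 in 0.464s" completion a few entries later show. A hedged sketch of the same call, assuming a reachable Ceph cluster and the client id and conf path from the log; error handling and the pool-level lookup nova's RBD driver actually performs are omitted:

    # Sketch of the ceph df call issued above via oslo.concurrency processutils.
    # Assumes a reachable Ceph cluster and the client id/conf shown in the log.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )
    stats = json.loads(out)
    print(stats["stats"]["total_bytes"], stats["stats"]["total_avail_bytes"])
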
Dec 02 09:39:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51483 DF PROTO=TCP SPT=40582 DPT=9882 SEQ=3845157538 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B60220000000001030307) 
Dec 02 09:39:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:07.125 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:39:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:07.279 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:39:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:07.281 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13164MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:39:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:07.281 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:39:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:07.281 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:39:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:07.340 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:39:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:07.341 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:39:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:07.355 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:39:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:07.820 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:39:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:07.826 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:39:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:07.840 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:39:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:07.842 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:39:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:07.842 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
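The inventory reported for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 above carries totals, reserved amounts, and allocation ratios; placement treats roughly (total - reserved) * allocation_ratio as the schedulable capacity per resource class. A quick arithmetic check against the logged values:

    # Schedulable capacity implied by the inventory logged above, using the
    # (total - reserved) * allocation_ratio rule placement applies per class.
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # -> VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 41.0
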
Dec 02 09:39:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 02 09:39:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:08.843 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:39:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:08.843 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:39:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:08.844 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:39:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:39:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:08.860 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:39:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:39:08.860 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:39:08 np0005541914.localdomain podman[242460]: 2025-12-02 09:39:08.920970323 +0000 UTC m=+0.060766734 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 02 09:39:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 02 09:39:08 np0005541914.localdomain podman[242460]: 2025-12-02 09:39:08.935859027 +0000 UTC m=+0.075655438 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
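Each "Started /usr/bin/podman healthcheck run <id>" line above is a transient systemd unit invoking the container's configured healthcheck, and the podman events that follow record the health_status and the exec session dying. A sketch of running the same check by container name from Python; the name is taken from the log, and a zero exit status maps to healthy, non-zero to unhealthy:

    # Sketch of what the transient "podman healthcheck run <id>" units above do,
    # here invoked by container name. A non-zero exit status is what later
    # surfaces as "status=1/FAILURE" on the transient unit.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "multipathd"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy",
          result.stdout.strip())
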
Dec 02 09:39:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 02 09:39:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32856 DF PROTO=TCP SPT=60250 DPT=9100 SEQ=2569318876 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B6D220000000001030307) 
Dec 02 09:39:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:39:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-297413164ba634cc6890ee6589cadf094aa7e1bc60468b5e2b171a73d85ccd70-merged.mount: Deactivated successfully.
Dec 02 09:39:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-297413164ba634cc6890ee6589cadf094aa7e1bc60468b5e2b171a73d85ccd70-merged.mount: Deactivated successfully.
Dec 02 09:39:10 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:39:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 02 09:39:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 02 09:39:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 02 09:39:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 02 09:39:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 02 09:39:12 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:12 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65227 DF PROTO=TCP SPT=53506 DPT=9105 SEQ=988059590 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B79220000000001030307) 
Dec 02 09:39:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:13 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:13 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 02 09:39:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d40ebd622fb49c1d984ae69be39f1f1d5d9bbd0185c9e75888b797dd6f2afb7e-merged.mount: Deactivated successfully.
Dec 02 09:39:14 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d40ebd622fb49c1d984ae69be39f1f1d5d9bbd0185c9e75888b797dd6f2afb7e-merged.mount: Deactivated successfully.
Dec 02 09:39:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 02 09:39:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 02 09:39:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 02 09:39:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14153 DF PROTO=TCP SPT=41232 DPT=9102 SEQ=2255338904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B846E0000000001030307) 
Dec 02 09:39:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4470c8636ef8d59ecd85925ad81ff603b150c7b82e82b0e5d5ff653ec51e0d36-merged.mount: Deactivated successfully.
Dec 02 09:39:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-48fc1270cbb31781d8896eae0014e3b5a5e48738fd6cff2aa76953f22a08ee71-merged.mount: Deactivated successfully.
Dec 02 09:39:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 02 09:39:17 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-73f9890a30d4cca7075aebf2d1c79838b39a1c605ffe5291a19916efb9ec9b29-merged.mount: Deactivated successfully.
Dec 02 09:39:17 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-73f9890a30d4cca7075aebf2d1c79838b39a1c605ffe5291a19916efb9ec9b29-merged.mount: Deactivated successfully.
Dec 02 09:39:17 np0005541914.localdomain sshd[229944]: Received disconnect from 192.168.122.30 port 47538:11: disconnected by user
Dec 02 09:39:17 np0005541914.localdomain sshd[229944]: Disconnected from user zuul 192.168.122.30 port 47538
Dec 02 09:39:17 np0005541914.localdomain sshd[229939]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:39:17 np0005541914.localdomain systemd[1]: session-55.scope: Deactivated successfully.
Dec 02 09:39:17 np0005541914.localdomain systemd[1]: session-55.scope: Consumed 59.546s CPU time.
Dec 02 09:39:17 np0005541914.localdomain systemd-logind[760]: Session 55 logged out. Waiting for processes to exit.
Dec 02 09:39:17 np0005541914.localdomain systemd-logind[760]: Removed session 55.
Dec 02 09:39:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 02 09:39:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14155 DF PROTO=TCP SPT=41232 DPT=9102 SEQ=2255338904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B90620000000001030307) 
Dec 02 09:39:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a802e2c2182c5081dae453e00ae55ca652c01124f4ff691b910ec76e11c97f5a-merged.mount: Deactivated successfully.
Dec 02 09:39:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-1f51912cd7ca4d93a076413ed4727a62a427f09f722d7bf72e350182571c8db0-merged.mount: Deactivated successfully.
Dec 02 09:39:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 02 09:39:19 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully.
Dec 02 09:39:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3-merged.mount: Deactivated successfully.
Dec 02 09:39:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 02 09:39:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 02 09:39:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 02 09:39:21 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:21 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16476 DF PROTO=TCP SPT=58868 DPT=9101 SEQ=1055772396 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54B9B220000000001030307) 
Dec 02 09:39:22 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:22 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:39:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:39:22 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:23 np0005541914.localdomain podman[242479]: 2025-12-02 09:39:23.050655763 +0000 UTC m=+0.106633003 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 09:39:23 np0005541914.localdomain podman[242479]: 2025-12-02 09:39:23.079964546 +0000 UTC m=+0.135941806 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:39:23 np0005541914.localdomain podman[242479]: unhealthy
Dec 02 09:39:23 np0005541914.localdomain podman[242480]: 2025-12-02 09:39:23.09550136 +0000 UTC m=+0.148241212 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 02 09:39:23 np0005541914.localdomain podman[242480]: 2025-12-02 09:39:23.200126631 +0000 UTC m=+0.252866493 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 02 09:39:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:23 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:39:23 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 02 09:39:23 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:39:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:39:24 np0005541914.localdomain podman[242523]: 2025-12-02 09:39:24.07722446 +0000 UTC m=+0.081566139 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 09:39:24 np0005541914.localdomain podman[242523]: 2025-12-02 09:39:24.088775622 +0000 UTC m=+0.093117301 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:39:24 np0005541914.localdomain podman[242523]: unhealthy
Dec 02 09:39:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 02 09:39:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 02 09:39:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a962ed19f38fa02a2bde769e5b1e4ad9f81e2456610cd4047cfb92b422afb6bb-merged.mount: Deactivated successfully.
Dec 02 09:39:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a962ed19f38fa02a2bde769e5b1e4ad9f81e2456610cd4047cfb92b422afb6bb-merged.mount: Deactivated successfully.
Dec 02 09:39:25 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:39:25 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
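When a check comes back unhealthy (ceilometer_agent_compute at 09:39:23, podman_exporter at 09:39:24 above), the transient unit exits 1 and systemd records the "Main process exited ... status=1/FAILURE" / "Failed with result 'exit-code'" pair. A purely hypothetical triage helper that extracts the failed unit ids from journal text shaped like these lines:

    # Hypothetical triage helper: list the units systemd marked as failed in
    # journal text shaped like the lines above.
    import re

    FAILED = re.compile(
        r"systemd\[1\]: (?P<unit>\S+)\.service: Failed with result 'exit-code'")

    def failed_units(journal_text: str) -> list:
        return [m.group("unit") for m in FAILED.finditer(journal_text)]

    sample = ("Dec 02 09:39:25 host systemd[1]: "
              "8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0"
              ".service: Failed with result 'exit-code'.")
    print(failed_units(sample))
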
Dec 02 09:39:26 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:26 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52420 DF PROTO=TCP SPT=46650 DPT=9101 SEQ=1238365235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BAFE20000000001030307) 
Dec 02 09:39:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:39:27 np0005541914.localdomain podman[242546]: 2025-12-02 09:39:27.581504031 +0000 UTC m=+0.087109617 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:39:27 np0005541914.localdomain podman[242546]: 2025-12-02 09:39:27.619601653 +0000 UTC m=+0.125207309 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 02 09:39:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 02 09:39:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:39:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 02 09:39:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 02 09:39:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-853ccb0b7aef1ea23933a0a39c3ed46ab9d9a29acf9ba782f87031dcfb79c247-merged.mount: Deactivated successfully.
Dec 02 09:39:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-73f9890a30d4cca7075aebf2d1c79838b39a1c605ffe5291a19916efb9ec9b29-merged.mount: Deactivated successfully.
Dec 02 09:39:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-73f9890a30d4cca7075aebf2d1c79838b39a1c605ffe5291a19916efb9ec9b29-merged.mount: Deactivated successfully.
Dec 02 09:39:29 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:39:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:39:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:39:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:39:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:39:30 np0005541914.localdomain podman[242564]: 2025-12-02 09:39:30.246547018 +0000 UTC m=+0.092232763 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41)
Dec 02 09:39:30 np0005541914.localdomain podman[242564]: 2025-12-02 09:39:30.260803533 +0000 UTC m=+0.106489308 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, name=ubi9-minimal)
Dec 02 09:39:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:39:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:39:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 02 09:39:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 02 09:39:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14157 DF PROTO=TCP SPT=41232 DPT=9102 SEQ=2255338904 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BC1230000000001030307) 
Dec 02 09:39:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 02 09:39:31 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:39:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 02 09:39:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 02 09:39:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:39:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56315 DF PROTO=TCP SPT=42962 DPT=9882 SEQ=1853791139 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BC9410000000001030307) 
Dec 02 09:39:33 np0005541914.localdomain podman[242584]: 2025-12-02 09:39:33.638857685 +0000 UTC m=+0.071090049 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:39:33 np0005541914.localdomain podman[242584]: 2025-12-02 09:39:33.649043906 +0000 UTC m=+0.081276270 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:39:33 np0005541914.localdomain sudo[242599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:39:33 np0005541914.localdomain sudo[242599]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:39:33 np0005541914.localdomain sudo[242599]: pam_unix(sudo:session): session closed for user root
Dec 02 09:39:33 np0005541914.localdomain sudo[242625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:39:33 np0005541914.localdomain sudo[242625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:39:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ed6d3c1e7f0efbf3e5310f077b6fbf5a3cd333e0b5df7204752cd3df15a8b7-merged.mount: Deactivated successfully.
Dec 02 09:39:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 02 09:39:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:34 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:39:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56316 DF PROTO=TCP SPT=42962 DPT=9882 SEQ=1853791139 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BCD630000000001030307) 
Dec 02 09:39:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:35 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:35 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:35 np0005541914.localdomain sudo[242625]: pam_unix(sudo:session): session closed for user root
Dec 02 09:39:35 np0005541914.localdomain podman[239757]: time="2025-12-02T09:39:35Z" level=error msg="Getting root fs size for \"a548c2ff58f0fac68171c484bc56f01793a35da78bc1e9b62e76858e6f9b179a\": unmounting layer c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6: replacing mount point \"/var/lib/containers/storage/overlay/c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6/merged\": device or resource busy"
Dec 02 09:39:35 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:35 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:36 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 02 09:39:36 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-6ac3d5ef6cd74f750bad6e1bed4e64701dec5212d5cf52ac16ce138246b77afa-merged.mount: Deactivated successfully.
Dec 02 09:39:36 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:36 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:36 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:36 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56317 DF PROTO=TCP SPT=42962 DPT=9882 SEQ=1853791139 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BD5620000000001030307) 
Dec 02 09:39:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-6ac3d5ef6cd74f750bad6e1bed4e64701dec5212d5cf52ac16ce138246b77afa-merged.mount: Deactivated successfully.
Dec 02 09:39:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:37 np0005541914.localdomain sudo[242674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:39:37 np0005541914.localdomain sudo[242674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:39:37 np0005541914.localdomain sudo[242674]: pam_unix(sudo:session): session closed for user root
Dec 02 09:39:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:39:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 02 09:39:38 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 02 09:39:38 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5072aa4283df2440f817438926274b2ecc1fbb999174180268a40a1b62865efd-merged.mount: Deactivated successfully.
Dec 02 09:39:38 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a962ed19f38fa02a2bde769e5b1e4ad9f81e2456610cd4047cfb92b422afb6bb-merged.mount: Deactivated successfully.
Dec 02 09:39:38 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a962ed19f38fa02a2bde769e5b1e4ad9f81e2456610cd4047cfb92b422afb6bb-merged.mount: Deactivated successfully.
Dec 02 09:39:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:39:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:39:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 02 09:39:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64218 DF PROTO=TCP SPT=34694 DPT=9100 SEQ=10372801 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BE3220000000001030307) 
Dec 02 09:39:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:39:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:40 np0005541914.localdomain podman[242692]: 2025-12-02 09:39:40.513902267 +0000 UTC m=+0.087019156 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 02 09:39:40 np0005541914.localdomain podman[242692]: 2025-12-02 09:39:40.526965475 +0000 UTC m=+0.100082374 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 02 09:39:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:39:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 02 09:39:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4b9c41fe9442d39f0f731cbd431e2ad53f3df5a873cab9bbccc810ab289d4d69-merged.mount: Deactivated successfully.
Dec 02 09:39:41 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:39:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:39:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:39:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:39:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:39:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55788 DF PROTO=TCP SPT=37382 DPT=9105 SEQ=3322693370 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BEE630000000001030307) 
Dec 02 09:39:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 02 09:39:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d06b9618ea7afeaba672d022a7f469c1b4fb954818b2395f63391bb50912ecbb-merged.mount: Deactivated successfully.
Dec 02 09:39:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ea63802099ebb85258cb7d2a1bbd57ddeec51406b466437719c2fc7b376d5b79-merged.mount: Deactivated successfully.
Dec 02 09:39:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 02 09:39:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 02 09:39:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:45 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:45 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:45 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:45 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64111 DF PROTO=TCP SPT=54532 DPT=9105 SEQ=2084957773 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54BF9220000000001030307) 
Dec 02 09:39:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 02 09:39:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cae296f764831135e29cafc4ebb3dae4bbdc9f9a6aba7fb9c51fecf58f2b7f2e-merged.mount: Deactivated successfully.
Dec 02 09:39:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-6ac3d5ef6cd74f750bad6e1bed4e64701dec5212d5cf52ac16ce138246b77afa-merged.mount: Deactivated successfully.
Dec 02 09:39:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-6ac3d5ef6cd74f750bad6e1bed4e64701dec5212d5cf52ac16ce138246b77afa-merged.mount: Deactivated successfully.
Dec 02 09:39:46 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:39:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 02 09:39:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:48 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 02 09:39:48 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:48 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:48 np0005541914.localdomain podman[239757]: time="2025-12-02T09:39:48Z" level=error msg="Getting root fs size for \"acca850a007a0ec242ce5dd760b330bd12c19e84116fb71d0ff4e5759135e9e7\": getting diffsize of layer \"3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae\" and its parent \"efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf\": unmounting layer 3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae: replacing mount point \"/var/lib/containers/storage/overlay/3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae/merged\": device or resource busy"
Dec 02 09:39:48 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:48 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56319 DF PROTO=TCP SPT=42962 DPT=9882 SEQ=1853791139 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C05220000000001030307) 
Dec 02 09:39:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:39:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 02 09:39:49 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:39:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3d2cbcd6205ebc71bef7b0378e46c50958788e3d833a076a9d36ebe402a8a467-merged.mount: Deactivated successfully.
Dec 02 09:39:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:39:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:39:51 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:51 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:51 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:52 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52422 DF PROTO=TCP SPT=46650 DPT=9101 SEQ=1238365235 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C11230000000001030307) 
Dec 02 09:39:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5d735ed10a550a807437a0617701eca41c00b16c522094f4bdfdfee4840a918b-merged.mount: Deactivated successfully.
Dec 02 09:39:52 np0005541914.localdomain sshd[242711]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:39:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:39:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:39:53 np0005541914.localdomain sshd[242711]: Invalid user kafka from 34.78.29.97 port 46484
Dec 02 09:39:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf8de856f68682579de884f5a9ccb4b00fffe375a72087325354c97a26c55ce7-merged.mount: Deactivated successfully.
Dec 02 09:39:53 np0005541914.localdomain sshd[242711]: Received disconnect from 34.78.29.97 port 46484:11: Bye Bye [preauth]
Dec 02 09:39:53 np0005541914.localdomain sshd[242711]: Disconnected from invalid user kafka 34.78.29.97 port 46484 [preauth]
Dec 02 09:39:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:39:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d06b9618ea7afeaba672d022a7f469c1b4fb954818b2395f63391bb50912ecbb-merged.mount: Deactivated successfully.
Dec 02 09:39:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:53 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:53 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:53 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:39:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:39:54 np0005541914.localdomain podman[242713]: 2025-12-02 09:39:54.153255191 +0000 UTC m=+0.156985288 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:39:54 np0005541914.localdomain podman[242713]: 2025-12-02 09:39:54.202562846 +0000 UTC m=+0.206292933 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 02 09:39:54 np0005541914.localdomain podman[242713]: unhealthy
Dec 02 09:39:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:39:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4cbd426914bbc0b3c94f281248297da1bdd998807cad604e4ab2f39851a1899c-merged.mount: Deactivated successfully.
Dec 02 09:39:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:39:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 02 09:39:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 02 09:39:55 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:39:55 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 02 09:39:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:39:56 np0005541914.localdomain podman[242740]: 2025-12-02 09:39:56.074611458 +0000 UTC m=+0.083170038 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:39:56 np0005541914.localdomain podman[242740]: 2025-12-02 09:39:56.120973152 +0000 UTC m=+0.129531742 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:39:56 np0005541914.localdomain podman[242740]: unhealthy
Dec 02 09:39:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 02 09:39:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:39:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 02 09:39:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 02 09:39:56 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:39:56 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 02 09:39:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37697 DF PROTO=TCP SPT=59168 DPT=9101 SEQ=347695817 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C25220000000001030307) 
Dec 02 09:39:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d6636e8195e20b46e9ff0be91c525681b79b061d34e7042a3302554bc91c2a8c-merged.mount: Deactivated successfully.
Dec 02 09:39:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:39:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:39:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:39:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55791 DF PROTO=TCP SPT=37382 DPT=9105 SEQ=3322693370 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C27230000000001030307) 
Dec 02 09:39:58 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:58 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:39:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:39:58 np0005541914.localdomain podman[242714]: 2025-12-02 09:39:58.175077036 +0000 UTC m=+4.173501311 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Dec 02 09:39:58 np0005541914.localdomain podman[242714]: 2025-12-02 09:39:58.250776865 +0000 UTC m=+4.249201110 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:39:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:39:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:40:00 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:40:00 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:40:00 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:40:00 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:40:00 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:00 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:00 np0005541914.localdomain podman[242778]: 2025-12-02 09:40:00.70095297 +0000 UTC m=+0.710174260 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 02 09:40:00 np0005541914.localdomain podman[242778]: 2025-12-02 09:40:00.73475544 +0000 UTC m=+0.743976660 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec 02 09:40:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14171 DF PROTO=TCP SPT=32858 DPT=9102 SEQ=3707033213 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C35220000000001030307) 
Dec 02 09:40:01 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:01 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:01 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:40:02 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:02 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:40:02 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:40:02 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:40:02 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:02 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:02 np0005541914.localdomain podman[242796]: 2025-12-02 09:40:02.604819512 +0000 UTC m=+0.615457540 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, container_name=openstack_network_exporter, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible)
Dec 02 09:40:02 np0005541914.localdomain podman[242796]: 2025-12-02 09:40:02.617827149 +0000 UTC m=+0.628465147 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_id=edpm, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 02 09:40:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:40:03.147 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:40:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:40:03.148 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:40:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:40:03.148 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:40:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36136 DF PROTO=TCP SPT=41450 DPT=9882 SEQ=1489999072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C3E710000000001030307) 
Dec 02 09:40:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:03.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:40:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:03.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:40:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:03.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 02 09:40:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:03 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:40:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:03.763 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 02 09:40:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:03.766 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:40:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:03.766 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 02 09:40:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:03.846 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:40:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:04.859 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:40:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:04.860 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:40:04 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:04.860 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:40:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:40:05 np0005541914.localdomain podman[242815]: 2025-12-02 09:40:05.102508496 +0000 UTC m=+0.106005454 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:40:05 np0005541914.localdomain podman[242815]: 2025-12-02 09:40:05.141952129 +0000 UTC m=+0.145449047 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:40:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d39fd500dccd0614704d889eaaf9068fe2575a3bb203d70cd1f6b19969ae7a25-merged.mount: Deactivated successfully.
Dec 02 09:40:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3d2cbcd6205ebc71bef7b0378e46c50958788e3d833a076a9d36ebe402a8a467-merged.mount: Deactivated successfully.
Dec 02 09:40:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3d2cbcd6205ebc71bef7b0378e46c50958788e3d833a076a9d36ebe402a8a467-merged.mount: Deactivated successfully.
Dec 02 09:40:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:06.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:40:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:06.639 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:40:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:06.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:40:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36138 DF PROTO=TCP SPT=41450 DPT=9882 SEQ=1489999072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C4A630000000001030307) 
Dec 02 09:40:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:40:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a-merged.mount: Deactivated successfully.
Dec 02 09:40:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a-merged.mount: Deactivated successfully.
Dec 02 09:40:07 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:40:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:07.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:40:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:07.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:40:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:07.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:40:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:07.655 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:40:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:07.655 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:40:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:40:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:40:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:08.639 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:40:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:08.663 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:40:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:08.663 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:40:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:08.663 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:40:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:08.663 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:40:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:08.663 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:40:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:09.070 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:40:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:09.225 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:40:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:09.226 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13066MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:40:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:09.226 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:40:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:09.226 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:40:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:09.327 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:40:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:09.327 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:40:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:09.392 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 02 09:40:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:09.460 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 02 09:40:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:09.460 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 02 09:40:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:09.477 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 02 09:40:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:09.501 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_RESCUE_BFV,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 02 09:40:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:09.526 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:40:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:10.000 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:40:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:10.006 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:40:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:10.033 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:40:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:10.037 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:40:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:40:10.037 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:40:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5d735ed10a550a807437a0617701eca41c00b16c522094f4bdfdfee4840a918b-merged.mount: Deactivated successfully.
Dec 02 09:40:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:40:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21026 DF PROTO=TCP SPT=43298 DPT=9100 SEQ=1065710807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C59220000000001030307) 
Dec 02 09:40:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:40:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:40:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:40:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:11 np0005541914.localdomain podman[242882]: 2025-12-02 09:40:11.515897111 +0000 UTC m=+0.079480788 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 09:40:11 np0005541914.localdomain podman[242882]: 2025-12-02 09:40:11.531023538 +0000 UTC m=+0.094607285 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:40:11 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:40:12 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:40:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a10f3c610bfd3a5166c8bb201abb4a07184bf8ddf69826ea8939f1a48ecba966-merged.mount: Deactivated successfully.
Dec 02 09:40:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4cbd426914bbc0b3c94f281248297da1bdd998807cad604e4ab2f39851a1899c-merged.mount: Deactivated successfully.
Dec 02 09:40:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39908 DF PROTO=TCP SPT=44272 DPT=9105 SEQ=2658120026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C63A30000000001030307) 
Dec 02 09:40:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 02 09:40:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:40:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-06baa34adcac19ffd1cac321f0c14e5e32037c7b357d2eb54e065b4d177d72fd-merged.mount: Deactivated successfully.
Dec 02 09:40:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:40:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:40:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:40:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23317 DF PROTO=TCP SPT=38768 DPT=9102 SEQ=905833791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C6ECF0000000001030307) 
Dec 02 09:40:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-0dae0ae2501f0b947a8e64948b264823feec8c7ddb8b7849cb102fbfe0c75da8-merged.mount: Deactivated successfully.
Dec 02 09:40:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6-merged.mount: Deactivated successfully.
Dec 02 09:40:17 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 02 09:40:17 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 02 09:40:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 02 09:40:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:40:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23319 DF PROTO=TCP SPT=38768 DPT=9102 SEQ=905833791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C7AE20000000001030307) 
Dec 02 09:40:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:20 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 02 09:40:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e33240bb039d460ca33f381563cd1cbbc8c9cff68602bf3e8b26baddcb70d04b-merged.mount: Deactivated successfully.
Dec 02 09:40:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:40:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 02 09:40:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37699 DF PROTO=TCP SPT=59168 DPT=9101 SEQ=347695817 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C85230000000001030307) 
Dec 02 09:40:21 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 02 09:40:21 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:22 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:22 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:22 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:22 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:22 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:22 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:22 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:22 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:40:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a-merged.mount: Deactivated successfully.
Dec 02 09:40:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 02 09:40:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e99c986d4857ab1fa44ce62584eec376fd6f28bcc79d8fb56e2c5847b897969a-merged.mount: Deactivated successfully.
Dec 02 09:40:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:40:25 np0005541914.localdomain podman[242898]: 2025-12-02 09:40:25.558695729 +0000 UTC m=+0.067605378 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:40:25 np0005541914.localdomain podman[242898]: 2025-12-02 09:40:25.562293225 +0000 UTC m=+0.071202914 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 09:40:25 np0005541914.localdomain podman[242898]: unhealthy
Dec 02 09:40:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-1cd4674896f37ed03c180aa0ab9f93ced388cfe5185ce6c19dc1fe143ce7985a-merged.mount: Deactivated successfully.
Dec 02 09:40:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:40:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 02 09:40:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65233 DF PROTO=TCP SPT=45234 DPT=9101 SEQ=2403759065 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54C9A620000000001030307) 
Dec 02 09:40:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 02 09:40:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 02 09:40:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:40:27 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:40:27 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 02 09:40:27 np0005541914.localdomain podman[242915]: 2025-12-02 09:40:27.874938278 +0000 UTC m=+0.874984650 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:40:27 np0005541914.localdomain podman[242915]: 2025-12-02 09:40:27.883863481 +0000 UTC m=+0.883909853 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:40:27 np0005541914.localdomain podman[242915]: unhealthy
Dec 02 09:40:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:40:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:40:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 02 09:40:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:40:29 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:40:29 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 02 09:40:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 02 09:40:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:40:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:40:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:40:30 np0005541914.localdomain podman[242937]: 2025-12-02 09:40:30.805653013 +0000 UTC m=+0.084122446 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 02 09:40:30 np0005541914.localdomain podman[242937]: 2025-12-02 09:40:30.86586764 +0000 UTC m=+0.144337103 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 02 09:40:31 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:40:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:40:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23321 DF PROTO=TCP SPT=38768 DPT=9102 SEQ=905833791 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CAB230000000001030307) 
Dec 02 09:40:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:40:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:40:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:40:32 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:32 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:40:33 np0005541914.localdomain podman[242961]: 2025-12-02 09:40:33.088574008 +0000 UTC m=+0.089408542 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 02 09:40:33 np0005541914.localdomain podman[242961]: 2025-12-02 09:40:33.127076165 +0000 UTC m=+0.127910679 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 02 09:40:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29337 DF PROTO=TCP SPT=45408 DPT=9882 SEQ=3445642487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CB3A10000000001030307) 
Dec 02 09:40:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 02 09:40:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:40:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:40:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6-merged.mount: Deactivated successfully.
Dec 02 09:40:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29338 DF PROTO=TCP SPT=45408 DPT=9882 SEQ=3445642487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CB7A20000000001030307) 
Dec 02 09:40:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-307fde9f9a17104e6d254f3661d03569d645ee844efb3016652158492a4ae8a6-merged.mount: Deactivated successfully.
Dec 02 09:40:34 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:40:34 np0005541914.localdomain podman[242979]: 2025-12-02 09:40:34.73509778 +0000 UTC m=+0.877917076 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, architecture=x86_64, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41)
Dec 02 09:40:34 np0005541914.localdomain podman[242979]: 2025-12-02 09:40:34.779058168 +0000 UTC m=+0.921877494 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec 02 09:40:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 02 09:40:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 02 09:40:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29339 DF PROTO=TCP SPT=45408 DPT=9882 SEQ=3445642487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CBFA30000000001030307) 
Dec 02 09:40:36 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:40:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:40:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:40:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:40:37 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:40:37 np0005541914.localdomain podman[242999]: 2025-12-02 09:40:37.394814812 +0000 UTC m=+0.067425982 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:40:37 np0005541914.localdomain podman[242999]: 2025-12-02 09:40:37.430740103 +0000 UTC m=+0.103351293 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:40:37 np0005541914.localdomain sudo[243023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:40:37 np0005541914.localdomain sudo[243023]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:40:37 np0005541914.localdomain sudo[243023]: pam_unix(sudo:session): session closed for user root
Dec 02 09:40:37 np0005541914.localdomain sudo[243041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 09:40:37 np0005541914.localdomain sudo[243041]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:40:38 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:38 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 02 09:40:38 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5e63dbc6f2c2fad3afb78d8adbb63d1357a03d400c05fbcd9ab42cd01e6497a2-merged.mount: Deactivated successfully.
Dec 02 09:40:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:40:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:40:39 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:40:39 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57767 DF PROTO=TCP SPT=58558 DPT=9100 SEQ=3246799236 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CCD220000000001030307) 
Dec 02 09:40:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:40 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:40 np0005541914.localdomain sudo[243041]: pam_unix(sudo:session): session closed for user root
Dec 02 09:40:40 np0005541914.localdomain sudo[243081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:40:40 np0005541914.localdomain sudo[243081]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:40:40 np0005541914.localdomain sudo[243081]: pam_unix(sudo:session): session closed for user root
Dec 02 09:40:40 np0005541914.localdomain sudo[243099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:40:40 np0005541914.localdomain sudo[243099]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:40:41 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:41 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:41 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:41 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:41 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:41 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:42 np0005541914.localdomain sudo[243099]: pam_unix(sudo:session): session closed for user root
Dec 02 09:40:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e007bb9d0888be9cba9b97125428a4f6aecdcc0d729e1ce5c64249815340e7d9-merged.mount: Deactivated successfully.
Dec 02 09:40:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e33240bb039d460ca33f381563cd1cbbc8c9cff68602bf3e8b26baddcb70d04b-merged.mount: Deactivated successfully.
Dec 02 09:40:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:40:42 np0005541914.localdomain podman[243149]: 2025-12-02 09:40:42.301661673 +0000 UTC m=+0.096654436 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 02 09:40:42 np0005541914.localdomain podman[243149]: 2025-12-02 09:40:42.3127425 +0000 UTC m=+0.107735263 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 09:40:42 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21027 DF PROTO=TCP SPT=43298 DPT=9100 SEQ=1065710807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CD7230000000001030307) 
Dec 02 09:40:43 np0005541914.localdomain systemd[1]: tmp-crun.CAHgbC.mount: Deactivated successfully.
Dec 02 09:40:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:43 np0005541914.localdomain sudo[243167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:40:43 np0005541914.localdomain sudo[243167]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:40:43 np0005541914.localdomain sudo[243167]: pam_unix(sudo:session): session closed for user root
Dec 02 09:40:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 02 09:40:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 02 09:40:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:40:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-becbc927e1a2defd8b98f9313e9ae54e436a645a48c9af865764923e7f3644aa-merged.mount: Deactivated successfully.
Dec 02 09:40:44 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:40:44 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14296 DF PROTO=TCP SPT=33926 DPT=9102 SEQ=3238031798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CE3FE0000000001030307) 
Dec 02 09:40:46 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:46 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 02 09:40:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:47 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:47 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:47 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:47 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:40:47 np0005541914.localdomain sshd[243185]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:40:47 np0005541914.localdomain sshd[243185]: Accepted publickey for zuul from 192.168.122.30 port 41218 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:40:47 np0005541914.localdomain systemd-logind[760]: New session 56 of user zuul.
Dec 02 09:40:47 np0005541914.localdomain systemd[1]: Started Session 56 of User zuul.
Dec 02 09:40:47 np0005541914.localdomain sshd[243185]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:40:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:47 np0005541914.localdomain sudo[243279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-usbylvujftnnteirsundoosskkjcwaop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668447.348917-2802-165342508460312/AnsiballZ_podman_container_info.py
Dec 02 09:40:47 np0005541914.localdomain sudo[243279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:40:47 np0005541914.localdomain python3.9[243281]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec 02 09:40:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 02 09:40:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-1cd4674896f37ed03c180aa0ab9f93ced388cfe5185ce6c19dc1fe143ce7985a-merged.mount: Deactivated successfully.
Dec 02 09:40:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:48 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29341 DF PROTO=TCP SPT=45408 DPT=9882 SEQ=3445642487 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CEF220000000001030307) 
Dec 02 09:40:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 02 09:40:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 02 09:40:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 02 09:40:50 np0005541914.localdomain sudo[243279]: pam_unix(sudo:session): session closed for user root
Dec 02 09:40:51 np0005541914.localdomain sudo[243401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tskpbtxutavkkfbgheucehfhgqzlawtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668450.7762296-2813-252271288758442/AnsiballZ_podman_container_exec.py
Dec 02 09:40:51 np0005541914.localdomain sudo[243401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:40:51 np0005541914.localdomain python3.9[243403]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:40:51 np0005541914.localdomain systemd[1]: Started libpod-conmon-c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.scope.
Dec 02 09:40:51 np0005541914.localdomain podman[243404]: 2025-12-02 09:40:51.388021624 +0000 UTC m=+0.118025476 container exec c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:40:51 np0005541914.localdomain podman[243404]: 2025-12-02 09:40:51.421817572 +0000 UTC m=+0.151821344 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 02 09:40:51 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 02 09:40:51 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-63f5c4d65539870ee2bafb1f7e39854f191dd3f1ae459b319446f5932294db9e-merged.mount: Deactivated successfully.
Dec 02 09:40:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=65235 DF PROTO=TCP SPT=45234 DPT=9101 SEQ=2403759065 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54CFB220000000001030307) 
Dec 02 09:40:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:40:52 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22-merged.mount: Deactivated successfully.
Dec 02 09:40:52 np0005541914.localdomain sudo[243401]: pam_unix(sudo:session): session closed for user root
Dec 02 09:40:52 np0005541914.localdomain sudo[243541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rryrjsfdltmbwvodxtgrawqpglbtcplp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668452.6013386-2821-10074760528326/AnsiballZ_podman_container_exec.py
Dec 02 09:40:52 np0005541914.localdomain sudo[243541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:40:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:40:53 np0005541914.localdomain python3.9[243543]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:40:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60-merged.mount: Deactivated successfully.
Dec 02 09:40:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:40:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:40:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:40:54 np0005541914.localdomain systemd[1]: libpod-conmon-c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.scope: Deactivated successfully.
Dec 02 09:40:54 np0005541914.localdomain systemd[1]: Started libpod-conmon-c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.scope.
Dec 02 09:40:55 np0005541914.localdomain podman[243544]: 2025-12-02 09:40:55.008340183 +0000 UTC m=+1.933017084 container exec c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 02 09:40:55 np0005541914.localdomain podman[243544]: 2025-12-02 09:40:55.041168263 +0000 UTC m=+1.965845194 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller)
Dec 02 09:40:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:40:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9-merged.mount: Deactivated successfully.
Dec 02 09:40:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:40:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:40:57 np0005541914.localdomain sudo[243541]: pam_unix(sudo:session): session closed for user root
Dec 02 09:40:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9686 DF PROTO=TCP SPT=54012 DPT=9101 SEQ=1647305550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D0FA20000000001030307) 
Dec 02 09:40:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa-merged.mount: Deactivated successfully.
Dec 02 09:40:57 np0005541914.localdomain sshd[243645]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:40:57 np0005541914.localdomain sudo[243683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-guvstodomdnoknhmvasibqqrtaeimptv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668457.1884289-2829-21919562896990/AnsiballZ_file.py
Dec 02 09:40:57 np0005541914.localdomain sudo[243683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:40:57 np0005541914.localdomain python3.9[243685]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:40:57 np0005541914.localdomain sudo[243683]: pam_unix(sudo:session): session closed for user root
Dec 02 09:40:57 np0005541914.localdomain sshd[243645]: Invalid user arkserver from 34.78.29.97 port 52200
Dec 02 09:40:57 np0005541914.localdomain sshd[243645]: Received disconnect from 34.78.29.97 port 52200:11: Bye Bye [preauth]
Dec 02 09:40:57 np0005541914.localdomain sshd[243645]: Disconnected from invalid user arkserver 34.78.29.97 port 52200 [preauth]
Dec 02 09:40:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:40:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:58 np0005541914.localdomain sudo[243804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vaahcbnbjfdmsrikpimuymwkkydcqvvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668457.825696-2838-65463555275802/AnsiballZ_podman_container_info.py
Dec 02 09:40:58 np0005541914.localdomain sudo[243804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:40:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:40:58 np0005541914.localdomain systemd[1]: libpod-conmon-c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.scope: Deactivated successfully.
Dec 02 09:40:58 np0005541914.localdomain podman[243755]: 2025-12-02 09:40:58.190215775 +0000 UTC m=+0.260544505 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec 02 09:40:58 np0005541914.localdomain podman[243755]: 2025-12-02 09:40:58.218891751 +0000 UTC m=+0.289220551 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:40:58 np0005541914.localdomain podman[243755]: unhealthy
Dec 02 09:40:58 np0005541914.localdomain python3.9[243806]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec 02 09:40:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:40:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:40:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:41:00 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d-merged.mount: Deactivated successfully.
Dec 02 09:41:00 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:41:00 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 02 09:41:00 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:00 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:00 np0005541914.localdomain podman[243823]: 2025-12-02 09:41:00.157481809 +0000 UTC m=+0.161217022 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:41:00 np0005541914.localdomain podman[243823]: 2025-12-02 09:41:00.166836105 +0000 UTC m=+0.170571318 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:41:00 np0005541914.localdomain podman[243823]: unhealthy
Dec 02 09:41:01 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14300 DF PROTO=TCP SPT=33926 DPT=9102 SEQ=3238031798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D21220000000001030307) 
Dec 02 09:41:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:41:02 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:02 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:41:02 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:41:02 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:41:02 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 02 09:41:02 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:02 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:02 np0005541914.localdomain podman[243844]: 2025-12-02 09:41:02.600234083 +0000 UTC m=+0.582094039 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 02 09:41:02 np0005541914.localdomain podman[243844]: 2025-12-02 09:41:02.639031229 +0000 UTC m=+0.620891145 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 02 09:41:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:41:03.148 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:41:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:41:03.149 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:41:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:41:03.149 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:41:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41494 DF PROTO=TCP SPT=53878 DPT=9882 SEQ=1550781520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D28D10000000001030307) 
Dec 02 09:41:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41495 DF PROTO=TCP SPT=53878 DPT=9882 SEQ=1550781520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D2CE20000000001030307) 
Dec 02 09:41:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:41:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084-merged.mount: Deactivated successfully.
Dec 02 09:41:05 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:41:05 np0005541914.localdomain sudo[243804]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:05 np0005541914.localdomain podman[243869]: 2025-12-02 09:41:05.231792134 +0000 UTC m=+0.264071718 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec 02 09:41:05 np0005541914.localdomain podman[243869]: 2025-12-02 09:41:05.263004176 +0000 UTC m=+0.295283720 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Dec 02 09:41:05 np0005541914.localdomain sudo[243994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oagljrerzzkxytkungmqavsaprmtfrms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668465.3654413-2846-10359740858663/AnsiballZ_podman_container_exec.py
Dec 02 09:41:05 np0005541914.localdomain sudo[243994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:05 np0005541914.localdomain python3.9[243996]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:41:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:06.039 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:41:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:06.039 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:41:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:06.039 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:41:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:06.040 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:41:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:06 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:06.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:41:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41496 DF PROTO=TCP SPT=53878 DPT=9882 SEQ=1550781520 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D34E20000000001030307) 
Dec 02 09:41:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:41:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:07.635 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:41:07 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:41:07 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:07 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:07.661 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:41:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:07.662 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:41:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:07.662 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:41:07 np0005541914.localdomain systemd[1]: Started libpod-conmon-225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.scope.
Dec 02 09:41:07 np0005541914.localdomain podman[243997]: 2025-12-02 09:41:07.693710295 +0000 UTC m=+1.887157790 container exec 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec 02 09:41:07 np0005541914.localdomain podman[243997]: 2025-12-02 09:41:07.698390162 +0000 UTC m=+1.891837647 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 02 09:41:07 np0005541914.localdomain podman[244009]: 2025-12-02 09:41:07.785616378 +0000 UTC m=+0.343880535 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, release=1755695350, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git)
Dec 02 09:41:07 np0005541914.localdomain podman[244009]: 2025-12-02 09:41:07.79176055 +0000 UTC m=+0.350024697 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, config_id=edpm, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:41:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:08 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:08.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:41:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:08.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:41:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:08.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:41:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:08.686 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:41:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:41:09 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:09 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:09 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:41:09 np0005541914.localdomain sudo[243994]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:10 np0005541914.localdomain sudo[244166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zcrkgmgmotovaxbczunuavwtbzzwmecn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668469.8908632-2854-64640278090948/AnsiballZ_podman_container_exec.py
Dec 02 09:41:10 np0005541914.localdomain sudo[244166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:10 np0005541914.localdomain python3.9[244168]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:41:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7488 DF PROTO=TCP SPT=55254 DPT=9100 SEQ=1578231587 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D43220000000001030307) 
Dec 02 09:41:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:10.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:41:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:10.666 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:41:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:10.667 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:41:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:10.667 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:41:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:10.667 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:41:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:10.669 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:41:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:10 np0005541914.localdomain systemd[1]: libpod-conmon-225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.scope: Deactivated successfully.
Dec 02 09:41:10 np0005541914.localdomain podman[244048]: 2025-12-02 09:41:10.905356615 +0000 UTC m=+1.199968996 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:41:10 np0005541914.localdomain systemd[1]: Started libpod-conmon-225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.scope.
Dec 02 09:41:10 np0005541914.localdomain podman[244169]: 2025-12-02 09:41:10.992151078 +0000 UTC m=+0.674273112 container exec 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 09:41:11 np0005541914.localdomain podman[244169]: 2025-12-02 09:41:11.025788742 +0000 UTC m=+0.707910716 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:41:11 np0005541914.localdomain podman[244048]: 2025-12-02 09:41:11.046109181 +0000 UTC m=+1.340721612 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:41:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:11.154 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:41:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:11.347 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:41:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:11.348 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12986MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:41:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:11.348 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:41:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:11.349 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:41:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:11.413 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:41:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:11.413 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:41:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:11.434 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:41:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:11.870 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:41:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:11.877 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:41:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:11.913 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:41:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:11.917 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:41:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:41:11.917 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:41:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34857 DF PROTO=TCP SPT=37094 DPT=9105 SEQ=1296272358 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D4DE20000000001030307) 
Dec 02 09:41:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e7e7fc61a64bc57d1eb8c2a61f7791db4e4a30e6f64eed9bc93c76716d60ed28-merged.mount: Deactivated successfully.
Dec 02 09:41:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-becbc927e1a2defd8b98f9313e9ae54e436a645a48c9af865764923e7f3644aa-merged.mount: Deactivated successfully.
Dec 02 09:41:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-becbc927e1a2defd8b98f9313e9ae54e436a645a48c9af865764923e7f3644aa-merged.mount: Deactivated successfully.
Dec 02 09:41:13 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:41:13 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:13 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:13 np0005541914.localdomain sudo[244166]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:13 np0005541914.localdomain sudo[244361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylfnicfksqubcllrykalkpdvqmspbscl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668473.5721517-2862-164533466137339/AnsiballZ_file.py
Dec 02 09:41:13 np0005541914.localdomain sudo[244361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:14 np0005541914.localdomain python3.9[244363]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:41:14 np0005541914.localdomain sudo[244361]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:14 np0005541914.localdomain sudo[244471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mkbferguhgoelpwymolywudnvowdfykx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668474.1974387-2871-124035011626149/AnsiballZ_podman_container_info.py
Dec 02 09:41:14 np0005541914.localdomain sudo[244471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 02 09:41:14 np0005541914.localdomain python3.9[244473]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec 02 09:41:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 02 09:41:14 np0005541914.localdomain systemd[1]: libpod-conmon-225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.scope: Deactivated successfully.
Dec 02 09:41:14 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:14 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:41:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:15 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39913 DF PROTO=TCP SPT=44272 DPT=9105 SEQ=2658120026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D59220000000001030307) 
Dec 02 09:41:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca-merged.mount: Deactivated successfully.
Dec 02 09:41:17 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:17 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:17 np0005541914.localdomain podman[244485]: 2025-12-02 09:41:17.135600786 +0000 UTC m=+2.144836579 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible)
Dec 02 09:41:17 np0005541914.localdomain podman[244485]: 2025-12-02 09:41:17.148445855 +0000 UTC m=+2.157681628 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:41:17 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:17 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4177 DF PROTO=TCP SPT=55286 DPT=9102 SEQ=3341804910 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D65220000000001030307) 
Dec 02 09:41:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:19 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:19 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:41:19 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:19 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:19 np0005541914.localdomain sudo[244471]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:20 np0005541914.localdomain sudo[244609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uhgxsnimitgdszqqlamjcfryvpvxxwcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668479.8869135-2879-44412662385117/AnsiballZ_podman_container_exec.py
Dec 02 09:41:20 np0005541914.localdomain sudo[244609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:21 np0005541914.localdomain python3.9[244611]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:41:21 np0005541914.localdomain systemd[1]: Started libpod-conmon-2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.scope.
Dec 02 09:41:21 np0005541914.localdomain podman[244612]: 2025-12-02 09:41:21.176175085 +0000 UTC m=+0.112948937 container exec 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Dec 02 09:41:21 np0005541914.localdomain podman[244612]: 2025-12-02 09:41:21.18379436 +0000 UTC m=+0.120568242 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 02 09:41:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-88264f091bd3862d781bfa87f5675ae91e879ca34a7c2bbe081e8ea3bd8603d6-merged.mount: Deactivated successfully.
Dec 02 09:41:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-63f5c4d65539870ee2bafb1f7e39854f191dd3f1ae459b319446f5932294db9e-merged.mount: Deactivated successfully.
Dec 02 09:41:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:21 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9688 DF PROTO=TCP SPT=54012 DPT=9101 SEQ=1647305550 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D6F220000000001030307) 
Dec 02 09:41:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:21 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:21 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:21 np0005541914.localdomain systemd[1]: libpod-conmon-2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.scope: Deactivated successfully.
Dec 02 09:41:22 np0005541914.localdomain sudo[244609]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:22 np0005541914.localdomain sudo[244747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vmisqdepkdvrnzdctdtbkgsdhvlwjgei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668482.1469438-2887-258317099364921/AnsiballZ_podman_container_exec.py
Dec 02 09:41:22 np0005541914.localdomain sudo[244747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:23 np0005541914.localdomain python3.9[244749]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:41:23 np0005541914.localdomain systemd[1]: Started libpod-conmon-2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.scope.
Dec 02 09:41:23 np0005541914.localdomain podman[244750]: 2025-12-02 09:41:23.199371431 +0000 UTC m=+0.104693343 container exec 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 09:41:23 np0005541914.localdomain podman[244750]: 2025-12-02 09:41:23.206913303 +0000 UTC m=+0.112235205 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 02 09:41:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:23 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:24 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:24 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:24 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:24 np0005541914.localdomain systemd[1]: libpod-conmon-2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.scope: Deactivated successfully.
Dec 02 09:41:24 np0005541914.localdomain sudo[244747]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:25 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:25 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:26 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:26 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:26 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:27 np0005541914.localdomain sudo[244887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xuppzvcovgebljrzvgxttaakidwohuyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668486.8985522-2895-123807035724032/AnsiballZ_file.py
Dec 02 09:41:27 np0005541914.localdomain sudo[244887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:27 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47236 DF PROTO=TCP SPT=46310 DPT=9101 SEQ=1154948648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D84A30000000001030307) 
Dec 02 09:41:27 np0005541914.localdomain python3.9[244889]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:41:27 np0005541914.localdomain sudo[244887]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:27 np0005541914.localdomain sudo[244997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nesjnjuzgfuikbnzjieecilgujksvcmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668487.497347-2904-269621062769204/AnsiballZ_podman_container_info.py
Dec 02 09:41:27 np0005541914.localdomain sudo[244997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:27 np0005541914.localdomain python3.9[244999]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec 02 09:41:28 np0005541914.localdomain kernel: overlayfs: lowerdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
Dec 02 09:41:28 np0005541914.localdomain podman[239757]: time="2025-12-02T09:41:28Z" level=error msg="Unable to write json: \"write unix /run/podman/podman.sock->@: write: broken pipe\""
Dec 02 09:41:28 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:36:37 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 4096 "" "Go-http-client/1.1"
Dec 02 09:41:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98-merged.mount: Deactivated successfully.
Dec 02 09:41:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:29 np0005541914.localdomain sudo[244997]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:29 np0005541914.localdomain sudo[245120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfwxwtlbhttfxvgjherjjkylzwgnisnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668489.2805998-2912-14227329401165/AnsiballZ_podman_container_exec.py
Dec 02 09:41:29 np0005541914.localdomain sudo[245120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:29 np0005541914.localdomain python3.9[245122]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:41:29 np0005541914.localdomain systemd[1]: Started libpod-conmon-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.scope.
Dec 02 09:41:29 np0005541914.localdomain podman[245123]: 2025-12-02 09:41:29.887650117 +0000 UTC m=+0.102315112 container exec a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 09:41:29 np0005541914.localdomain podman[245123]: 2025-12-02 09:41:29.916419796 +0000 UTC m=+0.131084771 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 02 09:41:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:41:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4179 DF PROTO=TCP SPT=55286 DPT=9102 SEQ=3341804910 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D95230000000001030307) 
Dec 02 09:41:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084-merged.mount: Deactivated successfully.
Dec 02 09:41:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e02df3188ed09c76117009d9e268cf57a20be20a288a1b1dd5d724192cbba084-merged.mount: Deactivated successfully.
Dec 02 09:41:31 np0005541914.localdomain sudo[245120]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:31 np0005541914.localdomain podman[245154]: 2025-12-02 09:41:31.600002983 +0000 UTC m=+0.604868403 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible)
Dec 02 09:41:31 np0005541914.localdomain podman[245154]: 2025-12-02 09:41:31.632833653 +0000 UTC m=+0.637699103 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute)
Dec 02 09:41:31 np0005541914.localdomain podman[245154]: unhealthy
Dec 02 09:41:32 np0005541914.localdomain sudo[245279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lkxbvdhncvtniyavnhncphseuidflqwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668491.7040527-2920-11821917511752/AnsiballZ_podman_container_exec.py
Dec 02 09:41:32 np0005541914.localdomain sudo[245279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:32 np0005541914.localdomain python3.9[245281]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:41:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:41:33 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48118 DF PROTO=TCP SPT=56140 DPT=9882 SEQ=983431562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54D9E010000000001030307) 
Dec 02 09:41:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:34 np0005541914.localdomain systemd[1]: libpod-conmon-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.scope: Deactivated successfully.
Dec 02 09:41:34 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:41:34 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Failed with result 'exit-code'.
Dec 02 09:41:34 np0005541914.localdomain podman[245293]: 2025-12-02 09:41:34.116199178 +0000 UTC m=+1.120192001 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:41:34 np0005541914.localdomain podman[245293]: 2025-12-02 09:41:34.131287043 +0000 UTC m=+1.135279826 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:41:34 np0005541914.localdomain podman[245293]: unhealthy
Dec 02 09:41:34 np0005541914.localdomain systemd[1]: Started libpod-conmon-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.scope.
Dec 02 09:41:34 np0005541914.localdomain podman[245282]: 2025-12-02 09:41:34.231154622 +0000 UTC m=+1.969176591 container exec a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 09:41:34 np0005541914.localdomain podman[245282]: 2025-12-02 09:41:34.264377582 +0000 UTC m=+2.002399581 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true)
Dec 02 09:41:34 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48119 DF PROTO=TCP SPT=56140 DPT=9882 SEQ=983431562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DA2220000000001030307) 
Dec 02 09:41:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:41:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:36 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:36 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Main process exited, code=exited, status=1/FAILURE
Dec 02 09:41:36 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Failed with result 'exit-code'.
Dec 02 09:41:36 np0005541914.localdomain sudo[245279]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:36 np0005541914.localdomain podman[245332]: 2025-12-02 09:41:36.121537945 +0000 UTC m=+0.396248602 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 09:41:36 np0005541914.localdomain podman[245332]: 2025-12-02 09:41:36.157574869 +0000 UTC m=+0.432285906 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 09:41:36 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48120 DF PROTO=TCP SPT=56140 DPT=9882 SEQ=983431562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DAA220000000001030307) 
Dec 02 09:41:36 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:37 np0005541914.localdomain sudo[245464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bldjeewflxdbkkoqodtvxdrtbfqiwyag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668496.2462945-2928-188530882526989/AnsiballZ_file.py
Dec 02 09:41:37 np0005541914.localdomain sudo[245464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:37 np0005541914.localdomain systemd[1]: libpod-conmon-a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.scope: Deactivated successfully.
Dec 02 09:41:37 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:41:37 np0005541914.localdomain python3.9[245466]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:41:37 np0005541914.localdomain sudo[245464]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:37 np0005541914.localdomain sudo[245574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltqyxzzdzuaudrbdbalvqwkcconyjofa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668497.4558344-2937-214155333377544/AnsiballZ_podman_container_info.py
Dec 02 09:41:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:41:37 np0005541914.localdomain sudo[245574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:37 np0005541914.localdomain podman[245576]: 2025-12-02 09:41:37.789145301 +0000 UTC m=+0.079763157 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:41:37 np0005541914.localdomain podman[245576]: 2025-12-02 09:41:37.795847128 +0000 UTC m=+0.086465044 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:41:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:37 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:41:37 np0005541914.localdomain python3.9[245577]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec 02 09:41:38 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:41:40 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45300 DF PROTO=TCP SPT=43638 DPT=9100 SEQ=2182860206 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DB7230000000001030307) 
Dec 02 09:41:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca-merged.mount: Deactivated successfully.
Dec 02 09:41:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-14ddf7e0c76befb63a54b1348ab4f9ad7d65a2f392d0685c8169eecf2841ddca-merged.mount: Deactivated successfully.
Dec 02 09:41:41 np0005541914.localdomain sudo[245574]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:41 np0005541914.localdomain podman[245607]: 2025-12-02 09:41:41.118862067 +0000 UTC m=+1.121051035 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, version=9.6, name=ubi9-minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 02 09:41:41 np0005541914.localdomain podman[245607]: 2025-12-02 09:41:41.153424258 +0000 UTC m=+1.155613216 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-type=git, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vendor=Red Hat, Inc.)
Dec 02 09:41:41 np0005541914.localdomain sudo[245733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yfofadsupaeyamvsyzznwhfvwikeviqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668501.2815442-2945-232285863613894/AnsiballZ_podman_container_exec.py
Dec 02 09:41:41 np0005541914.localdomain sudo[245733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:41 np0005541914.localdomain python3.9[245735]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:41:43 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15392 DF PROTO=TCP SPT=48506 DPT=9105 SEQ=596598299 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DC3220000000001030307) 
Dec 02 09:41:43 np0005541914.localdomain sudo[245747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:41:43 np0005541914.localdomain sudo[245747]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:41:43 np0005541914.localdomain sudo[245747]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:43 np0005541914.localdomain sudo[245765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:41:43 np0005541914.localdomain sudo[245765]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:41:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:41:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:43 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:43 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:41:43 np0005541914.localdomain podman[245783]: 2025-12-02 09:41:43.884867398 +0000 UTC m=+0.392793611 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:41:43 np0005541914.localdomain podman[245783]: 2025-12-02 09:41:43.889824074 +0000 UTC m=+0.397750277 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:41:43 np0005541914.localdomain systemd[1]: Started libpod-conmon-3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.scope.
Dec 02 09:41:43 np0005541914.localdomain podman[245736]: 2025-12-02 09:41:43.966481958 +0000 UTC m=+2.218340469 container exec 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:41:43 np0005541914.localdomain podman[245736]: 2025-12-02 09:41:43.999039079 +0000 UTC m=+2.250897590 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:41:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:45 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f49a20fc1f5020138578527318ecbf7083cb8c7be7c4014409c81f2cedb36958-merged.mount: Deactivated successfully.
Dec 02 09:41:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54946 DF PROTO=TCP SPT=50860 DPT=9102 SEQ=3557917637 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DCE5F0000000001030307) 
Dec 02 09:41:46 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:41:46 np0005541914.localdomain sudo[245733]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:46 np0005541914.localdomain sudo[245765]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:46 np0005541914.localdomain sudo[245961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nggtkbphibxvxhlycaqsanviefwvbzet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668506.2534459-2953-129364974349461/AnsiballZ_podman_container_exec.py
Dec 02 09:41:46 np0005541914.localdomain sudo[245961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:46 np0005541914.localdomain python3.9[245963]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:41:46 np0005541914.localdomain sudo[245976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:41:46 np0005541914.localdomain sudo[245976]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:41:46 np0005541914.localdomain sudo[245976]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3df44265ee334241877fc90da4598858e128dcd022ea76b8f6ef87bd0d8667ae-merged.mount: Deactivated successfully.
Dec 02 09:41:47 np0005541914.localdomain systemd[1]: libpod-conmon-3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.scope: Deactivated successfully.
Dec 02 09:41:47 np0005541914.localdomain systemd[1]: Started libpod-conmon-3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.scope.
Dec 02 09:41:47 np0005541914.localdomain podman[245964]: 2025-12-02 09:41:47.242331554 +0000 UTC m=+0.560171883 container exec 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:41:47 np0005541914.localdomain podman[245964]: 2025-12-02 09:41:47.27096948 +0000 UTC m=+0.588809819 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:41:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-efd486ab4cd4ff83f3804626a19ad34bc69aaee72db0852b1e52409f0ff23ebf-merged.mount: Deactivated successfully.
Dec 02 09:41:47 np0005541914.localdomain sudo[245961]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c892fd6b7d17c3244e97732d72b83cd3d1a569af20da04450edaf25f54095ce6-merged.mount: Deactivated successfully.
Dec 02 09:41:48 np0005541914.localdomain systemd[1]: libpod-conmon-3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.scope: Deactivated successfully.
Dec 02 09:41:48 np0005541914.localdomain sudo[246118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bcyxpixxvftqlgdsaainzydsroohryhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668508.1188133-2961-82778714755087/AnsiballZ_file.py
Dec 02 09:41:48 np0005541914.localdomain sudo[246118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:48 np0005541914.localdomain python3.9[246120]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:41:48 np0005541914.localdomain sudo[246118]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:49 np0005541914.localdomain sudo[246228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hjwyngsxgthdlaegbpvlvknyknfuvvci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668508.7659173-2970-253776197105523/AnsiballZ_podman_container_info.py
Dec 02 09:41:49 np0005541914.localdomain sudo[246228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54948 DF PROTO=TCP SPT=50860 DPT=9102 SEQ=3557917637 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DDA620000000001030307) 
Dec 02 09:41:49 np0005541914.localdomain python3.9[246230]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec 02 09:41:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:41:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3edfdc699753a1c833a1247909047263cd4d267465db29104ef571eb019dbe34-merged.mount: Deactivated successfully.
Dec 02 09:41:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98-merged.mount: Deactivated successfully.
Dec 02 09:41:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3c63bc0da00de6e07d0e525df0b33132c133b0af89f53ce43169161426eaeb98-merged.mount: Deactivated successfully.
Dec 02 09:41:50 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:36:44 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 140643 "" "Go-http-client/1.1"
Dec 02 09:41:50 np0005541914.localdomain podman_exporter[240012]: ts=2025-12-02T09:41:50.825Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec 02 09:41:50 np0005541914.localdomain podman_exporter[240012]: ts=2025-12-02T09:41:50.826Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec 02 09:41:50 np0005541914.localdomain podman_exporter[240012]: ts=2025-12-02T09:41:50.826Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882
Dec 02 09:41:50 np0005541914.localdomain podman[246244]: 2025-12-02 09:41:50.858756108 +0000 UTC m=+0.858314296 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:41:50 np0005541914.localdomain sudo[246228]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:50 np0005541914.localdomain podman[246244]: 2025-12-02 09:41:50.896259426 +0000 UTC m=+0.895817604 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec 02 09:41:50 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:41:51 np0005541914.localdomain sudo[246368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gkjttveczwhkdyqrfsfrpvclzssfcmzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668511.0511758-2978-179162186077768/AnsiballZ_podman_container_exec.py
Dec 02 09:41:51 np0005541914.localdomain sudo[246368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:51 np0005541914.localdomain python3.9[246370]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:41:51 np0005541914.localdomain systemd[1]: Started libpod-conmon-8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.scope.
Dec 02 09:41:51 np0005541914.localdomain podman[246371]: 2025-12-02 09:41:51.652660543 +0000 UTC m=+0.112323579 container exec 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:41:51 np0005541914.localdomain podman[246371]: 2025-12-02 09:41:51.692603752 +0000 UTC m=+0.152266788 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:41:51 np0005541914.localdomain systemd[1]: libpod-conmon-8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.scope: Deactivated successfully.
Dec 02 09:41:51 np0005541914.localdomain sudo[246368]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:51 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47238 DF PROTO=TCP SPT=46310 DPT=9101 SEQ=1154948648 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DE5230000000001030307) 
Dec 02 09:41:52 np0005541914.localdomain sudo[246510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ptnrarmzoaqgxouyplayyycixfaljocv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668511.9339244-2986-11731248602097/AnsiballZ_podman_container_exec.py
Dec 02 09:41:52 np0005541914.localdomain sudo[246510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:52 np0005541914.localdomain python3.9[246512]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:41:52 np0005541914.localdomain systemd[1]: Started libpod-conmon-8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.scope.
Dec 02 09:41:52 np0005541914.localdomain podman[246513]: 2025-12-02 09:41:52.511268628 +0000 UTC m=+0.084250349 container exec 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:41:52 np0005541914.localdomain podman[246513]: 2025-12-02 09:41:52.54383678 +0000 UTC m=+0.116818431 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:41:52 np0005541914.localdomain sudo[246510]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:52 np0005541914.localdomain systemd[1]: libpod-conmon-8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.scope: Deactivated successfully.
Dec 02 09:41:52 np0005541914.localdomain sudo[246650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lcjnmvpzhsskolpmvjlkmjjqvitwmhpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668512.7180176-2994-147452751206866/AnsiballZ_file.py
Dec 02 09:41:52 np0005541914.localdomain sudo[246650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:53 np0005541914.localdomain python3.9[246652]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:41:53 np0005541914.localdomain sudo[246650]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:53 np0005541914.localdomain sudo[246760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-egvxpuoylwhbdnwkryufdxrpdczoxxcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668513.3826609-3003-208829171328496/AnsiballZ_podman_container_info.py
Dec 02 09:41:53 np0005541914.localdomain sudo[246760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:53 np0005541914.localdomain python3.9[246762]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec 02 09:41:53 np0005541914.localdomain sudo[246760]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:54 np0005541914.localdomain sudo[246883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xjuugayokzebfshomnlgzglgbpkcfugr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668514.0187042-3011-252733028913109/AnsiballZ_podman_container_exec.py
Dec 02 09:41:54 np0005541914.localdomain sudo[246883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:54 np0005541914.localdomain python3.9[246885]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:41:54 np0005541914.localdomain systemd[1]: Started libpod-conmon-bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.scope.
Dec 02 09:41:54 np0005541914.localdomain podman[246886]: 2025-12-02 09:41:54.615705232 +0000 UTC m=+0.100448997 container exec bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec 02 09:41:54 np0005541914.localdomain podman[246886]: 2025-12-02 09:41:54.645208134 +0000 UTC m=+0.129951899 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public)
Dec 02 09:41:54 np0005541914.localdomain sudo[246883]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:54 np0005541914.localdomain systemd[1]: libpod-conmon-bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.scope: Deactivated successfully.
Dec 02 09:41:55 np0005541914.localdomain sudo[247024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bahzyqcujihiulefeosdfiwywrycqdwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668514.831829-3019-32567766751164/AnsiballZ_podman_container_exec.py
Dec 02 09:41:55 np0005541914.localdomain sudo[247024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:55 np0005541914.localdomain python3.9[247026]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec 02 09:41:55 np0005541914.localdomain systemd[1]: Started libpod-conmon-bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.scope.
Dec 02 09:41:55 np0005541914.localdomain podman[247027]: 2025-12-02 09:41:55.396062396 +0000 UTC m=+0.085189507 container exec bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Dec 02 09:41:55 np0005541914.localdomain podman[247027]: 2025-12-02 09:41:55.424227058 +0000 UTC m=+0.113354159 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:41:55 np0005541914.localdomain sudo[247024]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:55 np0005541914.localdomain systemd[1]: libpod-conmon-bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.scope: Deactivated successfully.
Dec 02 09:41:55 np0005541914.localdomain sudo[247165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uspudodtcupcdxhkpoqimgcrizbfrldd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668515.6660554-3027-257423953381002/AnsiballZ_file.py
Dec 02 09:41:55 np0005541914.localdomain sudo[247165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:56 np0005541914.localdomain python3.9[247167]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:41:56 np0005541914.localdomain sudo[247165]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:56 np0005541914.localdomain sudo[247275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xryeqlllnbupmalblsglblroqgwzqmod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668516.4889486-3041-13771645742844/AnsiballZ_file.py
Dec 02 09:41:56 np0005541914.localdomain sudo[247275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:56 np0005541914.localdomain python3.9[247277]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:41:56 np0005541914.localdomain sudo[247275]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:57 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51294 DF PROTO=TCP SPT=54436 DPT=9101 SEQ=3533408136 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54DF9E20000000001030307) 
Dec 02 09:41:57 np0005541914.localdomain sudo[247385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-stcmlayrltggpplulnohujyqxwfkuvjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668517.2658207-3068-175235715447587/AnsiballZ_stat.py
Dec 02 09:41:57 np0005541914.localdomain sudo[247385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:57 np0005541914.localdomain python3.9[247387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:41:57 np0005541914.localdomain sudo[247385]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:57 np0005541914.localdomain sudo[247473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-edqlkdqdrpazmovseqjlknwhxgkamwbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668517.2658207-3068-175235715447587/AnsiballZ_copy.py
Dec 02 09:41:57 np0005541914.localdomain sudo[247473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:58 np0005541914.localdomain python3.9[247475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668517.2658207-3068-175235715447587/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:41:58 np0005541914.localdomain sudo[247473]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:58 np0005541914.localdomain sudo[247583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ycodzebwsaoedmolxuwfflkavvvdlxft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668518.5552814-3116-11057831367368/AnsiballZ_file.py
Dec 02 09:41:58 np0005541914.localdomain sudo[247583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:59 np0005541914.localdomain python3.9[247585]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:41:59 np0005541914.localdomain sudo[247583]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:59 np0005541914.localdomain sudo[247693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iicizkrkvkgzqefnagqkdvekwnocujvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668519.1850128-3140-204978983804612/AnsiballZ_stat.py
Dec 02 09:41:59 np0005541914.localdomain sudo[247693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:41:59 np0005541914.localdomain python3.9[247695]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:41:59 np0005541914.localdomain sudo[247693]: pam_unix(sudo:session): session closed for user root
Dec 02 09:41:59 np0005541914.localdomain sudo[247750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rredbpiwzizqdwwtdrmeoxajjblucysf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668519.1850128-3140-204978983804612/AnsiballZ_file.py
Dec 02 09:41:59 np0005541914.localdomain sudo[247750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:00 np0005541914.localdomain python3.9[247752]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:00 np0005541914.localdomain sudo[247750]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:00 np0005541914.localdomain sudo[247860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-umimybcjliccavsltfpingcghwododsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668520.4162507-3176-146223136821771/AnsiballZ_stat.py
Dec 02 09:42:00 np0005541914.localdomain sudo[247860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:00 np0005541914.localdomain python3.9[247862]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:00 np0005541914.localdomain sudo[247860]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:01 np0005541914.localdomain sudo[247917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tgurbvsnloxsmpxibwfrpakgosohzoiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668520.4162507-3176-146223136821771/AnsiballZ_file.py
Dec 02 09:42:01 np0005541914.localdomain sudo[247917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:01 np0005541914.localdomain python3.9[247919]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.tze3l3lg recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:01 np0005541914.localdomain sudo[247917]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54950 DF PROTO=TCP SPT=50860 DPT=9102 SEQ=3557917637 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E0B230000000001030307) 
Dec 02 09:42:01 np0005541914.localdomain sudo[248027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wtlxkrjhhslyylpziywdzjwrkorldzbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668521.6281831-3212-127901709372427/AnsiballZ_stat.py
Dec 02 09:42:01 np0005541914.localdomain sudo[248027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:02 np0005541914.localdomain python3.9[248029]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:02 np0005541914.localdomain sudo[248027]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:02 np0005541914.localdomain sudo[248084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-arjjnbqitncswbycnniypkkmzjapdjjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668521.6281831-3212-127901709372427/AnsiballZ_file.py
Dec 02 09:42:02 np0005541914.localdomain sudo[248084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:02 np0005541914.localdomain python3.9[248086]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:02 np0005541914.localdomain sudo[248084]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:03 np0005541914.localdomain sudo[248194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-atpkoevpjeagcchuitfkxgtmvgjzjfnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668522.7599387-3251-212823058046043/AnsiballZ_command.py
Dec 02 09:42:03 np0005541914.localdomain sudo[248194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:42:03.149 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:42:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:42:03.150 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:42:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:42:03.150 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:42:03 np0005541914.localdomain python3.9[248196]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:42:03 np0005541914.localdomain sudo[248194]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:03 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62069 DF PROTO=TCP SPT=33636 DPT=9882 SEQ=647009093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E13310000000001030307) 
Dec 02 09:42:03 np0005541914.localdomain sudo[248305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mstrzeqeymarfdppefwcywbcjiclbobl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764668523.4754953-3275-148233228335942/AnsiballZ_edpm_nftables_from_files.py
Dec 02 09:42:03 np0005541914.localdomain sudo[248305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:04 np0005541914.localdomain python3[248307]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 02 09:42:04 np0005541914.localdomain sudo[248305]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:04 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62070 DF PROTO=TCP SPT=33636 DPT=9882 SEQ=647009093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E17220000000001030307) 
Dec 02 09:42:04 np0005541914.localdomain sudo[248415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oehkklxaanuwbphoipqdlxusuawhspou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668524.3794298-3299-232099181918564/AnsiballZ_stat.py
Dec 02 09:42:04 np0005541914.localdomain sudo[248415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:42:04 np0005541914.localdomain podman[248418]: 2025-12-02 09:42:04.794868586 +0000 UTC m=+0.081373844 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Dec 02 09:42:04 np0005541914.localdomain podman[248418]: 2025-12-02 09:42:04.813822736 +0000 UTC m=+0.100327984 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2)
Dec 02 09:42:04 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:42:04 np0005541914.localdomain python3.9[248417]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:04 np0005541914.localdomain sudo[248415]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:05 np0005541914.localdomain sudo[248492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtwjkgumhorfajextsfcqumdslrleorb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668524.3794298-3299-232099181918564/AnsiballZ_file.py
Dec 02 09:42:05 np0005541914.localdomain sudo[248492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:05 np0005541914.localdomain python3.9[248494]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:05 np0005541914.localdomain sudo[248492]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:05 np0005541914.localdomain sudo[248602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-irlpjuzapuijxisxmehndzqekeckoxkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668525.4998777-3335-248646980599604/AnsiballZ_stat.py
Dec 02 09:42:05 np0005541914.localdomain sudo[248602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:06 np0005541914.localdomain python3.9[248604]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:06 np0005541914.localdomain sudo[248602]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:06 np0005541914.localdomain sudo[248659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jcyujdchfyfeqlmatsxsdjtdoexkeqzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668525.4998777-3335-248646980599604/AnsiballZ_file.py
Dec 02 09:42:06 np0005541914.localdomain sudo[248659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:42:06 np0005541914.localdomain podman[248662]: 2025-12-02 09:42:06.366047973 +0000 UTC m=+0.073705658 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:42:06 np0005541914.localdomain podman[248662]: 2025-12-02 09:42:06.374376339 +0000 UTC m=+0.082034024 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:42:06 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:42:06 np0005541914.localdomain sshd[248685]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:42:06 np0005541914.localdomain python3.9[248661]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:06 np0005541914.localdomain sudo[248659]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:06 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62071 DF PROTO=TCP SPT=33636 DPT=9882 SEQ=647009093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E1F220000000001030307) 
Dec 02 09:42:06 np0005541914.localdomain sshd[248685]: Invalid user ark from 34.78.29.97 port 36936
Dec 02 09:42:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:06.918 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:42:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:06.919 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:42:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:06.919 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:42:07 np0005541914.localdomain sshd[248685]: Received disconnect from 34.78.29.97 port 36936:11: Bye Bye [preauth]
Dec 02 09:42:07 np0005541914.localdomain sshd[248685]: Disconnected from invalid user ark 34.78.29.97 port 36936 [preauth]
Dec 02 09:42:07 np0005541914.localdomain sudo[248794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ljvsiagluuwqfngfihjrvnqbytepcrpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668526.9250717-3371-221322877214994/AnsiballZ_stat.py
Dec 02 09:42:07 np0005541914.localdomain sudo[248794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:42:07 np0005541914.localdomain podman[248796]: 2025-12-02 09:42:07.32122737 +0000 UTC m=+0.081525649 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 02 09:42:07 np0005541914.localdomain podman[248796]: 2025-12-02 09:42:07.439119931 +0000 UTC m=+0.199418240 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 09:42:07 np0005541914.localdomain python3.9[248797]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:07 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:42:07 np0005541914.localdomain sudo[248794]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:07.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:42:07 np0005541914.localdomain sudo[248874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vonfysxxtieatpmucaluakkmdcedppqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668526.9250717-3371-221322877214994/AnsiballZ_file.py
Dec 02 09:42:07 np0005541914.localdomain sudo[248874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:07 np0005541914.localdomain python3.9[248876]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:07 np0005541914.localdomain sudo[248874]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:42:08 np0005541914.localdomain podman[248894]: 2025-12-02 09:42:08.077151812 +0000 UTC m=+0.074437109 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 09:42:08 np0005541914.localdomain podman[248894]: 2025-12-02 09:42:08.086927521 +0000 UTC m=+0.084212818 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 09:42:08 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:42:08 np0005541914.localdomain auditd[726]: Audit daemon rotating log files
Dec 02 09:42:08 np0005541914.localdomain sudo[249000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mztaacfbnkpnlgsxvyyqlmieaprqioqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668528.0659955-3407-189436627639210/AnsiballZ_stat.py
Dec 02 09:42:08 np0005541914.localdomain sudo[249000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:08 np0005541914.localdomain python3.9[249002]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:08 np0005541914.localdomain sudo[249000]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:08.637 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:42:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:08.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:42:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:08.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:42:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:08.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:42:08 np0005541914.localdomain sudo[249057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ftfmyjnvrgstbtxqfxdgxaasngoedqex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668528.0659955-3407-189436627639210/AnsiballZ_file.py
Dec 02 09:42:08 np0005541914.localdomain sudo[249057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:09 np0005541914.localdomain python3.9[249059]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:09 np0005541914.localdomain sudo[249057]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:09 np0005541914.localdomain sudo[249167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aorsunexsghssparvdnvjvjylxzfsjwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668529.1991947-3443-40348481829427/AnsiballZ_stat.py
Dec 02 09:42:09 np0005541914.localdomain sudo[249167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:09 np0005541914.localdomain python3.9[249169]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:09 np0005541914.localdomain sudo[249167]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:10 np0005541914.localdomain sudo[249257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bneuvplguyydmpozokplsbiplleoglts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668529.1991947-3443-40348481829427/AnsiballZ_copy.py
Dec 02 09:42:10 np0005541914.localdomain sudo[249257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:10 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=369 DF PROTO=TCP SPT=58560 DPT=9100 SEQ=3669563591 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E2D220000000001030307) 
Dec 02 09:42:10 np0005541914.localdomain python3.9[249259]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764668529.1991947-3443-40348481829427/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:10 np0005541914.localdomain sudo[249257]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:10.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:42:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:10.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:42:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:10.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:42:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:10.660 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:42:10 np0005541914.localdomain sudo[249367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-veqpuwpnumfibanyhpbpunczhtkfnvsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668530.6058447-3487-153721978926324/AnsiballZ_file.py
Dec 02 09:42:10 np0005541914.localdomain rsyslogd[759]: imjournal from <localhost:nova_compute>: begin to drop messages due to rate-limiting
Dec 02 09:42:10 np0005541914.localdomain sudo[249367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:11 np0005541914.localdomain python3.9[249369]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:11 np0005541914.localdomain sudo[249367]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:11 np0005541914.localdomain sudo[249477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tpmxocrafnqdnsdziccwpqfxkqqtkbzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668531.3362904-3512-187770946906472/AnsiballZ_command.py
Dec 02 09:42:11 np0005541914.localdomain sudo[249477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:11 np0005541914.localdomain python3.9[249479]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:42:11 np0005541914.localdomain sudo[249477]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:12 np0005541914.localdomain sudo[249590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uitgtercmmhjmocgupcttvnvfpqlkhwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668532.1967723-3536-11857094367940/AnsiballZ_blockinfile.py
Dec 02 09:42:12 np0005541914.localdomain sudo[249590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:12.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:42:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:12.660 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:42:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:12.661 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:42:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:12.661 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:42:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:12.662 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:42:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:12.662 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:42:12 np0005541914.localdomain python3.9[249592]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:12 np0005541914.localdomain sudo[249590]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:13.132 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:42:13 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18395 DF PROTO=TCP SPT=35304 DPT=9105 SEQ=2429823407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E38620000000001030307) 
Dec 02 09:42:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:13.324 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:42:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:13.327 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=13035MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:42:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:13.327 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:42:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:13.328 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:42:13 np0005541914.localdomain sudo[249722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kptnhcbycnzkrevspqpcxnsltzxzsgyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668533.090152-3563-102727865780561/AnsiballZ_command.py
Dec 02 09:42:13 np0005541914.localdomain sudo[249722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:13.395 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:42:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:13.395 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:42:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:13.421 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:42:13 np0005541914.localdomain python3.9[249724]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:42:13 np0005541914.localdomain sudo[249722]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:13.859 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:42:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:13.864 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:42:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:13.886 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:42:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:13.888 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:42:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:42:13.888 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:42:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:42:14 np0005541914.localdomain sudo[249856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-soukswudrkswqcrfxkepchowzrsbflgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668533.7577226-3587-25662141892523/AnsiballZ_stat.py
Dec 02 09:42:14 np0005541914.localdomain sudo[249856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:14 np0005541914.localdomain podman[249853]: 2025-12-02 09:42:14.080558502 +0000 UTC m=+0.076356745 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, version=9.6, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vendor=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:42:14 np0005541914.localdomain podman[249853]: 2025-12-02 09:42:14.096775068 +0000 UTC m=+0.092573281 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, managed_by=edpm_ansible, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., distribution-scope=public)
Dec 02 09:42:14 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:42:14 np0005541914.localdomain python3.9[249868]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:42:14 np0005541914.localdomain sudo[249856]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:14 np0005541914.localdomain sudo[249985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-knsgzzumklrhprbubgyqdofkuptsslfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668534.418301-3611-183827120590450/AnsiballZ_command.py
Dec 02 09:42:14 np0005541914.localdomain sudo[249985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:14 np0005541914.localdomain python3.9[249987]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:42:14 np0005541914.localdomain sudo[249985]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:15 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:42:15 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:42:15 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:42:15 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:42:15 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:42:15 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:42:15 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:42:15 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:42:15 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:42:15 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:42:15 np0005541914.localdomain sudo[250101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-klcceaiweeubndxdeytbpuxirrtlbadx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668535.11103-3635-32148274067448/AnsiballZ_file.py
Dec 02 09:42:15 np0005541914.localdomain sudo[250101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.433 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:42:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:42:15 np0005541914.localdomain python3.9[250103]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:15 np0005541914.localdomain sudo[250101]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59331 DF PROTO=TCP SPT=41532 DPT=9102 SEQ=2930245784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E438E0000000001030307) 
Dec 02 09:42:16 np0005541914.localdomain sshd[243185]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:42:16 np0005541914.localdomain systemd[1]: session-56.scope: Deactivated successfully.
Dec 02 09:42:16 np0005541914.localdomain systemd[1]: session-56.scope: Consumed 28.975s CPU time.
Dec 02 09:42:16 np0005541914.localdomain systemd-logind[760]: Session 56 logged out. Waiting for processes to exit.
Dec 02 09:42:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:42:16 np0005541914.localdomain systemd-logind[760]: Removed session 56.
Dec 02 09:42:16 np0005541914.localdomain systemd[1]: tmp-crun.Vl14Gg.mount: Deactivated successfully.
Dec 02 09:42:16 np0005541914.localdomain podman[250121]: 2025-12-02 09:42:16.159499398 +0000 UTC m=+0.082523674 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:42:16 np0005541914.localdomain podman[250121]: 2025-12-02 09:42:16.167789782 +0000 UTC m=+0.090814058 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:42:16 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:42:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59333 DF PROTO=TCP SPT=41532 DPT=9102 SEQ=2930245784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E4FA20000000001030307) 
Dec 02 09:42:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:42:21 np0005541914.localdomain systemd[1]: tmp-crun.nBQNq3.mount: Deactivated successfully.
Dec 02 09:42:21 np0005541914.localdomain podman[250144]: 2025-12-02 09:42:21.064734431 +0000 UTC m=+0.068740668 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 02 09:42:21 np0005541914.localdomain podman[250144]: 2025-12-02 09:42:21.080980205 +0000 UTC m=+0.084986512 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:42:21 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:42:22 np0005541914.localdomain sshd[250163]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:42:22 np0005541914.localdomain sshd[250163]: Accepted publickey for zuul from 192.168.122.30 port 52118 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:42:22 np0005541914.localdomain systemd-logind[760]: New session 57 of user zuul.
Dec 02 09:42:22 np0005541914.localdomain systemd[1]: Started Session 57 of User zuul.
Dec 02 09:42:22 np0005541914.localdomain sshd[250163]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:42:22 np0005541914.localdomain sudo[250274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhotpswgstiemmexuinjqzonpvwjoiqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668542.283228-28-90079398250440/AnsiballZ_file.py
Dec 02 09:42:22 np0005541914.localdomain sudo[250274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:22 np0005541914.localdomain python3.9[250276]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:42:23 np0005541914.localdomain sudo[250274]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:23 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59334 DF PROTO=TCP SPT=41532 DPT=9102 SEQ=2930245784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E5F630000000001030307) 
Dec 02 09:42:23 np0005541914.localdomain sudo[250384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exmewbxyqekrcowfvpdwwghwklohnxpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668543.119224-28-260821643794215/AnsiballZ_file.py
Dec 02 09:42:23 np0005541914.localdomain sudo[250384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:23 np0005541914.localdomain python3.9[250386]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:42:23 np0005541914.localdomain sudo[250384]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:23 np0005541914.localdomain sudo[250494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htlbpgankamwmvouedpwkrqtvvhyrwon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668543.6879554-28-62372931954923/AnsiballZ_file.py
Dec 02 09:42:23 np0005541914.localdomain sudo[250494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:24 np0005541914.localdomain python3.9[250496]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:42:24 np0005541914.localdomain sudo[250494]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:25 np0005541914.localdomain python3.9[250604]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:25 np0005541914.localdomain python3.9[250690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_sriov_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668544.4236-106-265836972178509/.source.yaml follow=False _original_basename=neutron_sriov_agent.yaml.j2 checksum=d3942d8476d006ea81540d2a1d96dd9d67f33f5f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:42:26 np0005541914.localdomain python3.9[250798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:26 np0005541914.localdomain python3.9[250884]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668545.8419485-151-123320850058842/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:42:27 np0005541914.localdomain python3.9[250992]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:27 np0005541914.localdomain python3.9[251078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668546.8871336-151-55432476234739/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:42:28 np0005541914.localdomain python3.9[251186]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:28 np0005541914.localdomain python3.9[251272]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668547.962142-151-50993177515833/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=3ab240933b526d33490e89c1e858b1c259da9ae2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:42:30 np0005541914.localdomain python3.9[251380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:30 np0005541914.localdomain python3.9[251466]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668549.7141707-325-54112914272408/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=d6e803f833d8b5f768d3a3c0112defa742aeec55 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:42:31 np0005541914.localdomain python3.9[251574]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:42:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59335 DF PROTO=TCP SPT=41532 DPT=9102 SEQ=2930245784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54E7F220000000001030307) 
Dec 02 09:42:31 np0005541914.localdomain sudo[251684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lbjbqofjvqtlprieppxjfqagtwnhxlow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668551.4641411-397-182420987660088/AnsiballZ_file.py
Dec 02 09:42:31 np0005541914.localdomain sudo[251684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:31 np0005541914.localdomain python3.9[251686]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:42:31 np0005541914.localdomain sudo[251684]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:32 np0005541914.localdomain sudo[251794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ofcgpocqszhisfthvvzjsejxdxigemms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668552.072413-422-128257433340655/AnsiballZ_stat.py
Dec 02 09:42:32 np0005541914.localdomain sudo[251794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:32 np0005541914.localdomain python3.9[251796]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:32 np0005541914.localdomain sudo[251794]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:32 np0005541914.localdomain sudo[251851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqcygrojbyvoyijsafjkpdzjntofhork ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668552.072413-422-128257433340655/AnsiballZ_file.py
Dec 02 09:42:32 np0005541914.localdomain sudo[251851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:32 np0005541914.localdomain python3.9[251853]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:42:32 np0005541914.localdomain sudo[251851]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:33 np0005541914.localdomain sudo[251961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdnipogyhntoqamkamgpquewcdbwciou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668553.0285575-422-157414691228309/AnsiballZ_stat.py
Dec 02 09:42:33 np0005541914.localdomain sudo[251961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:33 np0005541914.localdomain python3.9[251963]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:33 np0005541914.localdomain sudo[251961]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:42:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:42:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:42:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142286 "" "Go-http-client/1.1"
Dec 02 09:42:33 np0005541914.localdomain sudo[252019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lycrghmiiexztiydlgvfdtwzaarqgtya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668553.0285575-422-157414691228309/AnsiballZ_file.py
Dec 02 09:42:33 np0005541914.localdomain sudo[252019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:42:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15866 "" "Go-http-client/1.1"
Dec 02 09:42:33 np0005541914.localdomain python3.9[252021]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:42:33 np0005541914.localdomain sudo[252019]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:34 np0005541914.localdomain sudo[252129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yestykoarqjrutsoktjanlyeqctjgbtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668554.2471657-491-57715254109958/AnsiballZ_file.py
Dec 02 09:42:34 np0005541914.localdomain sudo[252129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:34 np0005541914.localdomain python3.9[252131]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:34 np0005541914.localdomain sudo[252129]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:42:35 np0005541914.localdomain podman[252203]: 2025-12-02 09:42:35.087275261 +0000 UTC m=+0.087394656 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 02 09:42:35 np0005541914.localdomain podman[252203]: 2025-12-02 09:42:35.102862016 +0000 UTC m=+0.102981391 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 02 09:42:35 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:42:35 np0005541914.localdomain sudo[252258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uuavdvsyheigjgwwkgavsvohrwtccdre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668554.8810875-514-67359596756972/AnsiballZ_stat.py
Dec 02 09:42:35 np0005541914.localdomain sudo[252258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:35 np0005541914.localdomain python3.9[252260]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:35 np0005541914.localdomain sudo[252258]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:35 np0005541914.localdomain sudo[252315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tspzaapbohdantmfmqsqfposbsvtprar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668554.8810875-514-67359596756972/AnsiballZ_file.py
Dec 02 09:42:35 np0005541914.localdomain sudo[252315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:35 np0005541914.localdomain python3.9[252317]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:35 np0005541914.localdomain sudo[252315]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:36 np0005541914.localdomain sudo[252425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rthuqonhkfxhucoqcdxdtaurditahkzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668556.0044148-552-21740843810910/AnsiballZ_stat.py
Dec 02 09:42:36 np0005541914.localdomain sudo[252425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:36 np0005541914.localdomain python3.9[252427]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:36 np0005541914.localdomain sudo[252425]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:36 np0005541914.localdomain sudo[252482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lvhiudctpygvpnrcsaqrwrvmdwoxfetz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668556.0044148-552-21740843810910/AnsiballZ_file.py
Dec 02 09:42:36 np0005541914.localdomain sudo[252482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:42:36 np0005541914.localdomain systemd[1]: tmp-crun.R4e4bG.mount: Deactivated successfully.
Dec 02 09:42:36 np0005541914.localdomain podman[252485]: 2025-12-02 09:42:36.848520756 +0000 UTC m=+0.090343446 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:42:36 np0005541914.localdomain podman[252485]: 2025-12-02 09:42:36.859923963 +0000 UTC m=+0.101746723 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:42:36 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:42:36 np0005541914.localdomain python3.9[252484]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:36 np0005541914.localdomain sudo[252482]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:37 np0005541914.localdomain sudo[252614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bvtgyapakfomsrfwssmhtopahyoxpwre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668557.119006-586-41211562107788/AnsiballZ_systemd.py
Dec 02 09:42:37 np0005541914.localdomain sudo[252614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:42:37 np0005541914.localdomain podman[252617]: 2025-12-02 09:42:37.735857493 +0000 UTC m=+0.076680290 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 09:42:37 np0005541914.localdomain podman[252617]: 2025-12-02 09:42:37.791827879 +0000 UTC m=+0.132650676 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 09:42:37 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:42:37 np0005541914.localdomain python3.9[252616]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:42:37 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:42:38 np0005541914.localdomain systemd-rc-local-generator[252663]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:42:38 np0005541914.localdomain systemd-sysv-generator[252667]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:42:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:42:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:42:38 np0005541914.localdomain sudo[252614]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:38 np0005541914.localdomain podman[252679]: 2025-12-02 09:42:38.453076792 +0000 UTC m=+0.097897156 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 09:42:38 np0005541914.localdomain podman[252679]: 2025-12-02 09:42:38.45890662 +0000 UTC m=+0.103726954 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 09:42:38 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:42:38 np0005541914.localdomain sudo[252805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fcnrofumomcoiisaqfclhhaiqvjunern ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668558.602096-611-115856572655351/AnsiballZ_stat.py
Dec 02 09:42:38 np0005541914.localdomain sudo[252805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:39 np0005541914.localdomain python3.9[252807]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:39 np0005541914.localdomain sudo[252805]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:39 np0005541914.localdomain sudo[252862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-celxcnxypabwhkcnmguyuolfouyynrcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668558.602096-611-115856572655351/AnsiballZ_file.py
Dec 02 09:42:39 np0005541914.localdomain sudo[252862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:39 np0005541914.localdomain python3.9[252864]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:39 np0005541914.localdomain sudo[252862]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:39 np0005541914.localdomain sudo[252972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ispwnmwyhnoowuazbeboqmtgtguennie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668559.7099016-647-156197873316467/AnsiballZ_stat.py
Dec 02 09:42:39 np0005541914.localdomain sudo[252972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:40 np0005541914.localdomain python3.9[252974]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:40 np0005541914.localdomain sudo[252972]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:40 np0005541914.localdomain sudo[253029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pohwfzxkgaaoynayplhaslgsghydnqta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668559.7099016-647-156197873316467/AnsiballZ_file.py
Dec 02 09:42:40 np0005541914.localdomain sudo[253029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:40 np0005541914.localdomain python3.9[253031]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:40 np0005541914.localdomain sudo[253029]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:41 np0005541914.localdomain sudo[253139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eszoxbvacmhgiwuxwrpwwszgvostwnwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668560.8817189-682-220050451773712/AnsiballZ_systemd.py
Dec 02 09:42:41 np0005541914.localdomain sudo[253139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:41 np0005541914.localdomain python3.9[253141]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:42:41 np0005541914.localdomain systemd-rc-local-generator[253161]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:42:41 np0005541914.localdomain systemd-sysv-generator[253165]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: Starting Create netns directory...
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 02 09:42:41 np0005541914.localdomain systemd[1]: Finished Create netns directory.
Dec 02 09:42:41 np0005541914.localdomain sudo[253139]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:42:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:42:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:42:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:42:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:42:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:42:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:42:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:42:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:42:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:42:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:42:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:42:42 np0005541914.localdomain sudo[253292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-buvfrhgfhprkafojbvnzmzqtcbeqjdyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668562.3230324-712-139364648497004/AnsiballZ_file.py
Dec 02 09:42:42 np0005541914.localdomain sudo[253292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:42 np0005541914.localdomain python3.9[253294]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:42:42 np0005541914.localdomain sudo[253292]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:43 np0005541914.localdomain sudo[253402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kuibaqusbbiqmptxnbwgwkxkofbwpblp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668563.054349-736-6858309016937/AnsiballZ_stat.py
Dec 02 09:42:43 np0005541914.localdomain sudo[253402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:43 np0005541914.localdomain python3.9[253404]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:42:43 np0005541914.localdomain sudo[253402]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:43 np0005541914.localdomain sudo[253490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vcfusqjkybwviyrszqbmetejrcqzxsze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668563.054349-736-6858309016937/AnsiballZ_copy.py
Dec 02 09:42:43 np0005541914.localdomain sudo[253490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:43 np0005541914.localdomain python3.9[253492]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668563.054349-736-6858309016937/.source.json _original_basename=.221opef9 follow=False checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:43 np0005541914.localdomain sudo[253490]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:44 np0005541914.localdomain sudo[253600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rlxddwhdmzrzqfepqalbfhizruccpgqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668564.2399142-781-10455307140289/AnsiballZ_file.py
Dec 02 09:42:44 np0005541914.localdomain sudo[253600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:42:44 np0005541914.localdomain systemd[1]: tmp-crun.uVm2Ro.mount: Deactivated successfully.
Dec 02 09:42:44 np0005541914.localdomain podman[253603]: 2025-12-02 09:42:44.603802372 +0000 UTC m=+0.093645616 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Dec 02 09:42:44 np0005541914.localdomain podman[253603]: 2025-12-02 09:42:44.619847061 +0000 UTC m=+0.109690285 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter)
Dec 02 09:42:44 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:42:44 np0005541914.localdomain python3.9[253602]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:44 np0005541914.localdomain sudo[253600]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:45 np0005541914.localdomain sudo[253730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-myxsmgktpvqcvlchniqmhrfiaoivtnml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668564.9096022-805-9230151471282/AnsiballZ_stat.py
Dec 02 09:42:45 np0005541914.localdomain sudo[253730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:45 np0005541914.localdomain sudo[253730]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:45 np0005541914.localdomain sudo[253818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qoppzsxzszurtprftglclftbvoxcijzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668564.9096022-805-9230151471282/AnsiballZ_copy.py
Dec 02 09:42:45 np0005541914.localdomain sudo[253818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:45 np0005541914.localdomain sudo[253818]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38351 DF PROTO=TCP SPT=32940 DPT=9102 SEQ=301234046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54EB8BE0000000001030307) 
Dec 02 09:42:46 np0005541914.localdomain sudo[253928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bgrtlkugayxwglbhsooqrcfvurmtqpbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668566.2258327-856-270160419704285/AnsiballZ_container_config_data.py
Dec 02 09:42:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:42:46 np0005541914.localdomain sudo[253928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:46 np0005541914.localdomain podman[253930]: 2025-12-02 09:42:46.719402281 +0000 UTC m=+0.069841730 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:42:46 np0005541914.localdomain podman[253930]: 2025-12-02 09:42:46.727015874 +0000 UTC m=+0.077455293 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:42:46 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:42:46 np0005541914.localdomain python3.9[253931]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False
Dec 02 09:42:46 np0005541914.localdomain sudo[253928]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:46 np0005541914.localdomain sudo[253954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:42:46 np0005541914.localdomain sudo[253954]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:42:46 np0005541914.localdomain sudo[253954]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:47 np0005541914.localdomain sudo[253989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:42:47 np0005541914.localdomain sudo[253989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:42:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38352 DF PROTO=TCP SPT=32940 DPT=9102 SEQ=301234046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54EBCE20000000001030307) 
Dec 02 09:42:47 np0005541914.localdomain sudo[254121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kfzfvhyksxnrkxcbczdijooxiidxrszo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668567.0918133-883-273606096057738/AnsiballZ_container_config_hash.py
Dec 02 09:42:47 np0005541914.localdomain sudo[254121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59336 DF PROTO=TCP SPT=41532 DPT=9102 SEQ=2930245784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54EBF220000000001030307) 
Dec 02 09:42:47 np0005541914.localdomain python3.9[254125]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:42:47 np0005541914.localdomain sudo[254121]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:47 np0005541914.localdomain podman[254187]: 2025-12-02 09:42:47.904611211 +0000 UTC m=+0.081565488 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, RELEASE=main, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vcs-type=git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:42:48 np0005541914.localdomain podman[254187]: 2025-12-02 09:42:48.017980888 +0000 UTC m=+0.194935155 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc.)
Dec 02 09:42:48 np0005541914.localdomain sudo[253989]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:48 np0005541914.localdomain sudo[254328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:42:48 np0005541914.localdomain sudo[254361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-juoxhsmotcisxechwhujlwhyzwgxmwfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668568.0145345-910-61903253939231/AnsiballZ_podman_container_info.py
Dec 02 09:42:48 np0005541914.localdomain sudo[254328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:42:48 np0005541914.localdomain sudo[254361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:48 np0005541914.localdomain sudo[254328]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:48 np0005541914.localdomain sudo[254366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:42:48 np0005541914.localdomain sudo[254366]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:42:48 np0005541914.localdomain python3.9[254365]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 02 09:42:48 np0005541914.localdomain sudo[254361]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38353 DF PROTO=TCP SPT=32940 DPT=9102 SEQ=301234046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54EC4E20000000001030307) 
Dec 02 09:42:49 np0005541914.localdomain sudo[254366]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:49 np0005541914.localdomain sudo[254459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:42:49 np0005541914.localdomain sudo[254459]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:42:49 np0005541914.localdomain sudo[254459]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:50 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54952 DF PROTO=TCP SPT=50860 DPT=9102 SEQ=3557917637 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54EC9220000000001030307) 
Dec 02 09:42:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:42:52 np0005541914.localdomain systemd[1]: tmp-crun.vmoyci.mount: Deactivated successfully.
Dec 02 09:42:52 np0005541914.localdomain podman[254477]: 2025-12-02 09:42:52.096657016 +0000 UTC m=+0.094906245 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 09:42:52 np0005541914.localdomain podman[254477]: 2025-12-02 09:42:52.107516708 +0000 UTC m=+0.105765867 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 09:42:52 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:42:52 np0005541914.localdomain sudo[254586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfeqditsagcoqjlexmjkvqucqupikucz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764668572.1983821-949-224837114707528/AnsiballZ_edpm_container_manage.py
Dec 02 09:42:52 np0005541914.localdomain sudo[254586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:52 np0005541914.localdomain python3[254588]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:42:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38354 DF PROTO=TCP SPT=32940 DPT=9102 SEQ=301234046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54ED4A30000000001030307) 
Dec 02 09:42:53 np0005541914.localdomain podman[254626]: 
Dec 02 09:42:53 np0005541914.localdomain podman[254626]: 2025-12-02 09:42:53.212957735 +0000 UTC m=+0.084814907 container create 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, config_id=neutron_sriov_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 02 09:42:53 np0005541914.localdomain podman[254626]: 2025-12-02 09:42:53.168563702 +0000 UTC m=+0.040420934 image pull  quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Dec 02 09:42:53 np0005541914.localdomain python3[254588]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998 --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Dec 02 09:42:53 np0005541914.localdomain sudo[254586]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:53 np0005541914.localdomain sudo[254768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vznrhccjrsqkhdzaqkfxcqecrgoskpwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668573.5791037-974-153263727422698/AnsiballZ_stat.py
Dec 02 09:42:53 np0005541914.localdomain sudo[254768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:54 np0005541914.localdomain python3.9[254770]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:42:54 np0005541914.localdomain sudo[254768]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:54 np0005541914.localdomain sudo[254880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xepyxxdbgjmlewhpxjyvbjckhhvahqfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668574.3467093-1000-50028988742961/AnsiballZ_file.py
Dec 02 09:42:54 np0005541914.localdomain sudo[254880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:54 np0005541914.localdomain python3.9[254882]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:54 np0005541914.localdomain sudo[254880]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:54 np0005541914.localdomain sudo[254935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtxiktstthodnepvsjhorrtuvaggldis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668574.3467093-1000-50028988742961/AnsiballZ_stat.py
Dec 02 09:42:54 np0005541914.localdomain sudo[254935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:55 np0005541914.localdomain python3.9[254937]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:42:55 np0005541914.localdomain sudo[254935]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:55 np0005541914.localdomain rsyslogd[759]: imjournal: 342 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Dec 02 09:42:55 np0005541914.localdomain sudo[255044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fixxeqbcnaldhtrbbysfwxfavgzulfrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668575.2542298-1000-87350527762936/AnsiballZ_copy.py
Dec 02 09:42:55 np0005541914.localdomain sudo[255044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:55 np0005541914.localdomain python3.9[255046]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668575.2542298-1000-87350527762936/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:42:55 np0005541914.localdomain sudo[255044]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:56 np0005541914.localdomain sudo[255099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nbtxfvrhhvgctkfeuivvjphqimeaufnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668575.2542298-1000-87350527762936/AnsiballZ_systemd.py
Dec 02 09:42:56 np0005541914.localdomain sudo[255099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:56 np0005541914.localdomain python3.9[255101]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:42:56 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:42:56 np0005541914.localdomain systemd-rc-local-generator[255125]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:42:56 np0005541914.localdomain systemd-sysv-generator[255131]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:42:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:42:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:56 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:56 np0005541914.localdomain sudo[255099]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:57 np0005541914.localdomain sudo[255189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fmfwoulimrslugqciwolbjjkldofizyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668575.2542298-1000-87350527762936/AnsiballZ_systemd.py
Dec 02 09:42:57 np0005541914.localdomain sudo[255189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:57 np0005541914.localdomain python3.9[255191]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:42:58 np0005541914.localdomain systemd-rc-local-generator[255221]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:42:58 np0005541914.localdomain systemd-sysv-generator[255224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: Starting neutron_sriov_agent container...
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: tmp-crun.OhfIYa.mount: Deactivated successfully.
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:42:58 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0266898e58d901f00366381468b4c5e50455ea88d0b0487c4e0c34f4cce7ed32/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 02 09:42:58 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0266898e58d901f00366381468b4c5e50455ea88d0b0487c4e0c34f4cce7ed32/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 09:42:58 np0005541914.localdomain podman[255232]: 2025-12-02 09:42:58.86666936 +0000 UTC m=+0.129807070 container init 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 02 09:42:58 np0005541914.localdomain podman[255232]: 2025-12-02 09:42:58.878375617 +0000 UTC m=+0.141513327 container start 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=neutron_sriov_agent)
Dec 02 09:42:58 np0005541914.localdomain podman[255232]: neutron_sriov_agent
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: + sudo -E kolla_set_configs
Dec 02 09:42:58 np0005541914.localdomain systemd[1]: Started neutron_sriov_agent container.
Dec 02 09:42:58 np0005541914.localdomain sudo[255189]: pam_unix(sudo:session): session closed for user root
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Validating config file
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Copying service configuration files
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Writing out command to execute
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: ++ cat /run_command
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: + CMD=/usr/bin/neutron-sriov-nic-agent
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: + ARGS=
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: + sudo kolla_copy_cacerts
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: + [[ ! -n '' ]]
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: + . kolla_extend_start
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: Running command: '/usr/bin/neutron-sriov-nic-agent'
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\'''
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: + umask 0022
Dec 02 09:42:58 np0005541914.localdomain neutron_sriov_agent[255247]: + exec /usr/bin/neutron-sriov-nic-agent
Dec 02 09:42:59 np0005541914.localdomain sudo[255368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-loorxgvgiabronhfmsjwqaixzbyggbmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668579.1132326-1085-84458382039255/AnsiballZ_systemd.py
Dec 02 09:42:59 np0005541914.localdomain sudo[255368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:42:59 np0005541914.localdomain python3.9[255370]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:42:59 np0005541914.localdomain systemd[1]: Stopping neutron_sriov_agent container...
Dec 02 09:42:59 np0005541914.localdomain systemd[1]: libpod-41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e.scope: Deactivated successfully.
Dec 02 09:42:59 np0005541914.localdomain podman[255375]: 2025-12-02 09:42:59.834508962 +0000 UTC m=+0.075023779 container died 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:42:59 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e-userdata-shm.mount: Deactivated successfully.
Dec 02 09:42:59 np0005541914.localdomain podman[255375]: 2025-12-02 09:42:59.928282481 +0000 UTC m=+0.168797248 container cleanup 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 02 09:42:59 np0005541914.localdomain podman[255375]: neutron_sriov_agent
Dec 02 09:42:59 np0005541914.localdomain podman[255386]: 2025-12-02 09:42:59.930367164 +0000 UTC m=+0.092352107 container cleanup 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=neutron_sriov_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:43:00 np0005541914.localdomain podman[255399]: 2025-12-02 09:43:00.009005702 +0000 UTC m=+0.046277922 container cleanup 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=neutron_sriov_agent, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:43:00 np0005541914.localdomain podman[255399]: neutron_sriov_agent
Dec 02 09:43:00 np0005541914.localdomain systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully.
Dec 02 09:43:00 np0005541914.localdomain systemd[1]: Stopped neutron_sriov_agent container.
Dec 02 09:43:00 np0005541914.localdomain systemd[1]: Starting neutron_sriov_agent container...
Dec 02 09:43:00 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:43:00 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0266898e58d901f00366381468b4c5e50455ea88d0b0487c4e0c34f4cce7ed32/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 02 09:43:00 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0266898e58d901f00366381468b4c5e50455ea88d0b0487c4e0c34f4cce7ed32/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 09:43:00 np0005541914.localdomain podman[255412]: 2025-12-02 09:43:00.144509174 +0000 UTC m=+0.103089894 container init 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 02 09:43:00 np0005541914.localdomain podman[255412]: 2025-12-02 09:43:00.150690333 +0000 UTC m=+0.109271043 container start 41dc3059f1c34e522049f1e4cb28ec8edf81261b81f11a012cf946964c34a82e (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '71a45f41a3d5a46d8c81b415ae7d588c3fab880d8e869e7173bec916ef222998'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/config-data/ansible-generated/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:43:00 np0005541914.localdomain podman[255412]: neutron_sriov_agent
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: + sudo -E kolla_set_configs
Dec 02 09:43:00 np0005541914.localdomain systemd[1]: Started neutron_sriov_agent container.
Dec 02 09:43:00 np0005541914.localdomain sudo[255368]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Validating config file
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Copying service configuration files
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Writing out command to execute
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: ++ cat /run_command
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: + CMD=/usr/bin/neutron-sriov-nic-agent
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: + ARGS=
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: + sudo kolla_copy_cacerts
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: Running command: '/usr/bin/neutron-sriov-nic-agent'
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: + [[ ! -n '' ]]
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: + . kolla_extend_start
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\'''
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: + umask 0022
Dec 02 09:43:00 np0005541914.localdomain neutron_sriov_agent[255428]: + exec /usr/bin/neutron-sriov-nic-agent
Dec 02 09:43:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38355 DF PROTO=TCP SPT=32940 DPT=9102 SEQ=301234046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54EF5230000000001030307) 
Dec 02 09:43:01 np0005541914.localdomain sshd[250163]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:43:01 np0005541914.localdomain systemd[1]: session-57.scope: Deactivated successfully.
Dec 02 09:43:01 np0005541914.localdomain systemd[1]: session-57.scope: Consumed 22.873s CPU time.
Dec 02 09:43:01 np0005541914.localdomain systemd-logind[760]: Session 57 logged out. Waiting for processes to exit.
Dec 02 09:43:01 np0005541914.localdomain systemd-logind[760]: Removed session 57.
Dec 02 09:43:02 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 09:43:02.085 2 INFO neutron.common.config [-] Logging enabled!
Dec 02 09:43:02 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 09:43:02.085 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev43
Dec 02 09:43:02 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 09:43:02.086 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}
Dec 02 09:43:02 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 09:43:02.087 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}
Dec 02 09:43:02 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 09:43:02.087 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}
Dec 02 09:43:02 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 09:43:02.087 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}
Dec 02 09:43:02 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 09:43:02.087 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005541914.localdomain'}
Dec 02 09:43:02 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 09:43:02.088 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-3e20ecd4-d4aa-4f65-8974-198bfa2f7280 - - - - - -] RPC agent_id: nic-switch-agent.np0005541914.localdomain
Dec 02 09:43:02 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 09:43:02.096 2 INFO neutron.agent.agent_extensions_manager [None req-3e20ecd4-d4aa-4f65-8974-198bfa2f7280 - - - - - -] Loaded agent extensions: ['qos']
Dec 02 09:43:02 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 09:43:02.097 2 INFO neutron.agent.agent_extensions_manager [None req-3e20ecd4-d4aa-4f65-8974-198bfa2f7280 - - - - - -] Initializing agent extension 'qos'
Dec 02 09:43:02 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 09:43:02.366 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-3e20ecd4-d4aa-4f65-8974-198bfa2f7280 - - - - - -] Agent initialized successfully, now running... 
Dec 02 09:43:02 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 09:43:02.367 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-3e20ecd4-d4aa-4f65-8974-198bfa2f7280 - - - - - -] SRIOV NIC Agent RPC Daemon Started!
Dec 02 09:43:02 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 09:43:02.368 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-3e20ecd4-d4aa-4f65-8974-198bfa2f7280 - - - - - -] Agent out of sync with plugin!
Dec 02 09:43:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:43:03.150 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:43:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:43:03.151 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:43:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:43:03.151 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:43:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:43:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:43:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:43:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144244 "" "Go-http-client/1.1"
Dec 02 09:43:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:43:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16292 "" "Go-http-client/1.1"
Dec 02 09:43:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:43:06 np0005541914.localdomain podman[255461]: 2025-12-02 09:43:06.077558177 +0000 UTC m=+0.085148718 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 09:43:06 np0005541914.localdomain podman[255461]: 2025-12-02 09:43:06.114184464 +0000 UTC m=+0.121775025 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 09:43:06 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:43:06 np0005541914.localdomain sshd[255482]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:43:06 np0005541914.localdomain sshd[255482]: Accepted publickey for zuul from 192.168.122.30 port 43928 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:43:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:43:06 np0005541914.localdomain systemd-logind[760]: New session 58 of user zuul.
Dec 02 09:43:06 np0005541914.localdomain systemd[1]: Started Session 58 of User zuul.
Dec 02 09:43:06 np0005541914.localdomain sshd[255482]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:43:07 np0005541914.localdomain systemd[1]: tmp-crun.YneE5y.mount: Deactivated successfully.
Dec 02 09:43:07 np0005541914.localdomain podman[255485]: 2025-12-02 09:43:07.01462531 +0000 UTC m=+0.077183863 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:43:07 np0005541914.localdomain podman[255485]: 2025-12-02 09:43:07.022869262 +0000 UTC m=+0.085427895 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:43:07 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:43:07 np0005541914.localdomain python3.9[255616]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:43:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:07.890 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:43:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:43:08 np0005541914.localdomain systemd[1]: tmp-crun.IrvXra.mount: Deactivated successfully.
Dec 02 09:43:08 np0005541914.localdomain podman[255621]: 2025-12-02 09:43:08.076748657 +0000 UTC m=+0.081170496 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 09:43:08 np0005541914.localdomain podman[255621]: 2025-12-02 09:43:08.111001461 +0000 UTC m=+0.115423290 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller)
Dec 02 09:43:08 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:43:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:43:08 np0005541914.localdomain podman[255662]: 2025-12-02 09:43:08.571771502 +0000 UTC m=+0.066653044 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 09:43:08 np0005541914.localdomain podman[255662]: 2025-12-02 09:43:08.58187454 +0000 UTC m=+0.076756042 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 02 09:43:08 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:43:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:08.637 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:43:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:08.662 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:43:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:08.662 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:43:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:08.663 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:43:08 np0005541914.localdomain sudo[255771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vlighkcxggtnsbcvwxglyejuextbhmjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668588.7270749-67-250336300315961/AnsiballZ_setup.py
Dec 02 09:43:08 np0005541914.localdomain sudo[255771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:09 np0005541914.localdomain python3.9[255773]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:43:09 np0005541914.localdomain sudo[255771]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:09.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:43:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:09.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:43:09 np0005541914.localdomain sudo[255834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hkgppjtmnfzhuviawgfbljazczywouzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668588.7270749-67-250336300315961/AnsiballZ_dnf.py
Dec 02 09:43:09 np0005541914.localdomain sudo[255834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:10 np0005541914.localdomain python3.9[255836]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:43:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:10.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:43:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:10.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:43:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:11.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:43:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:11.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:43:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:11.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:43:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:11.658 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:43:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:43:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:43:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:43:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:43:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:43:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:43:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:43:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:43:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:43:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:43:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:43:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:43:12 np0005541914.localdomain sshd[255839]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:43:13 np0005541914.localdomain sshd[255839]: Received disconnect from 34.78.29.97 port 43840:11: Bye Bye [preauth]
Dec 02 09:43:13 np0005541914.localdomain sshd[255839]: Disconnected from authenticating user root 34.78.29.97 port 43840 [preauth]
Dec 02 09:43:13 np0005541914.localdomain sudo[255834]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:14 np0005541914.localdomain sudo[255948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-icdsyycdskrgcpxybaqamlxzlautzwsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668593.8309906-103-244977280678321/AnsiballZ_systemd.py
Dec 02 09:43:14 np0005541914.localdomain sudo[255948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:14 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:14.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:43:14 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:14.658 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:43:14 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:14.659 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:43:14 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:14.659 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:43:14 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:14.659 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:43:14 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:14.660 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:43:14 np0005541914.localdomain python3.9[255950]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 02 09:43:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:43:14 np0005541914.localdomain sudo[255948]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:14 np0005541914.localdomain systemd[1]: tmp-crun.8Cb9rO.mount: Deactivated successfully.
Dec 02 09:43:14 np0005541914.localdomain podman[255953]: 2025-12-02 09:43:14.827961657 +0000 UTC m=+0.109085827 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.openshift.expose-services=, container_name=openstack_network_exporter, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.component=ubi9-minimal-container)
Dec 02 09:43:14 np0005541914.localdomain podman[255953]: 2025-12-02 09:43:14.845815522 +0000 UTC m=+0.126939742 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Dec 02 09:43:14 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:43:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:15.121 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:43:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:15.313 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:43:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:15.314 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12965MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:43:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:15.315 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:43:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:15.315 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:43:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:15.375 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:43:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:15.378 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:43:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:15.396 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:43:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:15.819 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:43:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:15.826 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:43:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:15.844 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:43:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:15.846 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:43:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:43:15.846 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:43:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47933 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F2DEF0000000001030307) 
Dec 02 09:43:16 np0005541914.localdomain sudo[256123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grngmpluhavezyqrxwsducgzsebvgsng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668595.9933288-130-196411288843605/AnsiballZ_file.py
Dec 02 09:43:16 np0005541914.localdomain sudo[256123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:16 np0005541914.localdomain python3.9[256125]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:16 np0005541914.localdomain sudo[256123]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:16 np0005541914.localdomain sudo[256233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jscxmdqiuzrusifmneyodwenedipozgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668596.7116787-130-122026798079472/AnsiballZ_file.py
Dec 02 09:43:16 np0005541914.localdomain sudo[256233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:43:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47934 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F31E20000000001030307) 
Dec 02 09:43:17 np0005541914.localdomain systemd[1]: tmp-crun.iRpOCQ.mount: Deactivated successfully.
Dec 02 09:43:17 np0005541914.localdomain podman[256235]: 2025-12-02 09:43:17.074698886 +0000 UTC m=+0.096988858 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:43:17 np0005541914.localdomain podman[256235]: 2025-12-02 09:43:17.08989767 +0000 UTC m=+0.112187582 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:43:17 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:43:17 np0005541914.localdomain python3.9[256236]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:17 np0005541914.localdomain sudo[256233]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:17 np0005541914.localdomain sudo[256366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sppcsjylpfhnjdvqhkckwcrrneyrqvtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668597.309774-130-35474229572992/AnsiballZ_file.py
Dec 02 09:43:17 np0005541914.localdomain sudo[256366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:17 np0005541914.localdomain python3.9[256368]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:17 np0005541914.localdomain sudo[256366]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38356 DF PROTO=TCP SPT=32940 DPT=9102 SEQ=301234046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F35220000000001030307) 
Dec 02 09:43:18 np0005541914.localdomain sudo[256476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aekjtsrikcinnbhhvnifmxcsshwmtowa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668597.8730094-130-101799597189738/AnsiballZ_file.py
Dec 02 09:43:18 np0005541914.localdomain sudo[256476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:18 np0005541914.localdomain python3.9[256478]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:18 np0005541914.localdomain sudo[256476]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:18 np0005541914.localdomain sudo[256586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nyugnbwirqdxypnslzwjrrlzajlfnewq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668598.5109348-130-91120134148480/AnsiballZ_file.py
Dec 02 09:43:18 np0005541914.localdomain sudo[256586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:18 np0005541914.localdomain python3.9[256588]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:18 np0005541914.localdomain sudo[256586]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47935 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F39E20000000001030307) 
Dec 02 09:43:19 np0005541914.localdomain sudo[256696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dgdxqacgzjetoyposqgzqqsgzljsvrfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668599.0439806-130-28047089990278/AnsiballZ_file.py
Dec 02 09:43:19 np0005541914.localdomain sudo[256696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:19 np0005541914.localdomain python3.9[256698]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:19 np0005541914.localdomain sudo[256696]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59337 DF PROTO=TCP SPT=41532 DPT=9102 SEQ=2930245784 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F3D220000000001030307) 
Dec 02 09:43:19 np0005541914.localdomain sudo[256806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvacfryyvttsqfiumtlpoexnjjlebqqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668599.6275692-130-199716093016971/AnsiballZ_file.py
Dec 02 09:43:19 np0005541914.localdomain sudo[256806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:20 np0005541914.localdomain python3.9[256808]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:20 np0005541914.localdomain sudo[256806]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:20 np0005541914.localdomain sudo[256916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rezkyubnalkmcaodhnttzfiychkjrqvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668600.6121984-281-157018551260081/AnsiballZ_stat.py
Dec 02 09:43:20 np0005541914.localdomain sudo[256916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:21 np0005541914.localdomain python3.9[256918]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:21 np0005541914.localdomain sudo[256916]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:21 np0005541914.localdomain sudo[257004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ebwshzctmykqqfykbugzrdqsnlkturxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668600.6121984-281-157018551260081/AnsiballZ_copy.py
Dec 02 09:43:21 np0005541914.localdomain sudo[257004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:21 np0005541914.localdomain python3.9[257006]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668600.6121984-281-157018551260081/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=3ebfe8ab1da42a1c6ca52429f61716009c5fd177 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:21 np0005541914.localdomain sudo[257004]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:22 np0005541914.localdomain python3.9[257114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:22 np0005541914.localdomain python3.9[257200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668602.0612595-326-87680102253525/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:43:23 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47936 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F49A20000000001030307) 
Dec 02 09:43:23 np0005541914.localdomain podman[257201]: 2025-12-02 09:43:23.104857899 +0000 UTC m=+0.086420526 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 02 09:43:23 np0005541914.localdomain podman[257201]: 2025-12-02 09:43:23.123163617 +0000 UTC m=+0.104726274 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 02 09:43:23 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:43:23 np0005541914.localdomain python3.9[257327]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:24 np0005541914.localdomain python3.9[257413]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668603.1174016-326-270681360036164/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:24 np0005541914.localdomain python3.9[257521]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:25 np0005541914.localdomain python3.9[257607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668604.2101135-326-216242466810126/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=f3a803fb4781ff2c03993a2db54cc2ba6fd7b97a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:26 np0005541914.localdomain python3.9[257715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:26 np0005541914.localdomain python3.9[257801]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668605.9944148-499-142329062657303/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=d6e803f833d8b5f768d3a3c0112defa742aeec55 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:27 np0005541914.localdomain python3.9[257909]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:28 np0005541914.localdomain python3.9[257995]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668607.1491325-544-152874252841788/.source follow=False _original_basename=haproxy.j2 checksum=e4288860049c1baef23f6e1bb6c6f91acb5432e7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:28 np0005541914.localdomain python3.9[258103]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:29 np0005541914.localdomain python3.9[258189]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668608.281913-544-34264019356847/.source follow=False _original_basename=dnsmasq.j2 checksum=efc19f376a79c40570368e9c2b979cde746f1ea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:29 np0005541914.localdomain python3.9[258297]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:30 np0005541914.localdomain python3.9[258352]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:30 np0005541914.localdomain python3.9[258460]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47937 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54F69230000000001030307) 
Dec 02 09:43:31 np0005541914.localdomain python3.9[258546]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668610.4371908-631-214664646599854/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:31 np0005541914.localdomain python3.9[258654]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:43:32 np0005541914.localdomain sudo[258764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-drabivjexfspsrtdclszlotamsqyqlst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668612.2346814-737-95934058711089/AnsiballZ_file.py
Dec 02 09:43:32 np0005541914.localdomain sudo[258764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:32 np0005541914.localdomain python3.9[258766]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:32 np0005541914.localdomain sudo[258764]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:33 np0005541914.localdomain sudo[258874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xylggvydypznncgtyozzykdrfrjvohhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668612.8707893-761-173659302197836/AnsiballZ_stat.py
Dec 02 09:43:33 np0005541914.localdomain sudo[258874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:33 np0005541914.localdomain python3.9[258876]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:33 np0005541914.localdomain sudo[258874]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:33 np0005541914.localdomain sudo[258931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pwqazpsiaujcckinaybeiibpceghniiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668612.8707893-761-173659302197836/AnsiballZ_file.py
Dec 02 09:43:33 np0005541914.localdomain sudo[258931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:43:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:43:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:43:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144244 "" "Go-http-client/1.1"
Dec 02 09:43:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:43:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16310 "" "Go-http-client/1.1"
Dec 02 09:43:33 np0005541914.localdomain python3.9[258933]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:33 np0005541914.localdomain sudo[258931]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:34 np0005541914.localdomain sudo[259044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aiycmcikqfjhnwbdthvlvuzhiuhgexlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668613.8583944-761-82112004549833/AnsiballZ_stat.py
Dec 02 09:43:34 np0005541914.localdomain sudo[259044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:34 np0005541914.localdomain python3.9[259046]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:34 np0005541914.localdomain sudo[259044]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:34 np0005541914.localdomain sudo[259101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jvjaocmlbqnbcyzldjtrhbsxyypdfctu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668613.8583944-761-82112004549833/AnsiballZ_file.py
Dec 02 09:43:34 np0005541914.localdomain sudo[259101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:34 np0005541914.localdomain python3.9[259103]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:34 np0005541914.localdomain sudo[259101]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:35 np0005541914.localdomain sudo[259211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dyeksakbjvogjixfxqqzdryytyhfxzxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668614.948086-830-60133303351703/AnsiballZ_file.py
Dec 02 09:43:35 np0005541914.localdomain sudo[259211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:35 np0005541914.localdomain python3.9[259213]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:43:35 np0005541914.localdomain sudo[259211]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:35 np0005541914.localdomain sudo[259321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfmuygejewerdflovwmjkmmokqqoluex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668615.591936-853-36847697685965/AnsiballZ_stat.py
Dec 02 09:43:35 np0005541914.localdomain sudo[259321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:36 np0005541914.localdomain python3.9[259323]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:36 np0005541914.localdomain sudo[259321]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:36 np0005541914.localdomain sudo[259378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hifsnbzsytdixmcemishvrotdwhsdksv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668615.591936-853-36847697685965/AnsiballZ_file.py
Dec 02 09:43:36 np0005541914.localdomain sudo[259378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:43:36 np0005541914.localdomain podman[259380]: 2025-12-02 09:43:36.378364149 +0000 UTC m=+0.082281699 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:43:36 np0005541914.localdomain podman[259380]: 2025-12-02 09:43:36.392837791 +0000 UTC m=+0.096755331 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute)
Dec 02 09:43:36 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:43:36 np0005541914.localdomain python3.9[259381]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:43:36 np0005541914.localdomain sudo[259378]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:36 np0005541914.localdomain sudo[259508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ggesqzpqhwajdjndevbbdhkhzykwsasg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668616.7102208-889-232156863608435/AnsiballZ_stat.py
Dec 02 09:43:36 np0005541914.localdomain sudo[259508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:37 np0005541914.localdomain python3.9[259510]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:37 np0005541914.localdomain sudo[259508]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:37 np0005541914.localdomain sudo[259565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wgsmvdpygyucuuwbonryunlimockonvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668616.7102208-889-232156863608435/AnsiballZ_file.py
Dec 02 09:43:37 np0005541914.localdomain sudo[259565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:43:37 np0005541914.localdomain systemd[1]: tmp-crun.TxbK8f.mount: Deactivated successfully.
Dec 02 09:43:37 np0005541914.localdomain podman[259568]: 2025-12-02 09:43:37.482411384 +0000 UTC m=+0.099687471 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 09:43:37 np0005541914.localdomain podman[259568]: 2025-12-02 09:43:37.493819342 +0000 UTC m=+0.111095449 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 09:43:37 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:43:37 np0005541914.localdomain python3.9[259567]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:43:37 np0005541914.localdomain sudo[259565]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:38 np0005541914.localdomain sudo[259697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xbdecjrczycdtvdczjakwgeyuhsrtyts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668617.7719283-926-112109288801897/AnsiballZ_systemd.py
Dec 02 09:43:38 np0005541914.localdomain sudo[259697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:38 np0005541914.localdomain python3.9[259699]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:43:38 np0005541914.localdomain systemd-rc-local-generator[259734]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:43:38 np0005541914.localdomain systemd-sysv-generator[259738]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:43:38 np0005541914.localdomain podman[259701]: 2025-12-02 09:43:38.466272885 +0000 UTC m=+0.121123135 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 02 09:43:38 np0005541914.localdomain podman[259701]: 2025-12-02 09:43:38.499647052 +0000 UTC m=+0.154497282 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:43:38 np0005541914.localdomain sudo[259697]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:38 np0005541914.localdomain podman[259760]: 2025-12-02 09:43:38.799402253 +0000 UTC m=+0.076452592 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 02 09:43:38 np0005541914.localdomain podman[259760]: 2025-12-02 09:43:38.805592571 +0000 UTC m=+0.082642930 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 09:43:38 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:43:39 np0005541914.localdomain sudo[259885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-glviywenzhncbimjicribptjzakwzkps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668618.9615564-950-97717742979374/AnsiballZ_stat.py
Dec 02 09:43:39 np0005541914.localdomain sudo[259885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:39 np0005541914.localdomain python3.9[259887]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:39 np0005541914.localdomain sudo[259885]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:39 np0005541914.localdomain sudo[259942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pbbhkjvxjgnvyvhrbgnifvxmkibejywz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668618.9615564-950-97717742979374/AnsiballZ_file.py
Dec 02 09:43:39 np0005541914.localdomain sudo[259942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:39 np0005541914.localdomain python3.9[259944]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:43:39 np0005541914.localdomain sudo[259942]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:40 np0005541914.localdomain sudo[260052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hllegzkxmdbrbqyxyknwekklbiwbkftm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668620.1156478-985-74191244518015/AnsiballZ_stat.py
Dec 02 09:43:40 np0005541914.localdomain sudo[260052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:40 np0005541914.localdomain python3.9[260054]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:40 np0005541914.localdomain sudo[260052]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:40 np0005541914.localdomain sudo[260109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hlwemuscjbtaqelgzphikonwdudapprr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668620.1156478-985-74191244518015/AnsiballZ_file.py
Dec 02 09:43:40 np0005541914.localdomain sudo[260109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:40 np0005541914.localdomain python3.9[260111]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:43:40 np0005541914.localdomain sudo[260109]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:41 np0005541914.localdomain sudo[260219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lhgpfxkrxwxkkolezmlyymmnxbhjcjfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668621.2244916-1022-83321330262300/AnsiballZ_systemd.py
Dec 02 09:43:41 np0005541914.localdomain sudo[260219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:41 np0005541914.localdomain python3.9[260221]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:43:41 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:43:41 np0005541914.localdomain systemd-rc-local-generator[260247]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:43:41 np0005541914.localdomain systemd-sysv-generator[260251]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:43:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:43:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:41 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:43:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:43:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:43:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:43:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:43:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:43:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:43:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:43:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:43:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:43:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:43:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:43:42 np0005541914.localdomain systemd[1]: Starting Create netns directory...
Dec 02 09:43:42 np0005541914.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 02 09:43:42 np0005541914.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 02 09:43:42 np0005541914.localdomain systemd[1]: Finished Create netns directory.
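The netns-placeholder unit deployed and started above appears to exist so that /run/netns is present (and mounted shared) before containers that bind-mount it with ':shared' start, e.g. the ovn_metadata_agent and neutron_dhcp_agent volumes elsewhere in this section; the "Starting Create netns directory" / "Finished Create netns directory" pair shows it running once and exiting. A small check, assuming only the standard /proc/self/mountinfo format, reports the propagation flags of that mount:

# Sketch: report whether /run/netns is a mount point and whether it is shared,
# reading the standard /proc/self/mountinfo format.
def mount_propagation(target="/run/netns"):
    with open("/proc/self/mountinfo", encoding="utf-8") as fh:
        for line in fh:
            fields = line.split()
            mount_point = fields[4]
            if mount_point == target:
                # Optional fields sit between field 6 and the "-" separator.
                optional = fields[6:fields.index("-")]
                shared = [f for f in optional if f.startswith("shared:")]
                return shared or ["private"]
    return None  # not a mount point

if __name__ == "__main__":
    print(mount_propagation())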
Dec 02 09:43:42 np0005541914.localdomain sudo[260219]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:42 np0005541914.localdomain sudo[260371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uhmwsobmlujintiwdmqptkpvwbahyxbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668622.669506-1052-108057634571404/AnsiballZ_file.py
Dec 02 09:43:42 np0005541914.localdomain sudo[260371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:43 np0005541914.localdomain python3.9[260373]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:43:43 np0005541914.localdomain sudo[260371]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:43 np0005541914.localdomain sudo[260481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwzkmzrbcdcvpoksuembstftzajyxsly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668623.3722298-1075-212584866539915/AnsiballZ_stat.py
Dec 02 09:43:43 np0005541914.localdomain sudo[260481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:43 np0005541914.localdomain python3.9[260483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:43:43 np0005541914.localdomain sudo[260481]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:44 np0005541914.localdomain sudo[260569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ckdqiuxlqbqeinypzlhxbcaieczgprmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668623.3722298-1075-212584866539915/AnsiballZ_copy.py
Dec 02 09:43:44 np0005541914.localdomain sudo[260569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:44 np0005541914.localdomain python3.9[260571]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764668623.3722298-1075-212584866539915/.source.json _original_basename=.ffpbcxcs follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:43:44 np0005541914.localdomain sudo[260569]: pam_unix(sudo:session): session closed for user root
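The copy task above records checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 for /var/lib/kolla/config_files/neutron_dhcp_agent.json, and the surrounding stat invocations show checksum_algorithm=sha1. That value can be reproduced on the host with plain hashlib; this is a sketch of the check, not the Ansible module itself:

# Sketch: recompute the SHA-1 checksum Ansible reports for a deployed file.
# The path is the one named in the log; plain hashlib, not ansible.legacy.copy.
import hashlib

def sha1_of(path, chunk_size=65536):
    digest = hashlib.sha1()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(sha1_of("/var/lib/kolla/config_files/neutron_dhcp_agent.json"))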
Dec 02 09:43:44 np0005541914.localdomain sudo[260679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scxnhuomwqshkpqnlgkmvoiyjrljamfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668624.7101922-1121-213224100966977/AnsiballZ_file.py
Dec 02 09:43:44 np0005541914.localdomain sudo[260679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:43:45 np0005541914.localdomain podman[260682]: 2025-12-02 09:43:45.069149552 +0000 UTC m=+0.081658601 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal)
Dec 02 09:43:45 np0005541914.localdomain podman[260682]: 2025-12-02 09:43:45.087294576 +0000 UTC m=+0.099803615 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.openshift.expose-services=, io.buildah.version=1.33.7, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 02 09:43:45 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:43:45 np0005541914.localdomain python3.9[260681]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:43:45 np0005541914.localdomain sudo[260679]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:45 np0005541914.localdomain sudo[260809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yokrfxvvibtqqwwdjwrwpobksctoecfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668625.3987496-1144-151231945593909/AnsiballZ_stat.py
Dec 02 09:43:45 np0005541914.localdomain sudo[260809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:45 np0005541914.localdomain sudo[260809]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47914 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FA31E0000000001030307) 
Dec 02 09:43:46 np0005541914.localdomain sudo[260897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mureentmapsfinocvfouyujqwxgdsdbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668625.3987496-1144-151231945593909/AnsiballZ_copy.py
Dec 02 09:43:46 np0005541914.localdomain sudo[260897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:46 np0005541914.localdomain sudo[260897]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47915 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FA7220000000001030307) 
Dec 02 09:43:47 np0005541914.localdomain sudo[261007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhiyprdmgemxuqkxvhnkeapxmjdpuweb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668626.7824347-1195-35415707690606/AnsiballZ_container_config_data.py
Dec 02 09:43:47 np0005541914.localdomain sudo[261007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:43:47 np0005541914.localdomain podman[261010]: 2025-12-02 09:43:47.292060135 +0000 UTC m=+0.083699654 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:43:47 np0005541914.localdomain podman[261010]: 2025-12-02 09:43:47.303781542 +0000 UTC m=+0.095421061 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:43:47 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:43:47 np0005541914.localdomain python3.9[261009]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False
Dec 02 09:43:47 np0005541914.localdomain sudo[261007]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47938 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FA9220000000001030307) 
Dec 02 09:43:48 np0005541914.localdomain sudo[261140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qafqimezxjeloislxosbtvgygfcltxpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668627.6830347-1222-156551961938811/AnsiballZ_container_config_hash.py
Dec 02 09:43:48 np0005541914.localdomain sudo[261140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:48 np0005541914.localdomain python3.9[261142]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:43:48 np0005541914.localdomain sudo[261140]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:48 np0005541914.localdomain sudo[261250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twjrajfgwozmkpxnoodezcxajqilpwav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668628.5628464-1250-196960781564536/AnsiballZ_podman_container_info.py
Dec 02 09:43:48 np0005541914.localdomain sudo[261250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47916 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FAF230000000001030307) 
Dec 02 09:43:49 np0005541914.localdomain python3.9[261252]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 02 09:43:49 np0005541914.localdomain sudo[261250]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:49 np0005541914.localdomain sudo[261296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:43:49 np0005541914.localdomain sudo[261296]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:43:49 np0005541914.localdomain sudo[261296]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:49 np0005541914.localdomain sudo[261314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:43:49 np0005541914.localdomain sudo[261314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:43:50 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38357 DF PROTO=TCP SPT=32940 DPT=9102 SEQ=301234046 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FB3230000000001030307) 
Dec 02 09:43:50 np0005541914.localdomain sudo[261314]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:51 np0005541914.localdomain sudo[261364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:43:51 np0005541914.localdomain sudo[261364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:43:51 np0005541914.localdomain sudo[261364]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47917 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FBEE30000000001030307) 
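The kernel "DROPPING:" entries scattered through this section all describe the same flow: retransmitted TCP SYNs from 192.168.122.10 to 192.168.122.108, destination port 9102, logged and dropped on br-ex. Because the messages are plain key=value pairs, they are easy to aggregate; the sketch below assumes only the field names visible in the log lines above:

# Sketch: parse the key=value fields of the kernel "DROPPING:" lines and count
# distinct (source, destination, destination port) flows.
import re
from collections import Counter

FIELD = re.compile(r"(\w+)=(\S+)")

def parse_drop(line):
    # Returns e.g. {'IN': 'br-ex', 'SRC': '192.168.122.10', 'DPT': '9102', ...}
    return dict(FIELD.findall(line.split("DROPPING:", 1)[1]))

if __name__ == "__main__":
    sample = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 SRC=192.168.122.10 "
              "DST=192.168.122.108 PROTO=TCP SPT=33656 DPT=9102")
    flows = Counter()
    fields = parse_drop(sample)
    flows[(fields["SRC"], fields["DST"], fields["DPT"])] += 1
    print(flows)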
Dec 02 09:43:53 np0005541914.localdomain sudo[261472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdugorihurbjvupnhhakvkiiglsakoru ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764668632.981513-1288-243383370045719/AnsiballZ_edpm_container_manage.py
Dec 02 09:43:53 np0005541914.localdomain sudo[261472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:43:53 np0005541914.localdomain systemd[1]: tmp-crun.sdY4Jc.mount: Deactivated successfully.
Dec 02 09:43:53 np0005541914.localdomain podman[261474]: 2025-12-02 09:43:53.580916517 +0000 UTC m=+0.095637228 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd)
Dec 02 09:43:53 np0005541914.localdomain podman[261474]: 2025-12-02 09:43:53.621856805 +0000 UTC m=+0.136577486 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 02 09:43:53 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:43:53 np0005541914.localdomain python3[261475]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:43:54 np0005541914.localdomain podman[261531]: 
Dec 02 09:43:54 np0005541914.localdomain podman[261531]: 2025-12-02 09:43:54.056029423 +0000 UTC m=+0.091349916 container create 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 02 09:43:54 np0005541914.localdomain podman[261531]: 2025-12-02 09:43:54.008329109 +0000 UTC m=+0.043649592 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 09:43:54 np0005541914.localdomain python3[261475]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9 --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
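The PODMAN-CONTAINER-DEBUG line above shows how ansible-edpm_container_manage expands the config_data dictionary into the podman create command for neutron_dhcp_agent (host network and PID namespaces, privileged, root user, and the listed bind mounts). The mapping for the keys visible in this log can be sketched as follows; it handles only those keys and is not the module's full option set:

# Sketch: build a "podman create" argument list from a config_data dict like the
# one logged above. Only keys visible in the log are handled.
def podman_create_args(name, config):
    args = ["podman", "create", "--name", name]
    for key, value in config.get("environment", {}).items():
        args += ["--env", f"{key}={value}"]
    if "net" in config:
        args += ["--network", config["net"]]
    if "pid" in config:
        args += ["--pid", config["pid"]]
    if config.get("privileged"):
        args += ["--privileged=True"]
    if "user" in config:
        args += ["--user", config["user"]]
    for volume in config.get("volumes", []):
        args += ["--volume", volume]
    args.append(config["image"])
    return args

if __name__ == "__main__":
    config = {
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "image": "quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified",
        "net": "host",
        "pid": "host",
        "privileged": True,
        "user": "root",
        "volumes": ["/run/netns:/run/netns:shared"],
    }
    print(" ".join(podman_create_args("neutron_dhcp_agent", config)))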
Dec 02 09:43:54 np0005541914.localdomain sudo[261472]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:54 np0005541914.localdomain sudo[261676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hedhrrkneppnlotaotbvbrqoqubfrcfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668634.4022782-1312-37152484051599/AnsiballZ_stat.py
Dec 02 09:43:54 np0005541914.localdomain sudo[261676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:54 np0005541914.localdomain python3.9[261678]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:43:54 np0005541914.localdomain sudo[261676]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:55 np0005541914.localdomain sudo[261788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjlfwcqthlddiqxazcoqblzwyminqkjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668635.1396782-1339-232227357136221/AnsiballZ_file.py
Dec 02 09:43:55 np0005541914.localdomain sudo[261788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:55 np0005541914.localdomain python3.9[261790]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:43:55 np0005541914.localdomain sudo[261788]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:55 np0005541914.localdomain sudo[261843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vmfbsfpwhiizcgngsmpncuaflynlnshf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668635.1396782-1339-232227357136221/AnsiballZ_stat.py
Dec 02 09:43:55 np0005541914.localdomain sudo[261843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:55 np0005541914.localdomain python3.9[261845]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:43:55 np0005541914.localdomain sudo[261843]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:56 np0005541914.localdomain sudo[261952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wgolxuqplyduzsthcdcuiblilurqoyet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668636.0255766-1339-31479654538936/AnsiballZ_copy.py
Dec 02 09:43:56 np0005541914.localdomain sudo[261952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:56 np0005541914.localdomain python3.9[261954]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668636.0255766-1339-31479654538936/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:43:56 np0005541914.localdomain sudo[261952]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:56 np0005541914.localdomain sudo[262007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xxvhgzupaaexbeqgnnztmfqyzrtowkoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668636.0255766-1339-31479654538936/AnsiballZ_systemd.py
Dec 02 09:43:56 np0005541914.localdomain sudo[262007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:57 np0005541914.localdomain python3.9[262009]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:43:57 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:43:57 np0005541914.localdomain systemd-rc-local-generator[262034]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:43:57 np0005541914.localdomain systemd-sysv-generator[262038]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:43:57 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:57 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:57 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:57 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:57 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:43:57 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:57 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:57 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:57 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:57 np0005541914.localdomain sudo[262007]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:57 np0005541914.localdomain sudo[262097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rlbnunerrsxsbofdforwkdfnwqkvtmoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668636.0255766-1339-31479654538936/AnsiballZ_systemd.py
Dec 02 09:43:57 np0005541914.localdomain sudo[262097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:43:58 np0005541914.localdomain python3.9[262099]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:43:58 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:43:58 np0005541914.localdomain systemd-sysv-generator[262129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:43:58 np0005541914.localdomain systemd-rc-local-generator[262124]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:43:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:43:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:58 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:43:58 np0005541914.localdomain systemd[1]: Starting neutron_dhcp_agent container...
Dec 02 09:43:58 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:43:58 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c8315f8090c49b053a083b1d9bf117cce35685b6c601c36e78e817b365e9a6/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 02 09:43:58 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c8315f8090c49b053a083b1d9bf117cce35685b6c601c36e78e817b365e9a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 09:43:58 np0005541914.localdomain podman[262140]: 2025-12-02 09:43:58.579242216 +0000 UTC m=+0.122271090 container init 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=neutron_dhcp_agent)
Dec 02 09:43:58 np0005541914.localdomain podman[262140]: 2025-12-02 09:43:58.58689775 +0000 UTC m=+0.129926634 container start 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, org.label-schema.schema-version=1.0, container_name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 09:43:58 np0005541914.localdomain podman[262140]: neutron_dhcp_agent
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: + sudo -E kolla_set_configs
Dec 02 09:43:58 np0005541914.localdomain systemd[1]: Started neutron_dhcp_agent container.
Dec 02 09:43:58 np0005541914.localdomain sudo[262097]: pam_unix(sudo:session): session closed for user root
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Validating config file
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Copying service configuration files
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Writing out command to execute
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/00c6e44062d81bae38ea1c96678049e54d3f27d226bb6f9651816ab13eb94f06
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: ++ cat /run_command
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: + CMD=/usr/bin/neutron-dhcp-agent
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: + ARGS=
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: + sudo kolla_copy_cacerts
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: + [[ ! -n '' ]]
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: + . kolla_extend_start
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: Running command: '/usr/bin/neutron-dhcp-agent'
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: + umask 0022
Dec 02 09:43:58 np0005541914.localdomain neutron_dhcp_agent[262156]: + exec /usr/bin/neutron-dhcp-agent
Dec 02 09:43:59 np0005541914.localdomain neutron_dhcp_agent[262156]: 2025-12-02 09:43:59.898 262160 INFO neutron.common.config [-] Logging enabled!
Dec 02 09:43:59 np0005541914.localdomain neutron_dhcp_agent[262156]: 2025-12-02 09:43:59.898 262160 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43
Dec 02 09:44:00 np0005541914.localdomain neutron_dhcp_agent[262156]: 2025-12-02 09:44:00.278 262160 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Dec 02 09:44:00 np0005541914.localdomain sudo[262279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hqcgjafrpnhuletlnackjrjptwjglkwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668640.0958996-1424-9300890388027/AnsiballZ_systemd.py
Dec 02 09:44:00 np0005541914.localdomain sudo[262279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:44:00 np0005541914.localdomain python3.9[262281]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:44:00 np0005541914.localdomain systemd[1]: Stopping neutron_dhcp_agent container...
Dec 02 09:44:00 np0005541914.localdomain neutron_dhcp_agent[262156]: 2025-12-02 09:44:00.751 262160 INFO neutron.agent.dhcp.agent [None req-9550d0b2-9c7b-4d81-8465-e009d7c59926 - - - - - -] All active networks have been fetched through RPC.
Dec 02 09:44:00 np0005541914.localdomain neutron_dhcp_agent[262156]: 2025-12-02 09:44:00.752 262160 INFO neutron.agent.dhcp.agent [-] Starting network 447a69ac-5cfc-4dee-8482-764b4cafdf04 dhcp configuration
Dec 02 09:44:00 np0005541914.localdomain neutron_dhcp_agent[262156]: 2025-12-02 09:44:00.803 262160 INFO neutron.agent.dhcp.agent [-] Starting network 595e1c9b-709c-41d2-9212-0b18b13291a8 dhcp configuration
Dec 02 09:44:01 np0005541914.localdomain systemd[1]: libpod-6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a.scope: Deactivated successfully.
Dec 02 09:44:01 np0005541914.localdomain systemd[1]: libpod-6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a.scope: Consumed 2.082s CPU time.
Dec 02 09:44:01 np0005541914.localdomain podman[262285]: 2025-12-02 09:44:01.132215542 +0000 UTC m=+0.400933717 container died 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 02 09:44:01 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a-userdata-shm.mount: Deactivated successfully.
Dec 02 09:44:01 np0005541914.localdomain podman[262285]: 2025-12-02 09:44:01.231088286 +0000 UTC m=+0.499806481 container cleanup 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 09:44:01 np0005541914.localdomain podman[262285]: neutron_dhcp_agent
Dec 02 09:44:01 np0005541914.localdomain podman[262326]: error opening file `/run/crun/6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a/status`: No such file or directory
Dec 02 09:44:01 np0005541914.localdomain podman[262315]: 2025-12-02 09:44:01.33385756 +0000 UTC m=+0.067561421 container cleanup 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, container_name=neutron_dhcp_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 09:44:01 np0005541914.localdomain podman[262315]: neutron_dhcp_agent
Dec 02 09:44:01 np0005541914.localdomain systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully.
Dec 02 09:44:01 np0005541914.localdomain systemd[1]: Stopped neutron_dhcp_agent container.
Dec 02 09:44:01 np0005541914.localdomain systemd[1]: Starting neutron_dhcp_agent container...
Dec 02 09:44:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47918 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD54FDF220000000001030307) 
Dec 02 09:44:01 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:44:01 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c8315f8090c49b053a083b1d9bf117cce35685b6c601c36e78e817b365e9a6/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 02 09:44:01 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4c8315f8090c49b053a083b1d9bf117cce35685b6c601c36e78e817b365e9a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 09:44:01 np0005541914.localdomain podman[262328]: 2025-12-02 09:44:01.474423116 +0000 UTC m=+0.111362276 container init 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:44:01 np0005541914.localdomain podman[262328]: 2025-12-02 09:44:01.482763811 +0000 UTC m=+0.119703001 container start 6e40f8e58b6b029d3568c06494fabd7ef9499b42f2517574761d9c80c13d661a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'c25b95e0df6bb53432701a02ce8d2e4f2041c8ed873428b691ab87f6f8e89fc9'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/config-data/ansible-generated/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:44:01 np0005541914.localdomain podman[262328]: neutron_dhcp_agent
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: + sudo -E kolla_set_configs
Dec 02 09:44:01 np0005541914.localdomain systemd[1]: Started neutron_dhcp_agent container.
Dec 02 09:44:01 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:01.521 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 09:44:01 np0005541914.localdomain sudo[262279]: pam_unix(sudo:session): session closed for user root
Dec 02 09:44:01 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:01.522 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 09:44:01 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:01.523 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Validating config file
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Copying service configuration files
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Writing out command to execute
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/adac9f827fd7fb11fb07020ef60ee06a1fede4feab743856dc8fb3266181d934
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/00c6e44062d81bae38ea1c96678049e54d3f27d226bb6f9651816ab13eb94f06
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp/595e1c9b-709c-41d2-9212-0b18b13291a8
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: ++ cat /run_command
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: + CMD=/usr/bin/neutron-dhcp-agent
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: + ARGS=
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: + sudo kolla_copy_cacerts
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: + [[ ! -n '' ]]
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: + . kolla_extend_start
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: Running command: '/usr/bin/neutron-dhcp-agent'
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: + umask 0022
Dec 02 09:44:01 np0005541914.localdomain neutron_dhcp_agent[262343]: + exec /usr/bin/neutron-dhcp-agent
Dec 02 09:44:02 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:02.794 262347 INFO neutron.common.config [-] Logging enabled!
Dec 02 09:44:02 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:02.794 262347 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev43
Dec 02 09:44:02 np0005541914.localdomain sshd[255482]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:44:02 np0005541914.localdomain systemd-logind[760]: Session 58 logged out. Waiting for processes to exit.
Dec 02 09:44:02 np0005541914.localdomain systemd[1]: session-58.scope: Deactivated successfully.
Dec 02 09:44:02 np0005541914.localdomain systemd[1]: session-58.scope: Consumed 34.343s CPU time.
Dec 02 09:44:02 np0005541914.localdomain systemd-logind[760]: Removed session 58.
Dec 02 09:44:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:03.151 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:44:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:03.154 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:44:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:03.154 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:44:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.175 262347 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Dec 02 09:44:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.347 262347 INFO neutron.agent.dhcp.agent [None req-02ca4a4f-d11d-4589-b5a7-96cca885a1c9 - - - - - -] All active networks have been fetched through RPC.
Dec 02 09:44:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.348 262347 INFO neutron.agent.dhcp.agent [-] Starting network 447a69ac-5cfc-4dee-8482-764b4cafdf04 dhcp configuration
Dec 02 09:44:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.397 262347 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpbvcfomb8/privsep.sock']
Dec 02 09:44:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.398 262347 INFO neutron.agent.dhcp.agent [-] Starting network 595e1c9b-709c-41d2-9212-0b18b13291a8 dhcp configuration
Dec 02 09:44:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:44:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:44:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:44:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 146549 "" "Go-http-client/1.1"
Dec 02 09:44:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:44:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16746 "" "Go-http-client/1.1"
Dec 02 09:44:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.975 262347 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 02 09:44:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.888 262380 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 02 09:44:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.893 262380 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 02 09:44:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.896 262380 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 02 09:44:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.897 262380 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262380
Dec 02 09:44:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:03.979 262347 WARNING oslo_privsep.priv_context [-] privsep daemon already running
Dec 02 09:44:04 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:04.491 262347 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpo7j0x9to/privsep.sock']
Dec 02 09:44:05 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:05.080 262347 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 02 09:44:05 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:04.987 262390 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 02 09:44:05 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:04.992 262390 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 02 09:44:05 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:04.996 262390 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 02 09:44:05 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:04.996 262390 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262390
Dec 02 09:44:05 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:05.084 262347 WARNING oslo_privsep.priv_context [-] privsep daemon already running
Dec 02 09:44:05 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:05.980 262347 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp59hk0oy_/privsep.sock']
Dec 02 09:44:06 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:06.609 262347 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 02 09:44:06 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:06.512 262406 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 02 09:44:06 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:06.516 262406 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 02 09:44:06 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:06.518 262406 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 02 09:44:06 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:06.518 262406 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262406
Dec 02 09:44:06 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:06.612 262347 WARNING oslo_privsep.priv_context [-] privsep daemon already running
Dec 02 09:44:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:44:07 np0005541914.localdomain systemd[1]: tmp-crun.PUrL6p.mount: Deactivated successfully.
Dec 02 09:44:07 np0005541914.localdomain podman[262412]: 2025-12-02 09:44:07.073389953 +0000 UTC m=+0.074741881 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Dec 02 09:44:07 np0005541914.localdomain podman[262412]: 2025-12-02 09:44:07.084810721 +0000 UTC m=+0.086162639 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 09:44:07 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:44:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:07.847 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:44:07 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:07.968 262347 INFO neutron.agent.linux.ip_lib [-] Device tap51dc7089-37 cannot be used as it has no MAC address
Dec 02 09:44:07 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:07.969 262347 INFO neutron.agent.linux.ip_lib [-] Device tap71143481-6b cannot be used as it has no MAC address
Dec 02 09:44:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:44:08 np0005541914.localdomain kernel: device tap51dc7089-37 entered promiscuous mode
Dec 02 09:44:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:44:08Z|00025|binding|INFO|Claiming lport 51dc7089-37a2-48fc-93b9-4ba936552f69 for this chassis.
Dec 02 09:44:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:44:08Z|00026|binding|INFO|51dc7089-37a2-48fc-93b9-4ba936552f69: Claiming unknown
Dec 02 09:44:08 np0005541914.localdomain NetworkManager[5967]: <info>  [1764668648.0806] manager: (tap51dc7089-37): new Generic device (/org/freedesktop/NetworkManager/Devices/13)
Dec 02 09:44:08 np0005541914.localdomain podman[262447]: 2025-12-02 09:44:08.121936115 +0000 UTC m=+0.093544453 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:44:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:08.129 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.122.172/24', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-447a69ac-5cfc-4dee-8482-764b4cafdf04', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-447a69ac-5cfc-4dee-8482-764b4cafdf04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cf41aa1c-eb45-46e3-a272-be0018b06eb4, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=51dc7089-37a2-48fc-93b9-4ba936552f69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 09:44:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:08.131 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 51dc7089-37a2-48fc-93b9-4ba936552f69 in datapath 447a69ac-5cfc-4dee-8482-764b4cafdf04 bound to our chassis
Dec 02 09:44:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:44:08Z|00027|ovn_bfd|INFO|Enabled BFD on interface ovn-4d166c-0
Dec 02 09:44:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:44:08Z|00028|ovn_bfd|INFO|Enabled BFD on interface ovn-2587fe-0
Dec 02 09:44:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:44:08Z|00029|ovn_bfd|INFO|Enabled BFD on interface ovn-be95dc-0
Dec 02 09:44:08 np0005541914.localdomain podman[262447]: 2025-12-02 09:44:08.136808509 +0000 UTC m=+0.108416847 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:44:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:08.136 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port e6bceff3-4869-485b-b4ce-6bba322f358c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 02 09:44:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:08.136 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 447a69ac-5cfc-4dee-8482-764b4cafdf04, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 09:44:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:08.137 159483 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpny958rap/privsep.sock']
Dec 02 09:44:08 np0005541914.localdomain kernel: device tap71143481-6b entered promiscuous mode
Dec 02 09:44:08 np0005541914.localdomain NetworkManager[5967]: <info>  [1764668648.1469] manager: (tap71143481-6b): new Generic device (/org/freedesktop/NetworkManager/Devices/14)
Dec 02 09:44:08 np0005541914.localdomain systemd-udevd[262460]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 09:44:08 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:44:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:44:08Z|00030|binding|INFO|Claiming lport 71143481-6bca-4043-aaee-4555f1b73e03 for this chassis.
Dec 02 09:44:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:44:08Z|00031|binding|INFO|71143481-6bca-4043-aaee-4555f1b73e03: Claiming unknown
Dec 02 09:44:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:44:08Z|00032|binding|INFO|Setting lport 51dc7089-37a2-48fc-93b9-4ba936552f69 ovn-installed in OVS
Dec 02 09:44:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:44:08Z|00033|binding|INFO|Setting lport 51dc7089-37a2-48fc-93b9-4ba936552f69 up in Southbound
Dec 02 09:44:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:08.169 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.3/24', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-595e1c9b-709c-41d2-9212-0b18b13291a8', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-595e1c9b-709c-41d2-9212-0b18b13291a8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e2d97696ab6749899bb8ba5ce29a3de2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23d69817-a35d-4528-880f-f329bfbd969c, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=71143481-6bca-4043-aaee-4555f1b73e03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 09:44:08 np0005541914.localdomain virtnodedevd[229262]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 02 09:44:08 np0005541914.localdomain virtnodedevd[229262]: hostname: np0005541914.localdomain
Dec 02 09:44:08 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap71143481-6b: No such device
Dec 02 09:44:08 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap71143481-6b: No such device
Dec 02 09:44:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:44:08Z|00034|binding|INFO|Setting lport 71143481-6bca-4043-aaee-4555f1b73e03 ovn-installed in OVS
Dec 02 09:44:08 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap71143481-6b: No such device
Dec 02 09:44:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:44:08Z|00035|binding|INFO|Setting lport 71143481-6bca-4043-aaee-4555f1b73e03 up in Southbound
Dec 02 09:44:08 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap71143481-6b: No such device
Dec 02 09:44:08 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap71143481-6b: No such device
Dec 02 09:44:08 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap71143481-6b: No such device
Dec 02 09:44:08 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap71143481-6b: No such device
Dec 02 09:44:08 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap71143481-6b: No such device
Dec 02 09:44:08 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:08.642 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:44:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:08.743 159483 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 02 09:44:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:08.744 159483 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpny958rap/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 02 09:44:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:08.620 262550 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 02 09:44:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:08.627 262550 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 02 09:44:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:08.630 262550 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 02 09:44:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:08.631 262550 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262550
Dec 02 09:44:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:08.747 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[572c9a2a-1ade-4d2f-b537-30273a0dade6]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 09:44:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:44:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:44:09 np0005541914.localdomain podman[262556]: 2025-12-02 09:44:09.122789574 +0000 UTC m=+0.118777953 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 09:44:09 np0005541914.localdomain podman[262556]: 2025-12-02 09:44:09.168554479 +0000 UTC m=+0.164542908 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 02 09:44:09 np0005541914.localdomain podman[262555]: 2025-12-02 09:44:09.178307856 +0000 UTC m=+0.173758849 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 09:44:09 np0005541914.localdomain podman[262555]: 2025-12-02 09:44:09.186984731 +0000 UTC m=+0.182435744 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 02 09:44:09 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:44:09 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:44:09 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:09.276 262550 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:44:09 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:09.276 262550 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:44:09 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:09.276 262550 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:44:09 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:09.374 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[cfed8121-9ccf-471c-8dbf-08f97dc9d29d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 09:44:09 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:09.375 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 71143481-6bca-4043-aaee-4555f1b73e03 in datapath 595e1c9b-709c-41d2-9212-0b18b13291a8 unbound from our chassis
Dec 02 09:44:09 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:09.376 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port 91c85177-b7d9-4980-b09c-b22e92e8c189 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 02 09:44:09 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:09.376 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 595e1c9b-709c-41d2-9212-0b18b13291a8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 09:44:09 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:44:09.377 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a44c8467-29fe-46c0-b655-befa8e53ee02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 09:44:09 np0005541914.localdomain podman[262637]: 
Dec 02 09:44:09 np0005541914.localdomain podman[262637]: 2025-12-02 09:44:09.582308986 +0000 UTC m=+0.080208447 container create 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 02 09:44:09 np0005541914.localdomain systemd[1]: Started libpod-conmon-69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8.scope.
Dec 02 09:44:09 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:44:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c6b94ed714f6eb0d65e561b3bb1abfb0a8d5b609cce5e65500be00bd7ad6a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 09:44:09 np0005541914.localdomain podman[262637]: 2025-12-02 09:44:09.639774528 +0000 UTC m=+0.137673989 container init 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:44:09 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:09.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:44:09 np0005541914.localdomain podman[262637]: 2025-12-02 09:44:09.548821105 +0000 UTC m=+0.046720616 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 09:44:09 np0005541914.localdomain podman[262637]: 2025-12-02 09:44:09.649505415 +0000 UTC m=+0.147404866 container start 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 02 09:44:09 np0005541914.localdomain dnsmasq[262677]: started, version 2.85 cachesize 150
Dec 02 09:44:09 np0005541914.localdomain dnsmasq[262677]: DNS service limited to local subnets
Dec 02 09:44:09 np0005541914.localdomain dnsmasq[262677]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 09:44:09 np0005541914.localdomain dnsmasq[262677]: warning: no upstream servers configured
Dec 02 09:44:09 np0005541914.localdomain dnsmasq-dhcp[262677]: DHCP, static leases only on 192.168.122.0, lease time 1d
Dec 02 09:44:09 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 09:44:09 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 09:44:09 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 09:44:09 np0005541914.localdomain podman[262657]: 2025-12-02 09:44:09.601822821 +0000 UTC m=+0.055505314 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 09:44:09 np0005541914.localdomain podman[262657]: 2025-12-02 09:44:09.701983814 +0000 UTC m=+0.155666277 container create eb81db119aab864a853934e55954a079d18831bd89ea944835e873b52aa7805a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-595e1c9b-709c-41d2-9212-0b18b13291a8, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 02 09:44:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:09.703 262347 INFO neutron.agent.dhcp.agent [None req-1170d6cb-19dd-407d-a976-0819479f745d - - - - - -] Finished network 447a69ac-5cfc-4dee-8482-764b4cafdf04 dhcp configuration
Dec 02 09:44:09 np0005541914.localdomain systemd[1]: Started libpod-conmon-eb81db119aab864a853934e55954a079d18831bd89ea944835e873b52aa7805a.scope.
Dec 02 09:44:09 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:44:09 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75be5ee0573a5f22e7c63c33e70afcb1e8e8310b3fc827f5b5a982381468071b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 09:44:09 np0005541914.localdomain podman[262657]: 2025-12-02 09:44:09.787845373 +0000 UTC m=+0.241527876 container init eb81db119aab864a853934e55954a079d18831bd89ea944835e873b52aa7805a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-595e1c9b-709c-41d2-9212-0b18b13291a8, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:44:09 np0005541914.localdomain dnsmasq[262683]: started, version 2.85 cachesize 150
Dec 02 09:44:09 np0005541914.localdomain dnsmasq[262683]: DNS service limited to local subnets
Dec 02 09:44:09 np0005541914.localdomain dnsmasq[262683]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 09:44:09 np0005541914.localdomain dnsmasq[262683]: warning: no upstream servers configured
Dec 02 09:44:09 np0005541914.localdomain dnsmasq-dhcp[262683]: DHCP, static leases only on 192.168.0.0, lease time 1d
Dec 02 09:44:09 np0005541914.localdomain dnsmasq[262683]: read /var/lib/neutron/dhcp/595e1c9b-709c-41d2-9212-0b18b13291a8/addn_hosts - 2 addresses
Dec 02 09:44:09 np0005541914.localdomain dnsmasq-dhcp[262683]: read /var/lib/neutron/dhcp/595e1c9b-709c-41d2-9212-0b18b13291a8/host
Dec 02 09:44:09 np0005541914.localdomain dnsmasq-dhcp[262683]: read /var/lib/neutron/dhcp/595e1c9b-709c-41d2-9212-0b18b13291a8/opts
Dec 02 09:44:09 np0005541914.localdomain podman[262657]: 2025-12-02 09:44:09.923618142 +0000 UTC m=+0.377300585 container start eb81db119aab864a853934e55954a079d18831bd89ea944835e873b52aa7805a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-595e1c9b-709c-41d2-9212-0b18b13291a8, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 02 09:44:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:09.966 262347 INFO neutron.agent.dhcp.agent [None req-7bb3c70d-be97-4642-9c3d-ce8b509ffd27 - - - - - -] DHCP configuration for ports {'51dc7089-37a2-48fc-93b9-4ba936552f69', '814edf37-348d-4c72-93ca-d397ec86c224', '9e501a82-0cca-4b7c-94e7-c6ee2e9d1a23', '9d9215ec-7a9b-4874-b1f4-9e2052b2af35'} is completed
Dec 02 09:44:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:09.989 262347 INFO neutron.agent.dhcp.agent [None req-95195c8e-baa2-412b-81ef-ca5d4173c1a3 - - - - - -] Finished network 595e1c9b-709c-41d2-9212-0b18b13291a8 dhcp configuration
Dec 02 09:44:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:09.989 262347 INFO neutron.agent.dhcp.agent [None req-02ca4a4f-d11d-4589-b5a7-96cca885a1c9 - - - - - -] Synchronizing state complete
Dec 02 09:44:10 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:10.055 262347 INFO neutron.agent.dhcp.agent [None req-02ca4a4f-d11d-4589-b5a7-96cca885a1c9 - - - - - -] DHCP agent started
Dec 02 09:44:10 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 09:44:10.440 262347 INFO neutron.agent.dhcp.agent [None req-0948eb2d-0fb9-4553-b15d-d99f5e3d9d5e - - - - - -] DHCP configuration for ports {'00628954-c581-410a-8676-a93c861b87a0', '814edf37-348d-4c72-93ca-d397ec86c224', '71143481-6bca-4043-aaee-4555f1b73e03', '9d9215ec-7a9b-4874-b1f4-9e2052b2af35', '9e501a82-0cca-4b7c-94e7-c6ee2e9d1a23', '51dc7089-37a2-48fc-93b9-4ba936552f69', 'd6e7da3f-8574-49e0-8ba1-2f642b3cec92', '4a318f6a-b3c1-4690-8246-f7d046ccd64a'} is completed
Dec 02 09:44:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:10.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:44:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:10.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:44:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:10.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:44:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:11.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:44:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:11.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:44:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:44:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:44:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:44:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:44:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:44:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:44:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:44:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:44:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:44:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:44:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:13.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:44:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:13.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:44:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:13.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:44:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:13.653 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:44:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:44:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:44:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46466 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD550184F0000000001030307) 
Dec 02 09:44:16 np0005541914.localdomain podman[262684]: 2025-12-02 09:44:16.128780552 +0000 UTC m=+0.132388138 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, release=1755695350, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 09:44:16 np0005541914.localdomain podman[262684]: 2025-12-02 09:44:16.14017705 +0000 UTC m=+0.143784586 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, build-date=2025-08-20T13:12:41, architecture=x86_64, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:44:16 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:44:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:16.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:44:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:16.689 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:44:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:16.689 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:44:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:16.689 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:44:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:16.689 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:44:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:16.690 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:44:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46467 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5501C620000000001030307) 
Dec 02 09:44:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:17.087 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:44:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:17.215 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:44:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:17.216 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12502MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:44:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:17.217 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:44:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:17.217 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:44:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:17.270 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:44:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:17.271 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:44:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:17.294 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:44:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:17.724 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:44:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:17.729 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:44:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47919 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5501F220000000001030307) 
Dec 02 09:44:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:44:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:17.997 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:44:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:18.001 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:44:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:44:18.002 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:44:18 np0005541914.localdomain systemd[1]: tmp-crun.NHGSYU.mount: Deactivated successfully.
Dec 02 09:44:18 np0005541914.localdomain podman[262748]: 2025-12-02 09:44:18.095070038 +0000 UTC m=+0.095916345 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:44:18 np0005541914.localdomain podman[262748]: 2025-12-02 09:44:18.103428053 +0000 UTC m=+0.104274320 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:44:18 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:44:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46468 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55024620000000001030307) 
Dec 02 09:44:19 np0005541914.localdomain sshd[262770]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:44:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47939 DF PROTO=TCP SPT=44042 DPT=9102 SEQ=4103372198 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55027220000000001030307) 
Dec 02 09:44:20 np0005541914.localdomain sshd[262770]: Invalid user student from 34.78.29.97 port 34976
Dec 02 09:44:20 np0005541914.localdomain sshd[262770]: Received disconnect from 34.78.29.97 port 34976:11: Bye Bye [preauth]
Dec 02 09:44:20 np0005541914.localdomain sshd[262770]: Disconnected from invalid user student 34.78.29.97 port 34976 [preauth]
Dec 02 09:44:23 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46469 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55034230000000001030307) 
Dec 02 09:44:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:44:24 np0005541914.localdomain podman[262772]: 2025-12-02 09:44:24.076760006 +0000 UTC m=+0.077475553 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 09:44:24 np0005541914.localdomain podman[262772]: 2025-12-02 09:44:24.091956949 +0000 UTC m=+0.092672476 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 09:44:24 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:44:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46470 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55055220000000001030307) 
Dec 02 09:44:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:44:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:44:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:44:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:44:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:44:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17695 "" "Go-http-client/1.1"
Dec 02 09:44:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:44:38 np0005541914.localdomain systemd[1]: tmp-crun.gLdXML.mount: Deactivated successfully.
Dec 02 09:44:38 np0005541914.localdomain podman[262791]: 2025-12-02 09:44:38.078330286 +0000 UTC m=+0.084676557 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 02 09:44:38 np0005541914.localdomain podman[262791]: 2025-12-02 09:44:38.116863777 +0000 UTC m=+0.123210058 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec 02 09:44:38 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:44:38 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:44:38Z|00036|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Dec 02 09:44:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:44:38 np0005541914.localdomain systemd[1]: tmp-crun.Zmn8cP.mount: Deactivated successfully.
Dec 02 09:44:38 np0005541914.localdomain podman[262810]: 2025-12-02 09:44:38.638017776 +0000 UTC m=+0.143916877 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:44:38 np0005541914.localdomain podman[262810]: 2025-12-02 09:44:38.691948087 +0000 UTC m=+0.197847138 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:44:38 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:44:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:44:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:44:40 np0005541914.localdomain podman[262833]: 2025-12-02 09:44:40.070721108 +0000 UTC m=+0.076187788 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 09:44:40 np0005541914.localdomain podman[262834]: 2025-12-02 09:44:40.129494435 +0000 UTC m=+0.128824219 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 02 09:44:40 np0005541914.localdomain podman[262833]: 2025-12-02 09:44:40.156879838 +0000 UTC m=+0.162346518 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 09:44:40 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:44:40 np0005541914.localdomain podman[262834]: 2025-12-02 09:44:40.22897869 +0000 UTC m=+0.228308504 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 02 09:44:40 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:44:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:44:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:44:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:44:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:44:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:44:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:44:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:44:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:44:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:44:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:44:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:44:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:44:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48898 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5508D7E0000000001030307) 
Dec 02 09:44:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:44:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48899 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55091A20000000001030307) 
Dec 02 09:44:47 np0005541914.localdomain podman[262876]: 2025-12-02 09:44:47.07468385 +0000 UTC m=+0.076065824 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6)
Dec 02 09:44:47 np0005541914.localdomain podman[262876]: 2025-12-02 09:44:47.113097728 +0000 UTC m=+0.114479782 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.6, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9)
Dec 02 09:44:47 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:44:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46471 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55095230000000001030307) 
Dec 02 09:44:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:44:49 np0005541914.localdomain podman[262897]: 2025-12-02 09:44:49.084162291 +0000 UTC m=+0.084496331 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:44:49 np0005541914.localdomain podman[262897]: 2025-12-02 09:44:49.099833157 +0000 UTC m=+0.100167187 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:44:49 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:44:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48900 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55099A30000000001030307) 
Dec 02 09:44:50 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47920 DF PROTO=TCP SPT=33656 DPT=9102 SEQ=3142268360 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5509D220000000001030307) 
Dec 02 09:44:51 np0005541914.localdomain sudo[262921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:44:51 np0005541914.localdomain sudo[262921]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:44:51 np0005541914.localdomain sudo[262921]: pam_unix(sudo:session): session closed for user root
Dec 02 09:44:51 np0005541914.localdomain sudo[262939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:44:51 np0005541914.localdomain sudo[262939]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:44:52 np0005541914.localdomain sudo[262939]: pam_unix(sudo:session): session closed for user root
Dec 02 09:44:52 np0005541914.localdomain sudo[262988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:44:52 np0005541914.localdomain sudo[262988]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:44:52 np0005541914.localdomain sudo[262988]: pam_unix(sudo:session): session closed for user root
Dec 02 09:44:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48901 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD550A9620000000001030307) 
Dec 02 09:44:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:44:55 np0005541914.localdomain podman[263006]: 2025-12-02 09:44:55.077112737 +0000 UTC m=+0.080482669 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:44:55 np0005541914.localdomain podman[263006]: 2025-12-02 09:44:55.090947487 +0000 UTC m=+0.094317469 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:44:55 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:45:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48902 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD550C9220000000001030307) 
Dec 02 09:45:03 np0005541914.localdomain sshd[263026]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:45:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:45:03.153 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:45:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:45:03.153 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:45:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:45:03.154 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:45:03 np0005541914.localdomain sshd[263026]: Accepted publickey for zuul from 192.168.122.30 port 53304 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:45:03 np0005541914.localdomain systemd-logind[760]: New session 59 of user zuul.
Dec 02 09:45:03 np0005541914.localdomain systemd[1]: Started Session 59 of User zuul.
Dec 02 09:45:03 np0005541914.localdomain sshd[263026]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:45:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:45:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:45:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:03.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:45:03 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:03.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 02 09:45:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:45:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:45:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:45:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17701 "" "Go-http-client/1.1"
Dec 02 09:45:04 np0005541914.localdomain python3.9[263137]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:45:05 np0005541914.localdomain python3.9[263249]: ansible-ansible.builtin.service_facts Invoked
Dec 02 09:45:05 np0005541914.localdomain network[263266]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 09:45:05 np0005541914.localdomain network[263267]: 'network-scripts' will be removed from distribution in near future.
Dec 02 09:45:05 np0005541914.localdomain network[263268]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 09:45:06 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:06.665 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:45:07 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:45:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:45:08 np0005541914.localdomain systemd[1]: tmp-crun.ElXlTJ.mount: Deactivated successfully.
Dec 02 09:45:08 np0005541914.localdomain podman[263329]: 2025-12-02 09:45:08.278249183 +0000 UTC m=+0.105757078 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:45:08 np0005541914.localdomain podman[263329]: 2025-12-02 09:45:08.316991381 +0000 UTC m=+0.144499316 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:45:08 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:45:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:45:08 np0005541914.localdomain podman[263377]: 2025-12-02 09:45:08.834174979 +0000 UTC m=+0.091946717 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:45:08 np0005541914.localdomain podman[263377]: 2025-12-02 09:45:08.868938747 +0000 UTC m=+0.126710445 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:45:08 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:45:10 np0005541914.localdomain sudo[263542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhrxbxejedatgladjycdznorqquqfwtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668709.9078393-104-240788029011212/AnsiballZ_setup.py
Dec 02 09:45:10 np0005541914.localdomain sudo[263542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:10 np0005541914.localdomain python3.9[263544]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 02 09:45:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:10.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:45:10 np0005541914.localdomain sudo[263542]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:45:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:45:11 np0005541914.localdomain systemd[1]: tmp-crun.3eKSHs.mount: Deactivated successfully.
Dec 02 09:45:11 np0005541914.localdomain podman[263570]: 2025-12-02 09:45:11.082100263 +0000 UTC m=+0.085350728 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:45:11 np0005541914.localdomain sudo[263642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-padklkhuilntbbkrcugljsayhxmctqxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668709.9078393-104-240788029011212/AnsiballZ_dnf.py
Dec 02 09:45:11 np0005541914.localdomain podman[263570]: 2025-12-02 09:45:11.110785025 +0000 UTC m=+0.114035530 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 02 09:45:11 np0005541914.localdomain sudo[263642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:11 np0005541914.localdomain podman[263569]: 2025-12-02 09:45:11.067746506 +0000 UTC m=+0.074766415 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:45:11 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:45:11 np0005541914.localdomain podman[263569]: 2025-12-02 09:45:11.148739369 +0000 UTC m=+0.155759298 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 09:45:11 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:45:11 np0005541914.localdomain python3.9[263649]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:45:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:11.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:45:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:11.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:45:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:11.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:45:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:11.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 02 09:45:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:11.657 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 02 09:45:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:45:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:45:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:45:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:45:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:45:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:45:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:45:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:45:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:45:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:45:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:45:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:45:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:12.652 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:45:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:12.652 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:45:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:12.667 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:45:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:12.667 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:45:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:12.668 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:45:14 np0005541914.localdomain sudo[263642]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:15 np0005541914.localdomain sudo[263759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ozsnwpmtnbudokiokcbxnbtboramtkvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668715.0082355-140-32366127951106/AnsiballZ_stat.py
Dec 02 09:45:15 np0005541914.localdomain sudo[263759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:15 np0005541914.localdomain python3.9[263761]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:45:15 np0005541914.localdomain sudo[263759]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:15.642 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:45:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:15.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:45:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:15.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:45:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:15.655 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:45:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48425 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55102AE0000000001030307) 
Dec 02 09:45:16 np0005541914.localdomain sudo[263869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ajgwzpkbxulceofkvfghtqmkorduxtrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668715.9193227-170-63358612549443/AnsiballZ_command.py
Dec 02 09:45:16 np0005541914.localdomain sudo[263869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:16 np0005541914.localdomain python3.9[263871]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:45:16 np0005541914.localdomain sudo[263869]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:16.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:45:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:16.900 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:45:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:16.901 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:45:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:16.901 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:45:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:16.902 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:45:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:16.902 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:45:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48426 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55106A30000000001030307) 
Dec 02 09:45:17 np0005541914.localdomain sudo[264000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jpyttcrzhpkyvmzxuhdfligdwjisxehu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668716.8511055-200-48523348161248/AnsiballZ_stat.py
Dec 02 09:45:17 np0005541914.localdomain sudo[264000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:17 np0005541914.localdomain python3.9[264002]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:45:17 np0005541914.localdomain sudo[264000]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:17.368 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:45:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:17.523 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:45:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:17.525 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12493MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:45:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:17.525 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:45:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:17.526 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:45:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:17.624 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:45:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:17.624 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:45:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48903 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55109230000000001030307) 
Dec 02 09:45:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:17.674 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 02 09:45:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:17.728 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 02 09:45:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:17.729 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 02 09:45:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:17.746 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 02 09:45:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:17.766 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_ACCELERATORS,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE41,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_BMI,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,HW_CPU_X86_AESNI,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_SSE,HW_CPU_X86_ABM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_RESCUE_BFV,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_F16C,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_MMX,HW_CPU_X86_SVM,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_AUTO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 02 09:45:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:17.791 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:45:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:45:18 np0005541914.localdomain podman[264099]: 2025-12-02 09:45:18.095543392 +0000 UTC m=+0.085347406 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, distribution-scope=public, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 02 09:45:18 np0005541914.localdomain sudo[264145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fzqzgdqbdyhqxokrdxevooxunixbbukd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668717.6677659-233-27672297899061/AnsiballZ_lineinfile.py
Dec 02 09:45:18 np0005541914.localdomain podman[264099]: 2025-12-02 09:45:18.107705413 +0000 UTC m=+0.097509507 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 09:45:18 np0005541914.localdomain sudo[264145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:18 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:45:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:18.259 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
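The ceph df invocations logged above appear to be nova-compute sampling RBD pool capacity during its update_available_resource cycle. A minimal stand-alone sketch of the same probe, assuming the usual JSON layout with a top-level "stats" section (key names can differ across Ceph releases):

    import json, subprocess

    # Same command as logged above; --id/--conf select the "openstack" keyring and cluster conf.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    stats = json.loads(out).get("stats", {})
    # total_bytes / total_avail_bytes are the commonly present keys; adjust for the local release.
    print(stats.get("total_bytes"), stats.get("total_avail_bytes"))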
Dec 02 09:45:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:18.266 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:45:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:18.295 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:45:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:18.298 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:45:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:18.299 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:45:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:18.300 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
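The inventory reported to placement above (VCPU total 8 at allocation_ratio 16.0, MEMORY_MB total 15738 with 512 reserved, DISK_GB total 41) translates into schedulable capacity via the usual placement formula (total - reserved) * allocation_ratio. A quick check of the numbers, values copied from the journal entries above and illustrative only:

    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 41.0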
Dec 02 09:45:18 np0005541914.localdomain python3.9[264154]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:18 np0005541914.localdomain sudo[264145]: pam_unix(sudo:session): session closed for user root
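The lineinfile task above ensures /etc/iscsi/iscsid.conf carries the stronger CHAP digest list, inserting it after the commented-out default if no matching line exists. The resulting line is simply:

    node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5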
Dec 02 09:45:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48427 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5510EA20000000001030307) 
Dec 02 09:45:19 np0005541914.localdomain sudo[264264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-khjyaxbqertqknxiqaniijaaconzenxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668718.5969157-260-279358788289894/AnsiballZ_systemd_service.py
Dec 02 09:45:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:45:19 np0005541914.localdomain sudo[264264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:19 np0005541914.localdomain podman[264266]: 2025-12-02 09:45:19.254501378 +0000 UTC m=+0.083153930 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:45:19 np0005541914.localdomain podman[264266]: 2025-12-02 09:45:19.286663317 +0000 UTC m=+0.115315849 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:45:19 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:45:19 np0005541914.localdomain python3.9[264267]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:45:19 np0005541914.localdomain sudo[264264]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:20 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46472 DF PROTO=TCP SPT=55932 DPT=9102 SEQ=839054200 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55113220000000001030307) 
Dec 02 09:45:20 np0005541914.localdomain sudo[264399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-itzmelmaytqntnpzxmnilifwxkayfknl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668720.7083664-284-31024998007684/AnsiballZ_systemd_service.py
Dec 02 09:45:20 np0005541914.localdomain sudo[264399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:21 np0005541914.localdomain python3.9[264401]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:45:22 np0005541914.localdomain sudo[264399]: pam_unix(sudo:session): session closed for user root
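The two ansible.builtin.systemd_service tasks above (iscsid.socket and then iscsid, each with enabled=True and state=started) are roughly equivalent to running the following on the host:

    systemctl enable --now iscsid.socket
    systemctl enable --now iscsid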
Dec 02 09:45:23 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48428 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5511E630000000001030307) 
Dec 02 09:45:23 np0005541914.localdomain sudo[264511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pplcrdinillnjgkgumumuaoaqoaihgec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668722.8967876-317-71597049916706/AnsiballZ_service_facts.py
Dec 02 09:45:23 np0005541914.localdomain sudo[264511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:23 np0005541914.localdomain python3.9[264513]: ansible-ansible.builtin.service_facts Invoked
Dec 02 09:45:23 np0005541914.localdomain network[264530]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 09:45:23 np0005541914.localdomain network[264531]: 'network-scripts' will be removed from distribution in near future.
Dec 02 09:45:23 np0005541914.localdomain network[264532]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 09:45:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:45:25 np0005541914.localdomain podman[264555]: 2025-12-02 09:45:25.384923124 +0000 UTC m=+0.064237123 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 09:45:25 np0005541914.localdomain podman[264555]: 2025-12-02 09:45:25.399363744 +0000 UTC m=+0.078677783 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:45:25 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:45:25 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
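The warning above is emitted while parsing the packaged insights-client unit, whose line 23 still uses the cgroup-v1 era MemoryLimit= directive; the modern replacement is MemoryMax=. A sketch of the corrected setting (the 1G value is purely illustrative, the packaged unit's actual limit should be carried over), noting that the warning will keep appearing until the packaged unit file itself is updated:

    [Service]
    MemoryMax=1G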
Dec 02 09:45:26 np0005541914.localdomain sshd[264608]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:45:26 np0005541914.localdomain sshd[264608]: Invalid user hu from 34.78.29.97 port 45828
Dec 02 09:45:26 np0005541914.localdomain sshd[264608]: Received disconnect from 34.78.29.97 port 45828:11: Bye Bye [preauth]
Dec 02 09:45:26 np0005541914.localdomain sshd[264608]: Disconnected from invalid user hu 34.78.29.97 port 45828 [preauth]
Dec 02 09:45:28 np0005541914.localdomain sudo[264511]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:29 np0005541914.localdomain sudo[264786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qfaogjburuxoqthvhilnbpowajltyxmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668728.8090053-347-266731660445269/AnsiballZ_file.py
Dec 02 09:45:29 np0005541914.localdomain sudo[264786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:29 np0005541914.localdomain python3.9[264788]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 09:45:29 np0005541914.localdomain sudo[264786]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:29 np0005541914.localdomain sudo[264896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iiawdcwutxlabovtoqnwuixktompmssc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668729.6036425-371-52336096971376/AnsiballZ_modprobe.py
Dec 02 09:45:29 np0005541914.localdomain sudo[264896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:30 np0005541914.localdomain python3.9[264898]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 02 09:45:30 np0005541914.localdomain sudo[264896]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:30 np0005541914.localdomain sudo[265006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yykyqizbdwgksufnyvtnhbiffnehfgnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668730.4278245-395-178222758111738/AnsiballZ_stat.py
Dec 02 09:45:30 np0005541914.localdomain sudo[265006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:30 np0005541914.localdomain python3.9[265008]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:45:30 np0005541914.localdomain sudo[265006]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:31 np0005541914.localdomain sudo[265063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjnllhohbkqhrtchyxsxwkvhfkfvkych ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668730.4278245-395-178222758111738/AnsiballZ_file.py
Dec 02 09:45:31 np0005541914.localdomain sudo[265063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:31 np0005541914.localdomain python3.9[265065]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:31 np0005541914.localdomain sudo[265063]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48429 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5513F220000000001030307) 
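The recurring DROPPING entries on br-ex are retransmissions of the same TCP SYN from 192.168.122.10 to port 9102 (note the unchanged source port 47784 and sequence number 4163748844), logged by a firewall rule using the DROPPING log prefix. To locate the rule emitting them, one of the following should work depending on which backend manages the host firewall (illustrative commands):

    nft list ruleset | grep -B3 -i 'DROPPING'
    iptables-save | grep -i 'DROPPING'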
Dec 02 09:45:31 np0005541914.localdomain sudo[265173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zasddlijcntyehgdgqmndirvcojlisgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668731.616957-434-83920212432165/AnsiballZ_lineinfile.py
Dec 02 09:45:31 np0005541914.localdomain sudo[265173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:32 np0005541914.localdomain python3.9[265175]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:32 np0005541914.localdomain sudo[265173]: pam_unix(sudo:session): session closed for user root
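The modprobe and lineinfile tasks in this block load dm-multipath immediately and arrange for it to load on boot via /etc/modules-load.d/dm-multipath.conf (deployed from module-load.conf.j2, presumably containing just the module name) and /etc/modules. Done by hand, the equivalent would be roughly:

    modprobe dm-multipath
    echo dm-multipath > /etc/modules-load.d/dm-multipath.conf
    echo dm-multipath >> /etc/modules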
Dec 02 09:45:32 np0005541914.localdomain sudo[265283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewucgocypwnjeamdjjfydgxyqqailgsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668732.3412952-461-99818115089336/AnsiballZ_file.py
Dec 02 09:45:32 np0005541914.localdomain sudo[265283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:32 np0005541914.localdomain python3.9[265285]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:45:32 np0005541914.localdomain sudo[265283]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:33 np0005541914.localdomain sudo[265393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eefmyakvxhixdnslbslmbjbugtmddeel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668733.0993383-488-60832019833841/AnsiballZ_stat.py
Dec 02 09:45:33 np0005541914.localdomain sudo[265393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:33 np0005541914.localdomain python3.9[265395]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:45:33 np0005541914.localdomain sudo[265393]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:45:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:45:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:45:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:45:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:45:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17701 "" "Go-http-client/1.1"
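The two libpod REST calls above arrive over the podman API socket that podman_exporter is configured to use (CONTAINER_HOST=unix:///run/podman/podman.sock). The same container listing can be reproduced manually, for example (the hostname part of the URL is arbitrary when talking to a unix socket):

    curl --unix-socket /run/podman/podman.sock \
        'http://d/v4.9.3/libpod/containers/json?all=true'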
Dec 02 09:45:34 np0005541914.localdomain sudo[265505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zymjujwqdtnqajbgqwqtizosstleewuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668733.8250244-515-153633544242064/AnsiballZ_stat.py
Dec 02 09:45:34 np0005541914.localdomain sudo[265505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:34 np0005541914.localdomain python3.9[265507]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:45:34 np0005541914.localdomain sudo[265505]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:34 np0005541914.localdomain sudo[265617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ibgacwyldoqefvkpiornjekvcidwqnba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668734.5587006-542-193291097417190/AnsiballZ_command.py
Dec 02 09:45:34 np0005541914.localdomain sudo[265617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:34 np0005541914.localdomain python3.9[265619]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:45:34 np0005541914.localdomain sudo[265617]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:35 np0005541914.localdomain sudo[265728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymsbmjeeckdjnispwjjsekpheotzjxno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668735.3168359-572-88629952924417/AnsiballZ_replace.py
Dec 02 09:45:35 np0005541914.localdomain sudo[265728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:35 np0005541914.localdomain python3.9[265730]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:35 np0005541914.localdomain sudo[265728]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:35 np0005541914.localdomain systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.7 (252 of 333 items), suggesting rotation.
Dec 02 09:45:35 np0005541914.localdomain systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 02 09:45:35 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:45:35 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:45:35 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:45:36 np0005541914.localdomain sudo[265839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vfoezxkpcbodlincrygqpvrfcfroxmtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668736.1614473-599-68924669569554/AnsiballZ_lineinfile.py
Dec 02 09:45:36 np0005541914.localdomain sudo[265839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:36 np0005541914.localdomain python3.9[265841]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:36 np0005541914.localdomain sudo[265839]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:36 np0005541914.localdomain sudo[265949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pwfpicyxhwvhsvkepfdziifthuxkpexd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668736.7251074-599-147068954334441/AnsiballZ_lineinfile.py
Dec 02 09:45:36 np0005541914.localdomain sudo[265949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:37 np0005541914.localdomain python3.9[265951]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:37 np0005541914.localdomain sudo[265949]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:37 np0005541914.localdomain sudo[266059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-apqhcvlchqyofskypaseunczlrbiayfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668737.2797968-599-275123020408584/AnsiballZ_lineinfile.py
Dec 02 09:45:37 np0005541914.localdomain sudo[266059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:37 np0005541914.localdomain python3.9[266061]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:37 np0005541914.localdomain sudo[266059]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:38 np0005541914.localdomain sudo[266169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vmersrfgzzrgsersmpmjoxxfdgsqpooy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668737.8297925-599-218315711450555/AnsiballZ_lineinfile.py
Dec 02 09:45:38 np0005541914.localdomain sudo[266169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:38 np0005541914.localdomain python3.9[266171]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:38 np0005541914.localdomain sudo[266169]: pam_unix(sudo:session): session closed for user root
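Taken together, the replace and lineinfile tasks above drop any catch-all devnode ".*" entry from the blacklist section and pin four settings under defaults in /etc/multipath.conf. A sketch of the section they converge on, assembled from the line= parameters above (the real file keeps whatever other settings it already had):

    defaults {
            find_multipaths yes
            recheck_wwid yes
            skip_kpartx yes
            user_friendly_names no
    }
    blacklist {
    }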
Dec 02 09:45:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:45:38 np0005541914.localdomain systemd[1]: tmp-crun.UKZMpR.mount: Deactivated successfully.
Dec 02 09:45:38 np0005541914.localdomain podman[266225]: 2025-12-02 09:45:38.593663695 +0000 UTC m=+0.085994897 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:45:38 np0005541914.localdomain podman[266225]: 2025-12-02 09:45:38.603252036 +0000 UTC m=+0.095583238 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm)
Dec 02 09:45:38 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:45:38 np0005541914.localdomain sudo[266299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-owylqxsywistmfxapijsjluuwfnpimlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668738.448268-686-57684395184457/AnsiballZ_stat.py
Dec 02 09:45:38 np0005541914.localdomain sudo[266299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:38 np0005541914.localdomain python3.9[266301]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:45:38 np0005541914.localdomain sudo[266299]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:45:39 np0005541914.localdomain systemd[1]: tmp-crun.vkRWyk.mount: Deactivated successfully.
Dec 02 09:45:39 np0005541914.localdomain podman[266321]: 2025-12-02 09:45:39.073242769 +0000 UTC m=+0.080192690 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:45:39 np0005541914.localdomain podman[266321]: 2025-12-02 09:45:39.087119381 +0000 UTC m=+0.094068922 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:45:39 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:45:39 np0005541914.localdomain sudo[266433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qjbrkoktdyshvszlnmqornesqtzjkgsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668739.3402967-716-89583698391964/AnsiballZ_file.py
Dec 02 09:45:39 np0005541914.localdomain sudo[266433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:39 np0005541914.localdomain python3.9[266435]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:45:39 np0005541914.localdomain sudo[266433]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:40 np0005541914.localdomain sudo[266543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hhgubbbxldsedmhtfhcdqmdsdlxgptek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668740.015563-740-32012010158175/AnsiballZ_stat.py
Dec 02 09:45:40 np0005541914.localdomain sudo[266543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:40 np0005541914.localdomain python3.9[266545]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:45:40 np0005541914.localdomain sudo[266543]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:40 np0005541914.localdomain sudo[266600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-itqfojuceajibyrpovffeohwdfhedgkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668740.015563-740-32012010158175/AnsiballZ_file.py
Dec 02 09:45:40 np0005541914.localdomain sudo[266600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:40 np0005541914.localdomain python3.9[266602]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:45:40 np0005541914.localdomain sudo[266600]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:41 np0005541914.localdomain sudo[266710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jveouoqjgrhwbllonaapnrvdzdgdnmfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668740.995706-740-138739278286102/AnsiballZ_stat.py
Dec 02 09:45:41 np0005541914.localdomain sudo[266710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:45:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:45:41 np0005541914.localdomain systemd[1]: tmp-crun.1frTEM.mount: Deactivated successfully.
Dec 02 09:45:41 np0005541914.localdomain podman[266714]: 2025-12-02 09:45:41.308976312 +0000 UTC m=+0.058379377 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 02 09:45:41 np0005541914.localdomain podman[266713]: 2025-12-02 09:45:41.39277016 +0000 UTC m=+0.141497115 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 09:45:41 np0005541914.localdomain podman[266714]: 2025-12-02 09:45:41.419919815 +0000 UTC m=+0.169322880 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:45:41 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:45:41 np0005541914.localdomain python3.9[266712]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:45:41 np0005541914.localdomain sudo[266710]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:41 np0005541914.localdomain podman[266713]: 2025-12-02 09:45:41.475125544 +0000 UTC m=+0.223852499 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 09:45:41 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:45:41 np0005541914.localdomain sudo[266810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xdagtdcynqlzuucwwvbrtentizlwpihf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668740.995706-740-138739278286102/AnsiballZ_file.py
Dec 02 09:45:41 np0005541914.localdomain sudo[266810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:41 np0005541914.localdomain python3.9[266812]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:45:41 np0005541914.localdomain sudo[266810]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:45:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:45:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:45:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:45:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:45:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:45:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:45:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:45:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:45:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:45:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:45:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:45:42 np0005541914.localdomain sudo[266920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jxbutqbpsnuzcanxcpxhphfeufuxhymr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668742.12102-809-9929091082224/AnsiballZ_file.py
Dec 02 09:45:42 np0005541914.localdomain sudo[266920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:42 np0005541914.localdomain python3.9[266922]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:42 np0005541914.localdomain sudo[266920]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:43 np0005541914.localdomain sudo[267030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dqhxtewvezxsprtokfxusjsjnklcepsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668742.7880185-833-47173772509160/AnsiballZ_stat.py
Dec 02 09:45:43 np0005541914.localdomain sudo[267030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:43 np0005541914.localdomain python3.9[267032]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:45:43 np0005541914.localdomain sudo[267030]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:43 np0005541914.localdomain sudo[267087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dfupritvbutkknuuecpptwakjppprrbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668742.7880185-833-47173772509160/AnsiballZ_file.py
Dec 02 09:45:43 np0005541914.localdomain sudo[267087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:43 np0005541914.localdomain python3.9[267089]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:43 np0005541914.localdomain sudo[267087]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:44 np0005541914.localdomain sudo[267197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bnndqdzfckjcgxozjrgzdyxuaiusohyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668743.8337083-869-44628566035452/AnsiballZ_stat.py
Dec 02 09:45:44 np0005541914.localdomain sudo[267197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:44 np0005541914.localdomain python3.9[267199]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:45:44 np0005541914.localdomain sudo[267197]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:44 np0005541914.localdomain sudo[267254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ypenibpbyprelfnuqamejukmnuixgaym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668743.8337083-869-44628566035452/AnsiballZ_file.py
Dec 02 09:45:44 np0005541914.localdomain sudo[267254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:44 np0005541914.localdomain python3.9[267256]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:44 np0005541914.localdomain sudo[267254]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:45 np0005541914.localdomain sudo[267364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qkjnucxbnuounjwirppnznwjvowgsfib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668744.9448364-905-232571577581734/AnsiballZ_systemd.py
Dec 02 09:45:45 np0005541914.localdomain sudo[267364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:45 np0005541914.localdomain python3.9[267366]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:45:45 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:45:45 np0005541914.localdomain systemd-rc-local-generator[267388]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:45:45 np0005541914.localdomain systemd-sysv-generator[267392]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:45:45 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:45 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:45 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:45 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:45 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:45:45 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:45 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:45 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:45 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26088 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55177DE0000000001030307) 
Dec 02 09:45:46 np0005541914.localdomain sudo[267364]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:46 np0005541914.localdomain sudo[267512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vnqthpczmuyihelvieyancbzohpjreaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668746.4115677-929-116227680210442/AnsiballZ_stat.py
Dec 02 09:45:46 np0005541914.localdomain sudo[267512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:46 np0005541914.localdomain python3.9[267514]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:45:46 np0005541914.localdomain sudo[267512]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:47 np0005541914.localdomain sudo[267569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lpiilxswckpeshqqlszjsmnyegitjsxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668746.4115677-929-116227680210442/AnsiballZ_file.py
Dec 02 09:45:47 np0005541914.localdomain sudo[267569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26089 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5517BE20000000001030307) 
Dec 02 09:45:47 np0005541914.localdomain python3.9[267571]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:47 np0005541914.localdomain sudo[267569]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:47 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:45:47.599 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:45:47 np0005541914.localdomain sudo[267679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rfpgmgthtctzzvzusdnytivokkshgzoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668747.5635655-966-259521121450134/AnsiballZ_stat.py
Dec 02 09:45:47 np0005541914.localdomain sudo[267679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48430 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5517F220000000001030307) 
Dec 02 09:45:47 np0005541914.localdomain python3.9[267681]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:45:48 np0005541914.localdomain sudo[267679]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:48 np0005541914.localdomain sudo[267736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zpjilodjcznulaaqevbqisevmgophqmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668747.5635655-966-259521121450134/AnsiballZ_file.py
Dec 02 09:45:48 np0005541914.localdomain sudo[267736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:45:48 np0005541914.localdomain podman[267739]: 2025-12-02 09:45:48.304382003 +0000 UTC m=+0.081644704 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 02 09:45:48 np0005541914.localdomain podman[267739]: 2025-12-02 09:45:48.321676879 +0000 UTC m=+0.098939570 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 02 09:45:48 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:45:48 np0005541914.localdomain python3.9[267738]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:48 np0005541914.localdomain sudo[267736]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:48 np0005541914.localdomain sudo[267865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iwblvjpahinpabejlikpgyioqghtsigf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668748.6181598-1001-115393753192127/AnsiballZ_systemd.py
Dec 02 09:45:48 np0005541914.localdomain sudo[267865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26090 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55183E20000000001030307) 
Dec 02 09:45:49 np0005541914.localdomain python3.9[267867]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:45:49 np0005541914.localdomain systemd-rc-local-generator[267892]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:45:49 np0005541914.localdomain systemd-sysv-generator[267897]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: Starting Create netns directory...
Dec 02 09:45:49 np0005541914.localdomain podman[267905]: 2025-12-02 09:45:49.57057108 +0000 UTC m=+0.075604260 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:45:49 np0005541914.localdomain podman[267905]: 2025-12-02 09:45:49.584981318 +0000 UTC m=+0.090014548 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: Finished Create netns directory.
Dec 02 09:45:49 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:45:49 np0005541914.localdomain sudo[267865]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48904 DF PROTO=TCP SPT=42590 DPT=9102 SEQ=3670853150 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55187220000000001030307) 
Dec 02 09:45:51 np0005541914.localdomain sudo[268040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-clgdlyarxffyfohfhrujginwicovrkfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668751.0505822-1031-97728964380508/AnsiballZ_file.py
Dec 02 09:45:51 np0005541914.localdomain sudo[268040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:51 np0005541914.localdomain python3.9[268042]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:45:51 np0005541914.localdomain sudo[268040]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:51 np0005541914.localdomain sudo[268150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hgtfprvpmkucdnbbczhukjaenycgzdfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668751.7112842-1055-160005903197128/AnsiballZ_stat.py
Dec 02 09:45:51 np0005541914.localdomain sudo[268150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:52 np0005541914.localdomain python3.9[268152]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:45:52 np0005541914.localdomain sudo[268150]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:52 np0005541914.localdomain sudo[268207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pvyzvpvjsbgzxuboedrfkxyaholmdqok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668751.7112842-1055-160005903197128/AnsiballZ_file.py
Dec 02 09:45:52 np0005541914.localdomain sudo[268207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:52 np0005541914.localdomain python3.9[268209]: ansible-ansible.legacy.file Invoked with group=zuul mode=0700 owner=zuul setype=container_file_t dest=/var/lib/openstack/healthchecks/multipathd/ _original_basename=healthcheck recurse=False state=file path=/var/lib/openstack/healthchecks/multipathd/ force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:45:52 np0005541914.localdomain sudo[268207]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:52 np0005541914.localdomain sudo[268227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:45:52 np0005541914.localdomain sudo[268227]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:45:52 np0005541914.localdomain sudo[268227]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:52 np0005541914.localdomain sudo[268245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:45:52 np0005541914.localdomain sudo[268245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:45:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26091 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55193A30000000001030307) 
Dec 02 09:45:53 np0005541914.localdomain sudo[268366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odnimxdeutkzfxwyknlumvvupaxrqina ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668753.1069913-1097-254863550416687/AnsiballZ_file.py
Dec 02 09:45:53 np0005541914.localdomain sudo[268366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:53 np0005541914.localdomain python3.9[268370]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:45:53 np0005541914.localdomain sudo[268366]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:53 np0005541914.localdomain sudo[268245]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:54 np0005541914.localdomain sudo[268495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-imfhuhmxxcwgexcrkdobpoidewpscejk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668753.8243427-1120-124387004836783/AnsiballZ_stat.py
Dec 02 09:45:54 np0005541914.localdomain sudo[268495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:54 np0005541914.localdomain python3.9[268497]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:45:54 np0005541914.localdomain sudo[268495]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:54 np0005541914.localdomain sudo[268552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hxksixecforqjrswgrepbmkmkfgdqzkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668753.8243427-1120-124387004836783/AnsiballZ_file.py
Dec 02 09:45:54 np0005541914.localdomain sudo[268552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:54 np0005541914.localdomain python3.9[268554]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/multipathd.json _original_basename=.a7nnnnsg recurse=False state=file path=/var/lib/kolla/config_files/multipathd.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:54 np0005541914.localdomain sudo[268552]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:55 np0005541914.localdomain sudo[268662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zfbbsopiobhgpqckufjgfemrbwzlxrjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668754.910275-1157-27542887203114/AnsiballZ_file.py
Dec 02 09:45:55 np0005541914.localdomain sudo[268662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:55 np0005541914.localdomain python3.9[268664]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:45:55 np0005541914.localdomain sudo[268662]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:55 np0005541914.localdomain sudo[268772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmwzrzsohgavucexzrjtvueixqdprzys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668755.5959697-1180-4200945144133/AnsiballZ_stat.py
Dec 02 09:45:55 np0005541914.localdomain sudo[268772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:45:55 np0005541914.localdomain podman[268775]: 2025-12-02 09:45:55.97606022 +0000 UTC m=+0.091747990 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 02 09:45:55 np0005541914.localdomain podman[268775]: 2025-12-02 09:45:55.991883382 +0000 UTC m=+0.107571162 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 02 09:45:56 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:45:56 np0005541914.localdomain sudo[268772]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:56 np0005541914.localdomain sudo[268846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ynbbgttgykshjhwsxnsdvrfyxzgmbbnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668755.5959697-1180-4200945144133/AnsiballZ_file.py
Dec 02 09:45:56 np0005541914.localdomain sudo[268846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:56 np0005541914.localdomain sudo[268846]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:57 np0005541914.localdomain sudo[268956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bsqhyvyvjrjgfjftunuiucuttpvjyacs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668756.9594166-1223-202713473740636/AnsiballZ_container_config_data.py
Dec 02 09:45:57 np0005541914.localdomain sudo[268956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:57 np0005541914.localdomain python3.9[268958]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 02 09:45:57 np0005541914.localdomain sudo[268956]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:57 np0005541914.localdomain sudo[268971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:45:57 np0005541914.localdomain sudo[268971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:45:57 np0005541914.localdomain sudo[268971]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:58 np0005541914.localdomain sudo[269084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqeydwsgblmloeximnpesdjpfhcszzvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668757.8934667-1250-39183248010018/AnsiballZ_container_config_hash.py
Dec 02 09:45:58 np0005541914.localdomain sudo[269084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:58 np0005541914.localdomain python3.9[269086]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:45:58 np0005541914.localdomain sudo[269084]: pam_unix(sudo:session): session closed for user root
Dec 02 09:45:59 np0005541914.localdomain sudo[269194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ngmsbyldjmnwzqfsdswdatckthluemis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668758.8597898-1277-275235019171782/AnsiballZ_podman_container_info.py
Dec 02 09:45:59 np0005541914.localdomain sudo[269194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:45:59 np0005541914.localdomain python3.9[269196]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 02 09:45:59 np0005541914.localdomain sudo[269194]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26092 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD551B3220000000001030307) 
Dec 02 09:46:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:46:03.154 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:46:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:46:03.155 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:46:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:46:03.155 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:46:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:46:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:46:03 np0005541914.localdomain sudo[269331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjirrnqhcsyoeyvhfxilwkbhyvpsfcco ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764668763.2911587-1316-78511236182113/AnsiballZ_edpm_container_manage.py
Dec 02 09:46:03 np0005541914.localdomain sudo[269331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:46:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:46:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:46:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17698 "" "Go-http-client/1.1"
Dec 02 09:46:03 np0005541914.localdomain python3[269333]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:46:04 np0005541914.localdomain python3[269333]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "9af6aa52ee187025bc25565b66d3eefb486acac26f9281e33f4cce76a40d21f7",
                                                                    "Digest": "sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-multipathd@sha256:5b59d54dc4a23373a5172f15f5497b287422c32f5702efd1e171c3f2048c9842"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:11:02.031267563Z",
                                                                    "Config": {
                                                                         "User": "root",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 249482216,
                                                                    "VirtualSize": 249482216,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/a6426b16bb5884060eaf559f46c5a81bf85811eff8d5d75aaee95a48f0b492cc/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:8c448567789503f6c5be645a12473dfc27734872532d528b6ee764c214f9f2f3"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "root",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:24.212273596Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:11:01.523582443Z",
                                                                              "created_by": "/bin/sh -c dnf -y install device-mapper-multipath iscsi-initiator-utils && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:11:03.162365736Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 02 09:46:04 np0005541914.localdomain sudo[269331]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:04 np0005541914.localdomain sudo[269504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gvgjpstrxivoevrbwwicsswyddvaiqcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668764.530949-1340-141806894261171/AnsiballZ_stat.py
Dec 02 09:46:04 np0005541914.localdomain sudo[269504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:04 np0005541914.localdomain python3.9[269506]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:46:04 np0005541914.localdomain sudo[269504]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:05 np0005541914.localdomain sudo[269616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mpwznykujfnumjmvetmhxpgjnkohmugx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668765.224556-1366-249532053602924/AnsiballZ_file.py
Dec 02 09:46:05 np0005541914.localdomain sudo[269616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:05 np0005541914.localdomain python3.9[269618]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:05 np0005541914.localdomain sudo[269616]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:05 np0005541914.localdomain sudo[269671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mzushmernsmwvntfpokbfmhhhcfogcek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668765.224556-1366-249532053602924/AnsiballZ_stat.py
Dec 02 09:46:05 np0005541914.localdomain sudo[269671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:06 np0005541914.localdomain python3.9[269673]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:46:06 np0005541914.localdomain sudo[269671]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:06 np0005541914.localdomain sudo[269780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gbdjefxflypxtfruoloasujybgphvopp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668766.1289968-1366-235958845725721/AnsiballZ_copy.py
Dec 02 09:46:06 np0005541914.localdomain sudo[269780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:06 np0005541914.localdomain python3.9[269782]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668766.1289968-1366-235958845725721/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:06 np0005541914.localdomain sudo[269780]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:07 np0005541914.localdomain sudo[269835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hukgsvrgjjkbigktyrnzlunncekrlwyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668766.1289968-1366-235958845725721/AnsiballZ_systemd.py
Dec 02 09:46:07 np0005541914.localdomain sudo[269835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:07 np0005541914.localdomain python3.9[269837]: ansible-systemd Invoked with state=started name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:46:07 np0005541914.localdomain sudo[269835]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:07 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:07.805 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:46:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:46:09 np0005541914.localdomain podman[269911]: 2025-12-02 09:46:09.079547389 +0000 UTC m=+0.068896426 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 09:46:09 np0005541914.localdomain podman[269911]: 2025-12-02 09:46:09.096030701 +0000 UTC m=+0.085379718 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 02 09:46:09 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:46:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:46:09 np0005541914.localdomain systemd[1]: tmp-crun.wpZvJN.mount: Deactivated successfully.
Dec 02 09:46:09 np0005541914.localdomain podman[269966]: 2025-12-02 09:46:09.225297942 +0000 UTC m=+0.087708348 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 09:46:09 np0005541914.localdomain python3.9[269965]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:46:09 np0005541914.localdomain podman[269966]: 2025-12-02 09:46:09.263956638 +0000 UTC m=+0.126367044 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:46:09 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:46:09 np0005541914.localdomain sudo[270097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pnrvzludvjpbhcwgpzxagijmmbxkouhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668769.6123228-1469-106354134597940/AnsiballZ_file.py
Dec 02 09:46:09 np0005541914.localdomain sudo[270097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:10 np0005541914.localdomain python3.9[270099]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:10 np0005541914.localdomain sudo[270097]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:10 np0005541914.localdomain sudo[270207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pjfmwxoewhxndfmozogvjwccjmimngpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668770.5556512-1505-66149974764353/AnsiballZ_file.py
Dec 02 09:46:10 np0005541914.localdomain sudo[270207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:10 np0005541914.localdomain python3.9[270209]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 02 09:46:11 np0005541914.localdomain sudo[270207]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:11 np0005541914.localdomain sudo[270317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xxcdmyddhrhhgggpbvkmmojhcahvpgfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668771.207225-1529-250176217757478/AnsiballZ_modprobe.py
Dec 02 09:46:11 np0005541914.localdomain sudo[270317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:46:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:46:11 np0005541914.localdomain podman[270320]: 2025-12-02 09:46:11.587006006 +0000 UTC m=+0.077218910 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:46:11 np0005541914.localdomain podman[270321]: 2025-12-02 09:46:11.637090109 +0000 UTC m=+0.121191177 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 02 09:46:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:11.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:46:11 np0005541914.localdomain podman[270321]: 2025-12-02 09:46:11.645959338 +0000 UTC m=+0.130060386 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 02 09:46:11 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:46:11 np0005541914.localdomain podman[270320]: 2025-12-02 09:46:11.702505748 +0000 UTC m=+0.192718572 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 09:46:11 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:46:11 np0005541914.localdomain python3.9[270319]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 02 09:46:11 np0005541914.localdomain sudo[270317]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:46:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:46:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:46:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:46:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:46:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:46:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:46:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:46:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:46:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:46:12 np0005541914.localdomain sudo[270467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bwayapoqqzutqokmfkgmvcysitkvkiod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668771.9076533-1553-35952884057693/AnsiballZ_stat.py
Dec 02 09:46:12 np0005541914.localdomain sudo[270467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:12 np0005541914.localdomain python3.9[270469]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:46:12 np0005541914.localdomain sudo[270467]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:12 np0005541914.localdomain sudo[270524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pcngcqsddsdlhwallltwmetcujjpiugb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668771.9076533-1553-35952884057693/AnsiballZ_file.py
Dec 02 09:46:12 np0005541914.localdomain sudo[270524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:12.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:46:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:12.639 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:46:12 np0005541914.localdomain python3.9[270526]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:12 np0005541914.localdomain sudo[270524]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:13 np0005541914.localdomain sudo[270634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwpatbdfpfmkwudtxoheowxotaofcehc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668773.0748475-1592-108222400783203/AnsiballZ_lineinfile.py
Dec 02 09:46:13 np0005541914.localdomain sudo[270634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:13 np0005541914.localdomain python3.9[270636]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:13 np0005541914.localdomain sudo[270634]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:13.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:46:13 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:13.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:46:14 np0005541914.localdomain sudo[270744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-huefcadytjwxhokjymfglcbgzfeslftg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668773.9240606-1619-240523117979172/AnsiballZ_dnf.py
Dec 02 09:46:14 np0005541914.localdomain sudo[270744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:14 np0005541914.localdomain python3.9[270746]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 02 09:46:14 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:14.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:46:14 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:14.641 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.434 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:46:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:46:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10330 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD551ED0E0000000001030307) 
Dec 02 09:46:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:16.642 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:46:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:16.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:46:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:16.642 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:46:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:16.657 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:46:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10331 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD551F1220000000001030307) 
Dec 02 09:46:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26093 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD551F3220000000001030307) 
Dec 02 09:46:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:17.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:46:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:17.659 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:46:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:17.659 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:46:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:17.660 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:46:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:17.660 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:46:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:17.660 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:46:17 np0005541914.localdomain sudo[270744]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:18.117 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:46:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:18.294 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:46:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:18.296 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12478MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:46:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:18.296 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:46:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:18.297 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:46:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:18.476 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:46:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:18.476 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:46:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:18.497 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:46:18 np0005541914.localdomain python3.9[270878]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 02 09:46:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:18.945 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:46:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:18.953 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:46:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:18.968 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:46:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:18.970 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:46:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:46:18.970 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:46:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:46:19 np0005541914.localdomain podman[270906]: 2025-12-02 09:46:19.048560594 +0000 UTC m=+0.050864838 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, name=ubi9-minimal, vcs-type=git, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7)
Dec 02 09:46:19 np0005541914.localdomain podman[270906]: 2025-12-02 09:46:19.057768374 +0000 UTC m=+0.060072628 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, version=9.6, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Dec 02 09:46:19 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:46:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10332 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD551F9220000000001030307) 
Dec 02 09:46:19 np0005541914.localdomain sudo[271031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oixhqaqjajyqrdvitmrripybxqutrfsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668779.2246087-1671-260680268688737/AnsiballZ_file.py
Dec 02 09:46:19 np0005541914.localdomain sudo[271031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:19 np0005541914.localdomain python3.9[271033]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:19 np0005541914.localdomain sudo[271031]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:46:20 np0005541914.localdomain podman[271051]: 2025-12-02 09:46:20.079742384 +0000 UTC m=+0.083834930 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:46:20 np0005541914.localdomain podman[271051]: 2025-12-02 09:46:20.088511371 +0000 UTC m=+0.092603917 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:46:20 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:46:20 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48431 DF PROTO=TCP SPT=47784 DPT=9102 SEQ=4163748844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD551FD230000000001030307) 
Dec 02 09:46:20 np0005541914.localdomain sudo[271164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-azwogjbahizhdbibpwvcxtnjtexzzare ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668780.2158542-1704-217533892801385/AnsiballZ_systemd_service.py
Dec 02 09:46:20 np0005541914.localdomain sudo[271164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:20 np0005541914.localdomain python3.9[271166]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:46:20 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:46:20 np0005541914.localdomain systemd-rc-local-generator[271192]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:46:20 np0005541914.localdomain systemd-sysv-generator[271195]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:46:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:46:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:20 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:21 np0005541914.localdomain sudo[271164]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:21 np0005541914.localdomain python3.9[271310]: ansible-ansible.builtin.service_facts Invoked
Dec 02 09:46:21 np0005541914.localdomain network[271327]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 02 09:46:21 np0005541914.localdomain network[271328]: 'network-scripts' will be removed from distribution in near future.
Dec 02 09:46:21 np0005541914.localdomain network[271329]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 02 09:46:23 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10333 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55208E30000000001030307) 
Dec 02 09:46:24 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:46:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:46:27 np0005541914.localdomain systemd[1]: tmp-crun.DwhRIl.mount: Deactivated successfully.
Dec 02 09:46:27 np0005541914.localdomain podman[271471]: 2025-12-02 09:46:27.090244974 +0000 UTC m=+0.090509633 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.license=GPLv2)
Dec 02 09:46:27 np0005541914.localdomain podman[271471]: 2025-12-02 09:46:27.102114105 +0000 UTC m=+0.102378734 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd)
Dec 02 09:46:27 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:46:27 np0005541914.localdomain sudo[271581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hjwicwzfbuqydqfcubjlgwootsyeehcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668787.1735623-1761-81717120659998/AnsiballZ_systemd_service.py
Dec 02 09:46:27 np0005541914.localdomain sudo[271581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:27 np0005541914.localdomain python3.9[271583]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:46:28 np0005541914.localdomain sudo[271581]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:29 np0005541914.localdomain sudo[271692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lghbwywwstvxpspjpouxqrvmhpbydqma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668788.9021792-1761-205562791846610/AnsiballZ_systemd_service.py
Dec 02 09:46:29 np0005541914.localdomain sudo[271692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:29 np0005541914.localdomain python3.9[271694]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:46:29 np0005541914.localdomain sudo[271692]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:30 np0005541914.localdomain sudo[271803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gnfclnocsmfjylczvoyoimxomyeheheu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668789.7476685-1761-16321065449353/AnsiballZ_systemd_service.py
Dec 02 09:46:30 np0005541914.localdomain sudo[271803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:30 np0005541914.localdomain python3.9[271805]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:46:30 np0005541914.localdomain sudo[271803]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:30 np0005541914.localdomain sudo[271914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmslywvogenmyyfkddbqkoplascpkwia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668790.4785447-1761-45622237511841/AnsiballZ_systemd_service.py
Dec 02 09:46:30 np0005541914.localdomain sudo[271914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:31 np0005541914.localdomain python3.9[271916]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:46:31 np0005541914.localdomain sudo[271914]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10334 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55229220000000001030307) 
Dec 02 09:46:31 np0005541914.localdomain sudo[272025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vdniytktajkscwquaincohieofmsxkhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668791.2037902-1761-184224745819379/AnsiballZ_systemd_service.py
Dec 02 09:46:31 np0005541914.localdomain sudo[272025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:31 np0005541914.localdomain python3.9[272027]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:46:31 np0005541914.localdomain sudo[272025]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:32 np0005541914.localdomain sudo[272136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-glfsgsnqhsdtuqatfkmhuaviworlblxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668791.9334443-1761-85189422092883/AnsiballZ_systemd_service.py
Dec 02 09:46:32 np0005541914.localdomain sudo[272136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:32 np0005541914.localdomain python3.9[272138]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:46:32 np0005541914.localdomain sudo[272136]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:32 np0005541914.localdomain sudo[272247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-giuffcxnzwmkxmdorpdrvuyqkyybrbyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668792.6856549-1761-154929396560169/AnsiballZ_systemd_service.py
Dec 02 09:46:32 np0005541914.localdomain sudo[272247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:33 np0005541914.localdomain python3.9[272249]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:46:33 np0005541914.localdomain sudo[272247]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:33 np0005541914.localdomain sudo[272358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fixfoegguulqodxfqwliucbqyiszegba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668793.3024747-1761-247221147397616/AnsiballZ_systemd_service.py
Dec 02 09:46:33 np0005541914.localdomain sudo[272358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:46:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:46:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:46:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:46:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:46:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17697 "" "Go-http-client/1.1"
Dec 02 09:46:33 np0005541914.localdomain python3.9[272360]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:46:34 np0005541914.localdomain sudo[272358]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:35 np0005541914.localdomain sudo[272469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hzlkdlgrpcfisjjotkfxptutsszrpaib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668795.3550258-1938-139166821383825/AnsiballZ_file.py
Dec 02 09:46:35 np0005541914.localdomain sudo[272469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:35 np0005541914.localdomain python3.9[272471]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:35 np0005541914.localdomain sudo[272469]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:36 np0005541914.localdomain sudo[272579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mqvzccfetewcjqqwgwnduqjlgyelxqhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668795.91162-1938-231872420547786/AnsiballZ_file.py
Dec 02 09:46:36 np0005541914.localdomain sudo[272579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:36 np0005541914.localdomain python3.9[272581]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:36 np0005541914.localdomain sudo[272579]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:36 np0005541914.localdomain sudo[272689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ytdvihnhkdygprgiscnsoyqtwltaestw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668796.4551182-1938-56092340736896/AnsiballZ_file.py
Dec 02 09:46:36 np0005541914.localdomain sudo[272689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:36 np0005541914.localdomain python3.9[272691]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:36 np0005541914.localdomain sudo[272689]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:37 np0005541914.localdomain sudo[272799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pyqlqragopelkjjckybmalwoiyfedbpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668797.055846-1938-214845162123491/AnsiballZ_file.py
Dec 02 09:46:37 np0005541914.localdomain sudo[272799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:37 np0005541914.localdomain python3.9[272801]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:37 np0005541914.localdomain sudo[272799]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:37 np0005541914.localdomain sudo[272909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pmaroxadpfujkcukbxhntwtsmuejipgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668797.6660268-1938-268012729168090/AnsiballZ_file.py
Dec 02 09:46:37 np0005541914.localdomain sudo[272909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:38 np0005541914.localdomain python3.9[272911]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:38 np0005541914.localdomain sudo[272909]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:38 np0005541914.localdomain sudo[273019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tailjlmbzjbkzspjgjbvhkskiahpmrfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668798.2724385-1938-216834116561263/AnsiballZ_file.py
Dec 02 09:46:38 np0005541914.localdomain sudo[273019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:38 np0005541914.localdomain python3.9[273021]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:38 np0005541914.localdomain sudo[273019]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:39 np0005541914.localdomain sudo[273129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ggstcumtlvyaqxiuxrnbktytwnvckxxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668798.8437574-1938-119925382298266/AnsiballZ_file.py
Dec 02 09:46:39 np0005541914.localdomain sudo[273129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:39 np0005541914.localdomain sshd[273132]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:46:39 np0005541914.localdomain python3.9[273131]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:39 np0005541914.localdomain sudo[273129]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:39 np0005541914.localdomain sudo[273241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pfwhryluqcnvwuypcmsnebawttkicmih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668799.433796-1938-154563004713940/AnsiballZ_file.py
Dec 02 09:46:39 np0005541914.localdomain sudo[273241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:46:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:46:39 np0005541914.localdomain sshd[273132]: Received disconnect from 34.78.29.97 port 54000:11: Bye Bye [preauth]
Dec 02 09:46:39 np0005541914.localdomain sshd[273132]: Disconnected from authenticating user root 34.78.29.97 port 54000 [preauth]
Dec 02 09:46:39 np0005541914.localdomain podman[273245]: 2025-12-02 09:46:39.796968425 +0000 UTC m=+0.085305502 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 02 09:46:39 np0005541914.localdomain podman[273245]: 2025-12-02 09:46:39.805825566 +0000 UTC m=+0.094162603 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 09:46:39 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:46:39 np0005541914.localdomain podman[273244]: 2025-12-02 09:46:39.890990853 +0000 UTC m=+0.181968351 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:46:39 np0005541914.localdomain podman[273244]: 2025-12-02 09:46:39.901834634 +0000 UTC m=+0.192812132 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:46:39 np0005541914.localdomain python3.9[273243]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:39 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:46:39 np0005541914.localdomain sudo[273241]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:40 np0005541914.localdomain sudo[273393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oeewoqsqzvqnvkqpgsfnwtltqsgoaqme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668800.1006675-2109-213802091329656/AnsiballZ_file.py
Dec 02 09:46:40 np0005541914.localdomain sudo[273393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:40 np0005541914.localdomain python3.9[273395]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:40 np0005541914.localdomain sudo[273393]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:40 np0005541914.localdomain sudo[273503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hyckkmdjggnacpqrlrcbafzddprakwum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668800.685007-2109-153250518924334/AnsiballZ_file.py
Dec 02 09:46:40 np0005541914.localdomain sudo[273503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:41 np0005541914.localdomain python3.9[273505]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:41 np0005541914.localdomain sudo[273503]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:41 np0005541914.localdomain sudo[273613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fkymoxeidodonxyavtwvrknlhynqsrnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668801.2012062-2109-15891454321197/AnsiballZ_file.py
Dec 02 09:46:41 np0005541914.localdomain sudo[273613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:41 np0005541914.localdomain python3.9[273615]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:41 np0005541914.localdomain sudo[273613]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:46:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:46:42 np0005541914.localdomain sudo[273735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zheodhmobvfkcepsqauctjnspyphnzlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668801.7858362-2109-73050827885692/AnsiballZ_file.py
Dec 02 09:46:42 np0005541914.localdomain sudo[273735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:42 np0005541914.localdomain systemd[1]: tmp-crun.xiwr6i.mount: Deactivated successfully.
Dec 02 09:46:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:46:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:46:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:46:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:46:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:46:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:46:42 np0005541914.localdomain podman[273712]: 2025-12-02 09:46:42.096626878 +0000 UTC m=+0.091739549 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:46:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:46:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:46:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:46:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:46:42 np0005541914.localdomain podman[273714]: 2025-12-02 09:46:42.115720882 +0000 UTC m=+0.105349056 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 02 09:46:42 np0005541914.localdomain podman[273712]: 2025-12-02 09:46:42.138335904 +0000 UTC m=+0.133448575 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:46:42 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:46:42 np0005541914.localdomain podman[273714]: 2025-12-02 09:46:42.159856902 +0000 UTC m=+0.149485036 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec 02 09:46:42 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:46:42 np0005541914.localdomain python3.9[273746]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:42 np0005541914.localdomain sudo[273735]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:42 np0005541914.localdomain sudo[273874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hevedpocwscgkyffewgxqsrdupzssurq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668802.3650486-2109-27843280825803/AnsiballZ_file.py
Dec 02 09:46:42 np0005541914.localdomain sudo[273874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:42 np0005541914.localdomain python3.9[273876]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:42 np0005541914.localdomain sudo[273874]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:43 np0005541914.localdomain sudo[273984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uguknemucpvadwdbdksjpkhjsgeegnlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668802.936155-2109-246220572663812/AnsiballZ_file.py
Dec 02 09:46:43 np0005541914.localdomain sudo[273984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:43 np0005541914.localdomain python3.9[273986]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:43 np0005541914.localdomain sudo[273984]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:43 np0005541914.localdomain sudo[274094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-msgblzqdcukymxuuwihqzivkrlwucvea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668803.5539832-2109-253187280813354/AnsiballZ_file.py
Dec 02 09:46:43 np0005541914.localdomain sudo[274094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:44 np0005541914.localdomain python3.9[274096]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:44 np0005541914.localdomain sudo[274094]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:44 np0005541914.localdomain sudo[274204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eydaitwstaodanjbonkmqmorecklbbxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668804.1381705-2109-145438246595030/AnsiballZ_file.py
Dec 02 09:46:44 np0005541914.localdomain sudo[274204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:44 np0005541914.localdomain python3.9[274206]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:46:44 np0005541914.localdomain sudo[274204]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:45 np0005541914.localdomain sudo[274314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tqmxzkxaxlielqlgvinxinctykehoank ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668804.9378507-2283-104876984982964/AnsiballZ_command.py
Dec 02 09:46:45 np0005541914.localdomain sudo[274314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:45 np0005541914.localdomain python3.9[274316]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:46:45 np0005541914.localdomain sudo[274314]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59040 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD552623E0000000001030307) 
Dec 02 09:46:46 np0005541914.localdomain python3.9[274426]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 02 09:46:46 np0005541914.localdomain sudo[274534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dripjyfxvjknvdouocdomasmrpyrjccz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668806.6012392-2337-196595093586003/AnsiballZ_systemd_service.py
Dec 02 09:46:46 np0005541914.localdomain sudo[274534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59041 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55266620000000001030307) 
Dec 02 09:46:47 np0005541914.localdomain python3.9[274536]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 02 09:46:47 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:46:47 np0005541914.localdomain systemd-rc-local-generator[274558]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:46:47 np0005541914.localdomain systemd-sysv-generator[274564]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:46:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:46:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:47 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:46:47 np0005541914.localdomain sudo[274534]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10335 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55269230000000001030307) 
Dec 02 09:46:48 np0005541914.localdomain sudo[274680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zfsisjwphgepzqzgcuqmkjgzsjkisqpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668807.7602117-2361-20533311382006/AnsiballZ_command.py
Dec 02 09:46:48 np0005541914.localdomain sudo[274680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:48 np0005541914.localdomain python3.9[274682]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:46:48 np0005541914.localdomain sudo[274680]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:48 np0005541914.localdomain sudo[274791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ukxlskmatbxbkpevvmobnobggvkaunay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668808.3407872-2361-61789768915838/AnsiballZ_command.py
Dec 02 09:46:48 np0005541914.localdomain sudo[274791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:48 np0005541914.localdomain python3.9[274793]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:46:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59042 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5526E620000000001030307) 
Dec 02 09:46:49 np0005541914.localdomain sudo[274791]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26094 DF PROTO=TCP SPT=55788 DPT=9102 SEQ=335465398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55271220000000001030307) 
Dec 02 09:46:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:46:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:46:50 np0005541914.localdomain podman[274854]: 2025-12-02 09:46:50.149994772 +0000 UTC m=+0.140496002 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm)
Dec 02 09:46:50 np0005541914.localdomain sudo[274918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qqdeqishwgpuairssnnavzmdmjtfakvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668809.9216168-2361-179061135473732/AnsiballZ_command.py
Dec 02 09:46:50 np0005541914.localdomain sudo[274918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:50 np0005541914.localdomain podman[274854]: 2025-12-02 09:46:50.197037311 +0000 UTC m=+0.187538601 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, architecture=x86_64, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 09:46:50 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:46:50 np0005541914.localdomain podman[274920]: 2025-12-02 09:46:50.297089303 +0000 UTC m=+0.138478249 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:46:50 np0005541914.localdomain podman[274920]: 2025-12-02 09:46:50.332017813 +0000 UTC m=+0.173406799 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:46:50 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:46:50 np0005541914.localdomain python3.9[274933]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:46:50 np0005541914.localdomain sudo[274918]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:50 np0005541914.localdomain sudo[275052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nrkhacdtrtjtwshafyereqzhwvyqrvci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668810.4983995-2361-175256567226611/AnsiballZ_command.py
Dec 02 09:46:50 np0005541914.localdomain sudo[275052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:50 np0005541914.localdomain python3.9[275054]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:46:50 np0005541914.localdomain sudo[275052]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:51 np0005541914.localdomain sudo[275163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tonwcglvghydsvlgricopwdnlalxxdwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668811.0866287-2361-124584735419933/AnsiballZ_command.py
Dec 02 09:46:51 np0005541914.localdomain sudo[275163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:51 np0005541914.localdomain python3.9[275165]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:46:51 np0005541914.localdomain sudo[275163]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:51 np0005541914.localdomain sudo[275274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zaqackfkogoybhztsntqzhkoildmeajj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668811.606343-2361-207073119814516/AnsiballZ_command.py
Dec 02 09:46:51 np0005541914.localdomain sudo[275274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:52 np0005541914.localdomain python3.9[275276]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:46:53 np0005541914.localdomain sudo[275274]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59043 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5527E220000000001030307) 
Dec 02 09:46:53 np0005541914.localdomain sudo[275385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewvkdngdlnepnnfuuhhsbcdyitfjdvnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668813.211026-2361-31991118213465/AnsiballZ_command.py
Dec 02 09:46:53 np0005541914.localdomain sudo[275385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:53 np0005541914.localdomain python3.9[275387]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:46:54 np0005541914.localdomain sudo[275385]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:55 np0005541914.localdomain sudo[275496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-axqqrxibuxjpfhdmnlsuepcpjbvzicub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668814.8205538-2361-92485363731187/AnsiballZ_command.py
Dec 02 09:46:55 np0005541914.localdomain sudo[275496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:55 np0005541914.localdomain python3.9[275498]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:46:55 np0005541914.localdomain sudo[275496]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:57 np0005541914.localdomain sudo[275607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rxpfuswqesqjjfzcqxbqycdheunvxilf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668816.9034758-2568-189133607790838/AnsiballZ_file.py
Dec 02 09:46:57 np0005541914.localdomain sudo[275607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:46:57 np0005541914.localdomain systemd[1]: tmp-crun.9UxryQ.mount: Deactivated successfully.
Dec 02 09:46:57 np0005541914.localdomain podman[275610]: 2025-12-02 09:46:57.307526238 +0000 UTC m=+0.089360746 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 09:46:57 np0005541914.localdomain podman[275610]: 2025-12-02 09:46:57.319811763 +0000 UTC m=+0.101646251 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 09:46:57 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:46:57 np0005541914.localdomain python3.9[275609]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:46:57 np0005541914.localdomain sudo[275607]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:57 np0005541914.localdomain sudo[275736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-knwmvmcyodjmejtjikgbdvnqadvdydpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668817.520514-2568-235593900273889/AnsiballZ_file.py
Dec 02 09:46:57 np0005541914.localdomain sudo[275736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:57 np0005541914.localdomain sudo[275737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:46:57 np0005541914.localdomain sudo[275737]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:46:57 np0005541914.localdomain sudo[275737]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:57 np0005541914.localdomain sudo[275757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:46:57 np0005541914.localdomain sudo[275757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:46:57 np0005541914.localdomain python3.9[275754]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:46:57 np0005541914.localdomain sudo[275736]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:58 np0005541914.localdomain sudo[275896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kgbxtsnyytfbfnfubckavrbcygosovsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668818.0995376-2568-107334546218707/AnsiballZ_file.py
Dec 02 09:46:58 np0005541914.localdomain sudo[275896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:58 np0005541914.localdomain sudo[275757]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:58 np0005541914.localdomain python3.9[275899]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:46:58 np0005541914.localdomain sudo[275896]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:58 np0005541914.localdomain sudo[275944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:46:58 np0005541914.localdomain sudo[275944]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:46:58 np0005541914.localdomain sudo[275944]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:58 np0005541914.localdomain sudo[275991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 02 09:46:58 np0005541914.localdomain sudo[275991]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:46:59 np0005541914.localdomain sudo[276059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nzlvofaylerzsjmhbvtecjfsytpmhzhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668818.781624-2634-224340729905560/AnsiballZ_file.py
Dec 02 09:46:59 np0005541914.localdomain sudo[276059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:59 np0005541914.localdomain python3.9[276061]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:46:59 np0005541914.localdomain sudo[276059]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:59 np0005541914.localdomain sudo[275991]: pam_unix(sudo:session): session closed for user root
Dec 02 09:46:59 np0005541914.localdomain sudo[276187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eyzlyziroukmrfeaeeihbrxbhfzkhoig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668819.3647714-2634-19887450457402/AnsiballZ_file.py
Dec 02 09:46:59 np0005541914.localdomain sudo[276187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:46:59 np0005541914.localdomain python3.9[276189]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:46:59 np0005541914.localdomain sudo[276187]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:00 np0005541914.localdomain sudo[276297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-picetldzxuyyezllbcqxbkzuptoscblq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668819.9298675-2634-54965105033300/AnsiballZ_file.py
Dec 02 09:47:00 np0005541914.localdomain sudo[276297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:00 np0005541914.localdomain python3.9[276299]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:47:00 np0005541914.localdomain sudo[276297]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:00 np0005541914.localdomain sudo[276407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jursicrgmjcjmftppsccjkcfkekjoeoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668820.4567635-2634-272921825042566/AnsiballZ_file.py
Dec 02 09:47:00 np0005541914.localdomain sudo[276407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:00 np0005541914.localdomain python3.9[276409]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:47:00 np0005541914.localdomain sudo[276407]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:01 np0005541914.localdomain sudo[276517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ggkssrlctjgawgcxoxwcktxqcufhrfcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668821.0858903-2634-211689140749942/AnsiballZ_file.py
Dec 02 09:47:01 np0005541914.localdomain sudo[276517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:01 np0005541914.localdomain python3.9[276519]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:47:01 np0005541914.localdomain sudo[276517]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59044 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5529F220000000001030307) 
Dec 02 09:47:02 np0005541914.localdomain sudo[276627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uanhhkgrwqnwpaupdzxxupspskwsqjky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668821.8513765-2634-9156377541625/AnsiballZ_file.py
Dec 02 09:47:02 np0005541914.localdomain sudo[276627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:02 np0005541914.localdomain sudo[276630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:47:02 np0005541914.localdomain sudo[276630]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:47:02 np0005541914.localdomain sudo[276630]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:02 np0005541914.localdomain python3.9[276629]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:47:02 np0005541914.localdomain sudo[276627]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:02 np0005541914.localdomain sudo[276755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mfeqogsqyoqxmixqiwxlkxotafrtlsts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668822.4153888-2634-123144301329365/AnsiballZ_file.py
Dec 02 09:47:02 np0005541914.localdomain sudo[276755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:02 np0005541914.localdomain python3.9[276757]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:47:02 np0005541914.localdomain sudo[276755]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:47:03.157 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:47:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:47:03.157 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:47:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:47:03.158 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:47:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:47:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:47:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:47:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:47:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:47:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17700 "" "Go-http-client/1.1"
Dec 02 09:47:08 np0005541914.localdomain sudo[276865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hkmhwjwghzhlyzvdgiseofrwepbrhvgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668828.2171595-2959-88061420779322/AnsiballZ_getent.py
Dec 02 09:47:08 np0005541914.localdomain sudo[276865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:08 np0005541914.localdomain python3.9[276867]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 02 09:47:08 np0005541914.localdomain sudo[276865]: pam_unix(sudo:session): session closed for user root
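[annotation] ansible.builtin.getent with database=passwd key=nova fail_key=True is just a passwd lookup for the nova account; a minimal stdlib equivalent of that check:

    import pwd

    try:
        nova = pwd.getpwnam("nova")
    except KeyError:
        raise SystemExit("user 'nova' not found")  # fail_key=True behaviour
    print(nova.pw_uid, nova.pw_gid, nova.pw_dir)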
Dec 02 09:47:09 np0005541914.localdomain sshd[276886]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:47:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:47:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:47:10 np0005541914.localdomain sshd[276886]: Accepted publickey for zuul from 192.168.122.30 port 50376 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:47:10 np0005541914.localdomain systemd-logind[760]: New session 60 of user zuul.
Dec 02 09:47:10 np0005541914.localdomain systemd[1]: Started Session 60 of User zuul.
Dec 02 09:47:10 np0005541914.localdomain sshd[276886]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:47:10 np0005541914.localdomain podman[276889]: 2025-12-02 09:47:10.153080575 +0000 UTC m=+0.154089977 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 09:47:10 np0005541914.localdomain podman[276888]: 2025-12-02 09:47:10.109124129 +0000 UTC m=+0.112167234 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:47:10 np0005541914.localdomain podman[276889]: 2025-12-02 09:47:10.166258268 +0000 UTC m=+0.167267650 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 09:47:10 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:47:10 np0005541914.localdomain sshd[276909]: Received disconnect from 192.168.122.30 port 50376:11: disconnected by user
Dec 02 09:47:10 np0005541914.localdomain sshd[276909]: Disconnected from user zuul 192.168.122.30 port 50376
Dec 02 09:47:10 np0005541914.localdomain sshd[276886]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:47:10 np0005541914.localdomain podman[276888]: 2025-12-02 09:47:10.195045439 +0000 UTC m=+0.198088534 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:47:10 np0005541914.localdomain systemd[1]: session-60.scope: Deactivated successfully.
Dec 02 09:47:10 np0005541914.localdomain systemd-logind[760]: Session 60 logged out. Waiting for processes to exit.
Dec 02 09:47:10 np0005541914.localdomain systemd-logind[760]: Removed session 60.
Dec 02 09:47:10 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:47:10 np0005541914.localdomain python3.9[277041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:47:10 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:10.972 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:47:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7200.1 total, 600.0 interval
                                                          Cumulative writes: 4846 writes, 21K keys, 4846 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4846 writes, 677 syncs, 7.16 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
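[annotation] The DB Stats block above reports 4846 cumulative WAL writes over 677 syncs; the "7.16 writes per sync" figure is simply that ratio, and the 0.00 MB/s rates follow from roughly 0.02 GB ingested over 7200 s of uptime. A small check of the arithmetic from the logged counters:

    writes, syncs = 4846, 677
    print(f"{writes / syncs:.2f} writes per sync")   # -> 7.16, as logged
    print(f"{0.02 * 1024 / 7200.1:.4f} MB/s")        # ~0.0028, i.e. 0.00 at two decimals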
Dec 02 09:47:11 np0005541914.localdomain python3.9[277127]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668830.410593-3041-20451610905541/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:47:11 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:11.641 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:47:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:47:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:47:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:47:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:47:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:47:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:47:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:47:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:47:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:47:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:47:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:47:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:47:12 np0005541914.localdomain python3.9[277235]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:47:12 np0005541914.localdomain python3.9[277290]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:47:12 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:12.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:47:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:47:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:47:13 np0005541914.localdomain podman[277399]: 2025-12-02 09:47:13.08042647 +0000 UTC m=+0.082738834 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 09:47:13 np0005541914.localdomain podman[277399]: 2025-12-02 09:47:13.087820297 +0000 UTC m=+0.090132651 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true)
Dec 02 09:47:13 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:47:13 np0005541914.localdomain podman[277400]: 2025-12-02 09:47:13.144775389 +0000 UTC m=+0.142029618 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 02 09:47:13 np0005541914.localdomain python3.9[277398]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:47:13 np0005541914.localdomain podman[277400]: 2025-12-02 09:47:13.251720762 +0000 UTC m=+0.248974951 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 09:47:13 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:47:13 np0005541914.localdomain python3.9[277525]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668832.7037854-3041-208133948468060/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:47:14 np0005541914.localdomain python3.9[277633]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:47:14 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:14.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:47:14 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:14.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:47:14 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:14.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:47:14 np0005541914.localdomain python3.9[277719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668833.7850075-3041-258902145269385/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=2618deabb92e3bb6763a4ba7147e78332a2d3a7c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
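[annotation] The ansible.legacy.copy entries record a checksum for each deployed file (e.g. 2618deab... for 02-nova-host-specific.conf), and the matching stat calls show checksum_algorithm=sha1, so the value is the SHA-1 of the file content. A short way to reproduce it:

    import hashlib

    def content_sha1(path):
        """SHA-1 of the file content, as reported in the copy/stat 'checksum' field."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    print(content_sha1("/var/lib/openstack/config/nova/02-nova-host-specific.conf"))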
Dec 02 09:47:15 np0005541914.localdomain python3.9[277827]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:47:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:15.640 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:47:15 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:15.640 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:47:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7200.2 total, 600.0 interval
                                                          Cumulative writes: 5767 writes, 25K keys, 5767 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5767 writes, 746 syncs, 7.73 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 09:47:15 np0005541914.localdomain python3.9[277913]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668834.940844-3041-77157463090852/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:47:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10172 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD552D76E0000000001030307) 
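[annotation] The kernel "DROPPING:" lines are netfilter LOG records for SYN packets to port 9102 that the host firewall rejects on br-ex. A small sketch of pulling the interesting key=value fields out of such a line with a regex:

    import re

    line = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 "
            "MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 "
            "DST=192.168.122.108 LEN=60 PROTO=TCP SPT=38574 DPT=9102")

    fields = dict(re.findall(r"(\w+)=(\S+)", line))   # empty fields like OUT= are skipped
    print(fields["SRC"], "->", fields["DST"], "dport", fields["DPT"], fields["PROTO"])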
Dec 02 09:47:16 np0005541914.localdomain python3.9[278021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:47:16 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:16.636 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:47:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10173 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD552DB630000000001030307) 
Dec 02 09:47:17 np0005541914.localdomain python3.9[278107]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764668836.005264-3041-121873187305884/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:47:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:17.179 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:47:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:17.179 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:47:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:17.179 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:47:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:17.211 229589 DEBUG nova.compute.manager [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:47:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:17.639 229589 DEBUG oslo_service.periodic_task [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:47:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:17.657 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:47:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:17.657 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:47:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:17.658 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:47:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:17.658 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:47:17 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:17.659 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:47:17 np0005541914.localdomain sudo[278215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlymgmypntozibkhdosoavysxkbozyhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668837.401067-3289-171039589866822/AnsiballZ_file.py
Dec 02 09:47:17 np0005541914.localdomain sudo[278215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:17 np0005541914.localdomain python3.9[278218]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:47:17 np0005541914.localdomain sudo[278215]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59045 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD552DF220000000001030307) 
Dec 02 09:47:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:18.129 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
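[annotation] The resource tracker shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size the RBD-backed disk capacity. A hedged sketch of running the same command and inspecting the reply; field names vary between Ceph releases, so only the top-level JSON structure is assumed here:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    report = json.loads(out)
    # Typically contains cluster-wide "stats" and per-pool "pools" sections.
    print(sorted(report.keys()))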
Dec 02 09:47:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:18.312 229589 WARNING nova.virt.libvirt.driver [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:47:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:18.314 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12466MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:47:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:18.314 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:47:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:18.314 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:47:18 np0005541914.localdomain sudo[278347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtojbhrikivvcfpbnivhzffdlcsrnhwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668838.0846446-3313-132349809294716/AnsiballZ_copy.py
Dec 02 09:47:18 np0005541914.localdomain sudo[278347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:18.376 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:47:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:18.376 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:47:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:18.398 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:47:18 np0005541914.localdomain python3.9[278349]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:47:18 np0005541914.localdomain sudo[278347]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:18.882 229589 DEBUG oslo_concurrency.processutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:47:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:18.887 229589 DEBUG nova.compute.provider_tree [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:47:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:18.904 229589 DEBUG nova.scheduler.client.report [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
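[annotation] The inventory reported to placement above pairs each resource class with a reserved amount and an allocation_ratio; the schedulable capacity is (total - reserved) * allocation_ratio. Worked out from the values in that log line:

    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 41.0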
Dec 02 09:47:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:18.907 229589 DEBUG nova.compute.resource_tracker [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:47:18 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:18.907 229589 DEBUG oslo_concurrency.lockutils [None req-c574df46-a852-44ad-9660-1d0628ff3122 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:47:19 np0005541914.localdomain sudo[278479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ueuyydcmerkxkextgvcfcjkejkrnqbkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668838.7910495-3338-207220850827696/AnsiballZ_stat.py
Dec 02 09:47:19 np0005541914.localdomain sudo[278479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10174 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD552E3630000000001030307) 
Dec 02 09:47:19 np0005541914.localdomain python3.9[278481]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:47:19 np0005541914.localdomain sudo[278479]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:19 np0005541914.localdomain sudo[278591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-protnccdwjerxtbcldbfhovmwmsbzmxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668839.519121-3364-143586561859495/AnsiballZ_file.py
Dec 02 09:47:19 np0005541914.localdomain sudo[278591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:19 np0005541914.localdomain python3.9[278593]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:47:19 np0005541914.localdomain sudo[278591]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:20 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10336 DF PROTO=TCP SPT=58450 DPT=9102 SEQ=1913653352 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD552E7220000000001030307) 
Dec 02 09:47:20 np0005541914.localdomain python3.9[278701]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:47:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:47:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:47:21 np0005541914.localdomain podman[278738]: 2025-12-02 09:47:21.082587217 +0000 UTC m=+0.080623808 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, managed_by=edpm_ansible, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 02 09:47:21 np0005541914.localdomain podman[278738]: 2025-12-02 09:47:21.093954336 +0000 UTC m=+0.091990957 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=)
Dec 02 09:47:21 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:47:21 np0005541914.localdomain podman[278737]: 2025-12-02 09:47:21.130653589 +0000 UTC m=+0.129634360 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:47:21 np0005541914.localdomain podman[278737]: 2025-12-02 09:47:21.168888878 +0000 UTC m=+0.167869659 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:47:21 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
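[annotation] node_exporter is published on host port 9100 (ports: ['9100:9100'] in its config_data above), so besides the podman healthcheck its liveness can be checked by scraping the metrics endpoint directly. A minimal stdlib sketch, assuming the exporter is reachable on localhost:

    import urllib.request

    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        body = resp.read().decode()
    print(resp.status, len(body.splitlines()), "metric lines")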
Dec 02 09:47:21 np0005541914.localdomain python3.9[278854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:47:21 np0005541914.localdomain python3.9[278909]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute.json _original_basename=nova_compute.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:47:22 np0005541914.localdomain python3.9[279017]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 02 09:47:22 np0005541914.localdomain python3.9[279072]: ansible-ansible.legacy.file Invoked with mode=0700 setype=container_file_t dest=/var/lib/openstack/config/containers/nova_compute_init.json _original_basename=nova_compute_init.json.j2 recurse=False state=file path=/var/lib/openstack/config/containers/nova_compute_init.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 02 09:47:23 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10175 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD552F3220000000001030307) 
Dec 02 09:47:24 np0005541914.localdomain sudo[279180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xssreaohshrwcaowzpcfyjkwvlkfddak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668843.4123096-3493-63317543911666/AnsiballZ_container_config_data.py
Dec 02 09:47:24 np0005541914.localdomain sudo[279180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:24 np0005541914.localdomain python3.9[279182]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 02 09:47:24 np0005541914.localdomain sudo[279180]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:24 np0005541914.localdomain sudo[279290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oqzdvpanhiarmidrsslvpmqevoxtatrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668844.5475767-3520-258615471070664/AnsiballZ_container_config_hash.py
Dec 02 09:47:24 np0005541914.localdomain sudo[279290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:25 np0005541914.localdomain python3.9[279292]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:47:25 np0005541914.localdomain sudo[279290]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:25 np0005541914.localdomain sudo[279400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-parimbcvhapazadreatkkvkbaktpfctv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764668845.5252237-3550-44022018214862/AnsiballZ_edpm_container_manage.py
Dec 02 09:47:25 np0005541914.localdomain sudo[279400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:25 np0005541914.localdomain python3[279402]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:47:26 np0005541914.localdomain python3[279402]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",
                                                                    "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:31:10.62653219Z",
                                                                    "Config": {
                                                                         "User": "nova",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 1211779450,
                                                                    "VirtualSize": 1211779450,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",
                                                                              "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "nova",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:53.072482982Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:02.761216507Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:03.785234187Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:17.194997182Z",
                                                                              "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:24.14458279Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:30.048641643Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:09.707360362Z",
                                                                              "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.208898452Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624465805Z",
                                                                              "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624514176Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:18.661822382Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 02 09:47:26 np0005541914.localdomain sudo[279400]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:27 np0005541914.localdomain sudo[279573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-quzmjmxkmlrcwsqfyhvtcdctwkdhscfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668846.7301948-3574-208602465876419/AnsiballZ_stat.py
Dec 02 09:47:27 np0005541914.localdomain sudo[279573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:27 np0005541914.localdomain python3.9[279575]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:47:27 np0005541914.localdomain sudo[279573]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:47:28 np0005541914.localdomain podman[279595]: 2025-12-02 09:47:28.083762188 +0000 UTC m=+0.079417412 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 09:47:28 np0005541914.localdomain podman[279595]: 2025-12-02 09:47:28.094686453 +0000 UTC m=+0.090341687 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 09:47:28 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:47:28 np0005541914.localdomain sudo[279704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hexddthouwcnuygvhachofbpxumdzhee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668848.092255-3610-194060830067766/AnsiballZ_container_config_data.py
Dec 02 09:47:28 np0005541914.localdomain sudo[279704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:28 np0005541914.localdomain python3.9[279706]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 02 09:47:28 np0005541914.localdomain sudo[279704]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:29 np0005541914.localdomain sudo[279814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kqftpaueflztscqofoveuwkxrhkkkxuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668848.8811712-3637-74984346872403/AnsiballZ_container_config_hash.py
Dec 02 09:47:29 np0005541914.localdomain sudo[279814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:29 np0005541914.localdomain python3.9[279816]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 02 09:47:29 np0005541914.localdomain sudo[279814]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:29 np0005541914.localdomain sudo[279924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-glnqwznjbikttuszpmauziiddzugjpwn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1764668849.7424812-3667-258087591040797/AnsiballZ_edpm_container_manage.py
Dec 02 09:47:29 np0005541914.localdomain sudo[279924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:30 np0005541914.localdomain python3[279926]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 02 09:47:30 np0005541914.localdomain python3[279926]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "5571c1b2140c835f70406e4553b3b44135b9c9b4eb673345cbd571460c5d59a3",
                                                                    "Digest": "sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:d6189c79b326e4b09ccae1141528b03bc59b2533781a960e8f91f2a5dbb343d5"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2025-12-01T06:31:10.62653219Z",
                                                                    "Config": {
                                                                         "User": "nova",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20251125",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 1211779450,
                                                                    "VirtualSize": 1211779450,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/bb270959ea4f0d2c0dd791aa5a80a96b2d6621117349e00f19fca53fc0632a22/diff:/var/lib/containers/storage/overlay/11c5062d45c4d7c0ad6abaddd64ed9bdbf7963c4793402f2ed3e5264e255ad60/diff:/var/lib/containers/storage/overlay/ac70de19a933522ca2cf73df928823e8823ff6b4231733a8230c668e15d517e9/diff:/var/lib/containers/storage/overlay/cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/45b05c829d68772ce6f113ebe908af5bcf8533af84d5ff30fea8dfca06e71a2d/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:cf752d9babba20815c6849e3dd587209dffdfbbc56c600ddbc26d05721943ffa",
                                                                              "sha256:d26dbee55abfd9d572bfbbd4b765c5624affd9ef117ad108fb34be41e199a619",
                                                                              "sha256:86c2cd3987225f8a9bf38cc88e9c24b56bdf4a194f2301186519b4a7571b0c92",
                                                                              "sha256:baa8e0bc73d6b505f07c40d4f69a464312cc41ae2045c7975dd4759c27721a22",
                                                                              "sha256:d0cde44181262e43c105085c32a5af158b232f2e2ce4fe4b50530d7cdc5126cd"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20251125",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "nova",
                                                                    "History": [
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223494528Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:cacf1a97b4abfca5db2db22f7ddbca8fd7daa5076a559639c109f09aaf55871d in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:36.223562059Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20251125\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-11-25T04:02:39.054452717Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025707917Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025744608Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025767729Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025791379Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.02581523Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.025867611Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:09:28.469442331Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:02.029095017Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:05.672474685Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.113425253Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:06.532320725Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.370061347Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:07.805172373Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.259306372Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:08.625948784Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.028304824Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.423316076Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:09.801219631Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.239187116Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:10.70996597Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.147342611Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:11.5739488Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.006975065Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:12.421255505Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.066694755Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.475695836Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:16.8971372Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:18.542651107Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622503041Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622561802Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622578342Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:20.622594423Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:10:22.080892529Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:15.092312074Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:53.218820537Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:12:56.858075591Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:17:53.072482982Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:02.761216507Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:18:03.785234187Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:17.194997182Z",
                                                                              "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:19:24.14458279Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:29:30.048641643Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:fa2bb8efef6782c26ea7f1675eeb36dd",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:09.707360362Z",
                                                                              "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.208898452Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624465805Z",
                                                                              "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:10.624514176Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2025-12-01T06:31:18.661822382Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"fa2bb8efef6782c26ea7f1675eeb36dd\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 02 09:47:30 np0005541914.localdomain sudo[279924]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:31 np0005541914.localdomain sudo[280095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xpdhrkettdietmlcctbhatezuulvbkax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668850.8926709-3691-208390160770909/AnsiballZ_stat.py
Dec 02 09:47:31 np0005541914.localdomain sudo[280095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10176 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55313230000000001030307) 
Dec 02 09:47:31 np0005541914.localdomain python3.9[280097]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:47:31 np0005541914.localdomain sudo[280095]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:31 np0005541914.localdomain sudo[280207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xovedzdmlxtrwgfeduyltnupioczppii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668851.681368-3717-55843620754066/AnsiballZ_file.py
Dec 02 09:47:31 np0005541914.localdomain sudo[280207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:32 np0005541914.localdomain python3.9[280209]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:47:32 np0005541914.localdomain sudo[280207]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:32 np0005541914.localdomain sudo[280316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhhdyxpmetsdnrhumyrjhkbocyxbsolk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668852.2447531-3717-210465183495401/AnsiballZ_copy.py
Dec 02 09:47:32 np0005541914.localdomain sudo[280316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:32 np0005541914.localdomain python3.9[280318]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764668852.2447531-3717-210465183495401/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:47:32 np0005541914.localdomain sudo[280316]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:33 np0005541914.localdomain sudo[280371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mljukjhqpnxccerbinsmmwaofqvkpvej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668852.2447531-3717-210465183495401/AnsiballZ_systemd.py
Dec 02 09:47:33 np0005541914.localdomain sudo[280371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:33 np0005541914.localdomain python3.9[280373]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:47:33 np0005541914.localdomain sudo[280371]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:47:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:47:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:47:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:47:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:47:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17697 "" "Go-http-client/1.1"
Dec 02 09:47:34 np0005541914.localdomain python3.9[280483]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:47:35 np0005541914.localdomain python3.9[280591]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:47:36 np0005541914.localdomain python3.9[280699]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 02 09:47:37 np0005541914.localdomain sudo[280807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tasccuyuuffdwwjdvufnjexqntdhrgmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668856.662915-3887-190945063261013/AnsiballZ_podman_container.py
Dec 02 09:47:37 np0005541914.localdomain sudo[280807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:37 np0005541914.localdomain python3.9[280809]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 02 09:47:37 np0005541914.localdomain sudo[280807]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:37 np0005541914.localdomain systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 103.9 (346 of 333 items), suggesting rotation.
Dec 02 09:47:37 np0005541914.localdomain systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 02 09:47:37 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:47:37 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:47:38 np0005541914.localdomain sudo[280941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kuyxnqalbydwxsecgykwjemacwbpvqsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668857.7393446-3910-183889918061381/AnsiballZ_systemd.py
Dec 02 09:47:38 np0005541914.localdomain sudo[280941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:38 np0005541914.localdomain python3.9[280943]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 02 09:47:39 np0005541914.localdomain systemd[1]: Stopping nova_compute container...
Dec 02 09:47:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:47:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:47:41 np0005541914.localdomain podman[280961]: 2025-12-02 09:47:41.088043722 +0000 UTC m=+0.087772239 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:47:41 np0005541914.localdomain podman[280961]: 2025-12-02 09:47:41.128937533 +0000 UTC m=+0.128666020 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:47:41 np0005541914.localdomain podman[280962]: 2025-12-02 09:47:41.138067723 +0000 UTC m=+0.135461568 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 09:47:41 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:47:41 np0005541914.localdomain podman[280962]: 2025-12-02 09:47:41.152829164 +0000 UTC m=+0.150223049 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 09:47:41 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:47:41 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:41.874 229589 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Dec 02 09:47:41 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:41.877 229589 DEBUG oslo_concurrency.lockutils [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 09:47:41 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:41.878 229589 DEBUG oslo_concurrency.lockutils [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 09:47:41 np0005541914.localdomain nova_compute[229585]: 2025-12-02 09:47:41.878 229589 DEBUG oslo_concurrency.lockutils [None req-7b54325d-ae8a-4797-a6c4-1babab245a7f - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 09:47:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:47:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:47:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:47:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:47:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:47:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:47:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:47:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:47:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:47:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:47:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:47:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:47:42 np0005541914.localdomain virtqemud[228953]: End of file while reading data: Input/output error
Dec 02 09:47:42 np0005541914.localdomain systemd[1]: libpod-e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256.scope: Deactivated successfully.
Dec 02 09:47:42 np0005541914.localdomain systemd[1]: libpod-e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256.scope: Consumed 17.770s CPU time.
Dec 02 09:47:42 np0005541914.localdomain podman[280947]: 2025-12-02 09:47:42.287149762 +0000 UTC m=+2.935368792 container died e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:47:42 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256-userdata-shm.mount: Deactivated successfully.
Dec 02 09:47:42 np0005541914.localdomain podman[280947]: 2025-12-02 09:47:42.423919958 +0000 UTC m=+3.072138968 container cleanup e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 09:47:42 np0005541914.localdomain podman[280947]: nova_compute
Dec 02 09:47:42 np0005541914.localdomain podman[281029]: error opening file `/run/crun/e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256/status`: No such file or directory
Dec 02 09:47:42 np0005541914.localdomain podman[281016]: 2025-12-02 09:47:42.493159037 +0000 UTC m=+0.042096470 container cleanup e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 02 09:47:42 np0005541914.localdomain podman[281016]: nova_compute
Dec 02 09:47:42 np0005541914.localdomain systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 02 09:47:42 np0005541914.localdomain systemd[1]: Stopped nova_compute container.
Dec 02 09:47:42 np0005541914.localdomain systemd[1]: Starting nova_compute container...
Dec 02 09:47:42 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:47:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 02 09:47:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 02 09:47:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 02 09:47:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 09:47:42 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd847ae8b8d450ddddf78efaf612113cebe913c0aa9acb083d5c321023fdf168/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 02 09:47:42 np0005541914.localdomain podman[281031]: 2025-12-02 09:47:42.622227677 +0000 UTC m=+0.101504518 container init e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true)
Dec 02 09:47:42 np0005541914.localdomain podman[281031]: 2025-12-02 09:47:42.630929393 +0000 UTC m=+0.110206234 container start e75f46e63aa63370f2bc38ffaa47e19125145eb95639c817a1bf9eb01fbf5256 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Dec 02 09:47:42 np0005541914.localdomain podman[281031]: nova_compute
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: + sudo -E kolla_set_configs
Dec 02 09:47:42 np0005541914.localdomain systemd[1]: Started nova_compute container.
Dec 02 09:47:42 np0005541914.localdomain sudo[280941]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Validating config file
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Copying service configuration files
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Deleting /etc/ceph
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Creating directory /etc/ceph
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /etc/ceph
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Writing out command to execute
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: ++ cat /run_command
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: + CMD=nova-compute
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: + ARGS=
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: + sudo kolla_copy_cacerts
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: + [[ ! -n '' ]]
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: + . kolla_extend_start
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: + echo 'Running command: '\''nova-compute'\'''
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: Running command: 'nova-compute'
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: + umask 0022
Dec 02 09:47:42 np0005541914.localdomain nova_compute[281045]: + exec nova-compute
Dec 02 09:47:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:47:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:47:43 np0005541914.localdomain podman[281082]: 2025-12-02 09:47:43.334198029 +0000 UTC m=+0.084272572 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 09:47:43 np0005541914.localdomain podman[281082]: 2025-12-02 09:47:43.364910458 +0000 UTC m=+0.114984961 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 09:47:43 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:47:43 np0005541914.localdomain podman[281118]: 2025-12-02 09:47:43.443304747 +0000 UTC m=+0.121430917 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 09:47:43 np0005541914.localdomain podman[281118]: 2025-12-02 09:47:43.531891789 +0000 UTC m=+0.210017949 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 02 09:47:43 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:47:43 np0005541914.localdomain sudo[281206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vbzlrfzafkdempgkkgemoyeekdieqixl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1764668863.2578497-3937-271605918992637/AnsiballZ_podman_container.py
Dec 02 09:47:43 np0005541914.localdomain sudo[281206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 02 09:47:43 np0005541914.localdomain python3.9[281208]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 02 09:47:44 np0005541914.localdomain systemd[1]: Started libpod-conmon-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e.scope.
Dec 02 09:47:44 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:47:44 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 02 09:47:44 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 02 09:47:44 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 02 09:47:44 np0005541914.localdomain podman[281232]: 2025-12-02 09:47:44.111825918 +0000 UTC m=+0.101466206 container init 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Dec 02 09:47:44 np0005541914.localdomain podman[281232]: 2025-12-02 09:47:44.123674191 +0000 UTC m=+0.113314469 container start 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true)
Dec 02 09:47:44 np0005541914.localdomain python3.9[281208]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Applying nova statedir ownership
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/b234715fc878456b41e32c4fbc669b417044dbe6c6684bbc9059e5c93396ffea
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/20273498b7380904530133bcb3f720bd45f4f00b810dc4597d81d23acd8f9673
Dec 02 09:47:44 np0005541914.localdomain nova_compute_init[281252]: INFO:nova_statedir:Nova statedir ownership complete
Dec 02 09:47:44 np0005541914.localdomain systemd[1]: libpod-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e.scope: Deactivated successfully.
Dec 02 09:47:44 np0005541914.localdomain sudo[281206]: pam_unix(sudo:session): session closed for user root
Dec 02 09:47:44 np0005541914.localdomain podman[281267]: 2025-12-02 09:47:44.289371282 +0000 UTC m=+0.078653008 container died 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 09:47:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac17f28608a7cfce4db232908145eceefc4390121a03756b7f9081a0f7d2c6d6-merged.mount: Deactivated successfully.
Dec 02 09:47:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e-userdata-shm.mount: Deactivated successfully.
Dec 02 09:47:44 np0005541914.localdomain podman[281267]: 2025-12-02 09:47:44.315828942 +0000 UTC m=+0.105110648 container cleanup 21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute_init, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2)
Dec 02 09:47:44 np0005541914.localdomain systemd[1]: libpod-conmon-21fdb0dbdd9f58ae102d96a43fbe2e853b5f997904471f5738055c23f246e34e.scope: Deactivated successfully.
Dec 02 09:47:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:44.384 281049 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 02 09:47:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:44.385 281049 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 02 09:47:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:44.385 281049 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 02 09:47:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:44.385 281049 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 02 09:47:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:44.496 281049 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:47:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:44.519 281049 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:47:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:44.519 281049 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 02 09:47:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:44.925 281049 INFO nova.virt.driver [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.058 281049 INFO nova.compute.provider_config [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 02 09:47:45 np0005541914.localdomain sshd[263026]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:47:45 np0005541914.localdomain systemd[1]: session-59.scope: Deactivated successfully.
Dec 02 09:47:45 np0005541914.localdomain systemd[1]: session-59.scope: Consumed 1min 29.058s CPU time.
Dec 02 09:47:45 np0005541914.localdomain systemd-logind[760]: Session 59 logged out. Waiting for processes to exit.
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.068 281049 DEBUG oslo_concurrency.lockutils [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 09:47:45 np0005541914.localdomain systemd-logind[760]: Removed session 59.
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.069 281049 DEBUG oslo_concurrency.lockutils [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.070 281049 DEBUG oslo_concurrency.lockutils [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.071 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.072 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.072 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.072 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.073 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.073 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.073 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.074 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.074 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.074 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.075 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.075 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.075 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.076 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.076 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.076 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.077 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.077 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.077 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.078 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] console_host                   = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.078 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.078 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.079 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.079 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.079 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.080 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.080 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.080 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.081 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.081 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.081 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.082 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.082 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.083 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.083 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.083 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.084 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.084 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.084 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] host                           = np0005541914.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.085 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.085 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.085 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.086 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.086 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.086 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.087 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.087 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.087 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.088 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.088 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.088 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.089 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.089 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.089 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.090 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.090 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.090 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.091 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.091 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.091 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.092 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.092 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.092 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.093 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.093 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.093 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.093 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.094 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.094 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.095 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.095 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.095 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.095 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.096 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.096 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.096 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.097 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.097 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.098 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.098 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.098 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] my_block_storage_ip            = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.099 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] my_ip                          = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.099 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.099 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.099 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.100 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.100 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.100 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.101 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.101 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.101 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.102 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.102 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.102 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.103 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.103 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.103 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.104 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.104 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.104 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.105 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.105 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.105 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.105 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.106 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.106 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.107 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.107 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.107 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.108 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.108 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.108 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.109 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.109 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.110 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.110 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.110 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.111 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.111 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.111 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.112 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.112 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.112 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.113 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.113 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.114 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.114 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.114 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.115 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.115 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.115 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.115 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.116 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.116 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.117 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.117 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.117 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.117 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.118 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.118 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.119 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.119 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.119 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.119 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.119 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.120 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.120 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.120 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.120 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.120 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.121 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.121 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.121 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.121 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.121 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.122 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.122 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.122 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.122 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.123 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.123 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.123 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.123 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.123 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.124 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.124 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.124 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.124 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.125 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.125 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.125 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.125 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.125 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.126 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.126 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.126 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.126 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.126 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.127 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.127 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.127 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.127 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.127 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.128 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.128 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.128 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.128 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.128 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.129 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.129 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.129 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.129 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.129 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.130 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.130 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.130 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.130 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.130 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.131 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.131 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.131 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.131 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.131 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.132 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.132 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.132 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.132 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.132 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.133 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.133 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.133 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.133 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.134 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.134 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.134 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.134 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.134 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.135 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.135 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.135 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.135 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.136 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.136 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.136 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.137 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.137 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.137 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.137 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.138 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.138 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.138 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.138 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.139 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.139 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.139 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.139 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.140 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.140 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.140 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.140 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.141 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.141 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.141 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.141 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.142 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.142 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.142 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.142 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.143 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.143 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.143 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.143 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.144 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.144 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.144 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.144 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.145 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.145 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.145 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.145 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.146 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.146 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.146 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.146 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.147 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.147 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.147 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.147 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.148 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.148 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.148 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.148 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.149 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.149 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.149 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.149 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.150 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.150 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.150 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.150 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.151 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.151 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.151 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.151 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.152 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.152 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.152 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.153 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.153 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.153 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.153 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.154 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.154 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.154 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.155 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.155 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.155 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.155 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.156 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.156 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.156 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.156 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.156 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.157 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.157 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.157 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.157 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.157 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.157 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.158 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.158 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.158 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.158 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.158 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.159 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.159 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.159 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.159 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.159 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.160 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.160 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.160 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.160 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.160 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.161 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.161 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.161 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.161 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.161 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.161 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain sshd[281312]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.162 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.162 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.162 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.162 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.162 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.163 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.163 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.163 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.163 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.164 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.164 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.164 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.164 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.164 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.165 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.165 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.165 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.165 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.165 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.165 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.166 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.166 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.166 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.166 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.166 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.167 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.168 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.168 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.168 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.168 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.168 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.168 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.169 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.170 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.171 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.172 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.173 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.174 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.175 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.176 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.178 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.178 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.179 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.179 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.179 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.182 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.183 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.183 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.184 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.184 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.185 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.185 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.186 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.186 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.187 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.187 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.187 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.188 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.188 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.188 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.189 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.189 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.189 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.190 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.190 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.191 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.191 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.191 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.192 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.192 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.192 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.192 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.193 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.193 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.193 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.194 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.194 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.195 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.195 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.195 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.196 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.196 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.197 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.197 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.198 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.198 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.198 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.199 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.199 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.200 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.200 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.200 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.201 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.201 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.201 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.202 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.202 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.203 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.203 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.203 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.204 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.204 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.205 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.205 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.205 281049 WARNING oslo_config.cfg [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: and ``live_migration_inbound_addr`` respectively.
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: ).  Its value may be silently ignored in the future.
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.206 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.206 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.207 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.207 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.208 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.208 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.208 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.209 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.209 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.210 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.210 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.210 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.211 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.211 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.212 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.212 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.212 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.213 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.213 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rbd_secret_uuid        = c7c8e171-a193-56fb-95fa-8879fcfa7074 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.214 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.214 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.214 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.215 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.215 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.216 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.216 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.216 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.217 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.217 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.218 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.218 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.219 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.219 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.219 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.220 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.220 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.221 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.221 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.221 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.222 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.222 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.223 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.223 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.223 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.224 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.224 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.225 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.225 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.225 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.226 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.226 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.227 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.227 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.227 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.228 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.228 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.229 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.229 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.229 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.230 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.230 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.230 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.231 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.231 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.231 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.232 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.232 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.232 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.232 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.233 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.233 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.233 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.234 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.234 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.234 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.234 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.235 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.235 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.235 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.236 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.236 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.236 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.236 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.237 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.237 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.237 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.238 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.238 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.238 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.238 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.239 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.239 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.239 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.239 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.240 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.240 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.240 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.241 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.241 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.241 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.241 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.242 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.242 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.242 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.242 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.243 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.243 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.243 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.244 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.244 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.244 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.244 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.245 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.245 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.245 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.245 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.246 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.246 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.246 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.247 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.247 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.247 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.247 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.248 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.248 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.248 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.249 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.249 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.249 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.249 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.250 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.250 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.250 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.251 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.251 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.251 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.251 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.252 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.252 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.252 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.253 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.253 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.253 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.253 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.254 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.254 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.254 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.254 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.255 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.255 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.255 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.256 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.256 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.256 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.256 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.257 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.257 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.257 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.258 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.258 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.258 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.258 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.259 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.259 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.259 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.260 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.260 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.260 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.260 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.261 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.261 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.261 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.261 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.262 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.262 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.262 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.263 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.263 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.263 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.264 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.264 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.264 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.264 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.265 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.265 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.265 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.265 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.266 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.266 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.266 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.267 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.267 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.267 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.267 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.268 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.268 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.268 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.269 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.269 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.269 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.269 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.270 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.270 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.270 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.271 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.271 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.271 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.271 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.272 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.272 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.272 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.272 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.272 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.273 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.273 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.273 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.273 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.273 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.274 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.274 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.274 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.274 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.274 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.275 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.275 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.275 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.275 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.275 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.276 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.276 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.276 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.276 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.276 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.276 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.277 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.277 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.277 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.277 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.277 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.278 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.278 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.278 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.278 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.278 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.279 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.279 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.279 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.279 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.279 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.280 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.280 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.280 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.280 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.server_proxyclient_address = 192.168.122.108 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.280 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.281 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.281 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.281 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.281 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.281 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.282 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.282 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.282 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.282 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.282 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.282 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.283 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.283 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.283 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.283 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.283 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.284 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.284 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.284 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.284 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.284 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.285 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.285 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.285 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.285 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.285 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.286 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.286 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.286 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.286 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.286 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.287 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.287 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.287 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.287 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.287 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.288 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.288 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.288 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.288 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.288 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.289 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.289 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.289 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.289 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.289 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.290 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.290 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.290 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.290 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.290 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.290 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.291 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.291 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.291 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.291 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.291 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.292 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.292 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.292 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.292 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.292 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.293 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.293 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.293 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.293 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.293 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.293 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.294 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.294 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.294 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.294 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.294 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.295 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.295 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.295 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.295 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.295 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.296 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.296 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.296 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.296 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.296 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.296 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.297 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.297 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.297 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.297 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.297 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.298 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.298 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.298 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.298 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.298 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.299 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.299 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.299 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.299 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.299 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.299 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.300 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.300 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.300 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.300 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.300 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.301 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.301 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.301 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.301 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.301 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.301 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.302 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.302 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.302 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.302 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.302 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.303 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.303 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.303 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.303 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.303 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.304 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.304 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.304 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.304 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.304 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.305 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.305 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.305 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.305 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.305 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.305 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.306 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.306 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.306 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.306 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.306 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.307 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.307 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.307 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.307 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.307 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.308 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.308 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.308 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.308 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.308 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.308 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.309 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.309 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.309 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.309 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.309 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.310 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.310 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.310 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.310 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.310 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.311 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.311 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.311 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.311 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.311 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.311 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.312 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.312 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.312 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.312 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.312 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.313 281049 DEBUG oslo_service.service [None req-ec884c35-7db9-4b88-b56c-630d9a26b637 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.314 281049 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.325 281049 INFO nova.virt.node [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Determined node identity 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from /var/lib/nova/compute_id
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.326 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.327 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.327 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.327 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.335 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f912ab23940> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.337 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f912ab23940> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.337 281049 INFO nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Connection event '1' reason 'None'
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.341 281049 INFO nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Libvirt host capabilities <capabilities>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <host>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <uuid>64aa5208-7bf7-490c-857b-3c1a3cae8bb3</uuid>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <cpu>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <arch>x86_64</arch>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model>EPYC-Rome-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <vendor>AMD</vendor>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <microcode version='16777317'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <signature family='23' model='49' stepping='0'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='x2apic'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='tsc-deadline'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='osxsave'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='hypervisor'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='tsc_adjust'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='spec-ctrl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='stibp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='arch-capabilities'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='cmp_legacy'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='topoext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='virt-ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='lbrv'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='tsc-scale'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='vmcb-clean'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='pause-filter'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='pfthreshold'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='svme-addr-chk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='rdctl-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='skip-l1dfl-vmentry'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='mds-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature name='pschange-mc-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <pages unit='KiB' size='4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <pages unit='KiB' size='2048'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <pages unit='KiB' size='1048576'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </cpu>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <power_management>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <suspend_mem/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <suspend_disk/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <suspend_hybrid/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </power_management>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <iommu support='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <migration_features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <live/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <uri_transports>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <uri_transport>tcp</uri_transport>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <uri_transport>rdma</uri_transport>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </uri_transports>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </migration_features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <topology>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <cells num='1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <cell id='0'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:           <memory unit='KiB'>16116612</memory>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:           <pages unit='KiB' size='4'>4029153</pages>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:           <pages unit='KiB' size='2048'>0</pages>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:           <distances>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:             <sibling id='0' value='10'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:           </distances>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:           <cpus num='8'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:           </cpus>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         </cell>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </cells>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </topology>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <cache>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </cache>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <secmodel>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model>selinux</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <doi>0</doi>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </secmodel>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <secmodel>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model>dac</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <doi>0</doi>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </secmodel>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </host>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <guest>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <os_type>hvm</os_type>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <arch name='i686'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <wordsize>32</wordsize>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <domain type='qemu'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <domain type='kvm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </arch>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <pae/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <nonpae/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <acpi default='on' toggle='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <apic default='on' toggle='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <cpuselection/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <deviceboot/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <disksnapshot default='on' toggle='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <externalSnapshot/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </guest>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <guest>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <os_type>hvm</os_type>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <arch name='x86_64'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <wordsize>64</wordsize>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <domain type='qemu'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <domain type='kvm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </arch>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <acpi default='on' toggle='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <apic default='on' toggle='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <cpuselection/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <deviceboot/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <disksnapshot default='on' toggle='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <externalSnapshot/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </guest>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: </capabilities>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.347 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.348 281049 DEBUG nova.virt.libvirt.volume.mount [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.351 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: <domainCapabilities>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <path>/usr/libexec/qemu-kvm</path>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <domain>kvm</domain>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <arch>i686</arch>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <vcpu max='240'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <iothreads supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <os supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <enum name='firmware'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <loader supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>rom</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pflash</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='readonly'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>yes</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>no</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='secure'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>no</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </loader>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </os>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <cpu>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='host-passthrough' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='hostPassthroughMigratable'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>on</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>off</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='maximum' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='maximumMigratable'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>on</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>off</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='host-model' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <vendor>AMD</vendor>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='x2apic'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='tsc-deadline'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='hypervisor'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='tsc_adjust'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='spec-ctrl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='stibp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='cmp_legacy'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='overflow-recov'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='succor'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='ibrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='amd-ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='virt-ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='lbrv'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='tsc-scale'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='vmcb-clean'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='pause-filter'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='pfthreshold'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='svme-addr-chk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='disable' name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='custom' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cooperlake'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cooperlake-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cooperlake-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Dhyana-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Genoa'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amd-psfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='auto-ibrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='no-nested-data-bp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='null-sel-clr-base'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='stibp-always-on'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Genoa-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amd-psfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='auto-ibrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='no-nested-data-bp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='null-sel-clr-base'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='stibp-always-on'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Milan'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Milan-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Milan-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amd-psfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='no-nested-data-bp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='null-sel-clr-base'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='stibp-always-on'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='GraniteRapids'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='prefetchiti'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='GraniteRapids-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='prefetchiti'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='GraniteRapids-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10-128'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10-256'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10-512'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='prefetchiti'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v6'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v7'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='KnightsMill'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4fmaps'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4vnniw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512er'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512pf'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='KnightsMill-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4fmaps'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4vnniw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512er'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512pf'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G4-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tbm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G5-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tbm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SierraForest'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ne-convert'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cmpccxadd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SierraForest-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ne-convert'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cmpccxadd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='athlon'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='athlon-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='core2duo'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='core2duo-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='coreduo'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='coreduo-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='n270'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='n270-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='phenom'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='phenom-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </cpu>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <memoryBacking supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <enum name='sourceType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>file</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>anonymous</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>memfd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </memoryBacking>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <devices>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <disk supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='diskDevice'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>disk</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>cdrom</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>floppy</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>lun</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='bus'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>ide</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>fdc</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>scsi</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>usb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>sata</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-non-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <graphics supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vnc</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>egl-headless</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>dbus</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </graphics>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <video supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='modelType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vga</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>cirrus</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>none</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>bochs</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>ramfb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </video>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <hostdev supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='mode'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>subsystem</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='startupPolicy'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>default</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>mandatory</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>requisite</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>optional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='subsysType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>usb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pci</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>scsi</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='capsType'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='pciBackend'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </hostdev>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <rng supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-non-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendModel'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>random</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>egd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>builtin</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </rng>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <filesystem supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='driverType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>path</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>handle</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtiofs</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </filesystem>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <tpm supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tpm-tis</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tpm-crb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendModel'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>emulator</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>external</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendVersion'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>2.0</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </tpm>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <redirdev supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='bus'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>usb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </redirdev>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <channel supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pty</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>unix</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </channel>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <crypto supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>qemu</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendModel'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>builtin</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </crypto>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <interface supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>default</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>passt</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </interface>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <panic supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>isa</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>hyperv</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </panic>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <console supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>null</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vc</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pty</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>dev</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>file</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pipe</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>stdio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>udp</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tcp</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>unix</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>qemu-vdagent</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>dbus</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </console>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </devices>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <gic supported='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <vmcoreinfo supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <genid supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <backingStoreInput supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <backup supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <async-teardown supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <ps2 supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <sev supported='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <sgx supported='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <hyperv supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='features'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>relaxed</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vapic</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>spinlocks</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vpindex</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>runtime</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>synic</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>stimer</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>reset</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vendor_id</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>frequencies</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>reenlightenment</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tlbflush</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>ipi</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>avic</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>emsr_bitmap</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>xmm_input</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <defaults>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <spinlocks>4095</spinlocks>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <stimer_direct>on</stimer_direct>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <tlbflush_direct>off</tlbflush_direct>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <tlbflush_extended>off</tlbflush_extended>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </defaults>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </hyperv>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <launchSecurity supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='sectype'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tdx</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </launchSecurity>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: </domainCapabilities>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.356 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: <domainCapabilities>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <path>/usr/libexec/qemu-kvm</path>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <domain>kvm</domain>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <arch>i686</arch>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <vcpu max='1024'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <iothreads supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <os supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <enum name='firmware'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <loader supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>rom</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pflash</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='readonly'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>yes</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>no</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='secure'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>no</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </loader>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </os>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <cpu>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='host-passthrough' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='hostPassthroughMigratable'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>on</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>off</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='maximum' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='maximumMigratable'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>on</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>off</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='host-model' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <vendor>AMD</vendor>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='x2apic'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='tsc-deadline'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='hypervisor'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='tsc_adjust'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='spec-ctrl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='stibp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='cmp_legacy'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='overflow-recov'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='succor'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='ibrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='amd-ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='virt-ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='lbrv'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='tsc-scale'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='vmcb-clean'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='pause-filter'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='pfthreshold'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='svme-addr-chk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='disable' name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='custom' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cooperlake'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cooperlake-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cooperlake-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Dhyana-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Genoa'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amd-psfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='auto-ibrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='no-nested-data-bp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='null-sel-clr-base'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='stibp-always-on'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Genoa-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amd-psfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='auto-ibrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='no-nested-data-bp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='null-sel-clr-base'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='stibp-always-on'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Milan'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Milan-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Milan-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amd-psfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='no-nested-data-bp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='null-sel-clr-base'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='stibp-always-on'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='GraniteRapids'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='prefetchiti'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='GraniteRapids-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='prefetchiti'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='GraniteRapids-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10-128'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10-256'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10-512'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='prefetchiti'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v6'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v7'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='KnightsMill'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4fmaps'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4vnniw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512er'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512pf'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='KnightsMill-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4fmaps'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4vnniw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512er'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512pf'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G4-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tbm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G5-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tbm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SierraForest'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ne-convert'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cmpccxadd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SierraForest-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ne-convert'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cmpccxadd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='athlon'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='athlon-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='core2duo'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='core2duo-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='coreduo'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='coreduo-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='n270'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='n270-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='phenom'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='phenom-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </cpu>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <memoryBacking supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <enum name='sourceType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>file</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>anonymous</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>memfd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </memoryBacking>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <devices>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <disk supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='diskDevice'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>disk</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>cdrom</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>floppy</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>lun</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='bus'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>fdc</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>scsi</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>usb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>sata</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-non-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <graphics supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vnc</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>egl-headless</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>dbus</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </graphics>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <video supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='modelType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vga</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>cirrus</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>none</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>bochs</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>ramfb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </video>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <hostdev supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='mode'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>subsystem</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='startupPolicy'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>default</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>mandatory</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>requisite</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>optional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='subsysType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>usb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pci</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>scsi</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='capsType'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='pciBackend'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </hostdev>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <rng supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-non-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendModel'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>random</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>egd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>builtin</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </rng>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <filesystem supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='driverType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>path</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>handle</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtiofs</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </filesystem>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <tpm supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tpm-tis</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tpm-crb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendModel'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>emulator</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>external</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendVersion'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>2.0</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </tpm>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <redirdev supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='bus'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>usb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </redirdev>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <channel supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pty</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>unix</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </channel>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <crypto supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>qemu</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendModel'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>builtin</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </crypto>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <interface supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>default</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>passt</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </interface>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <panic supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>isa</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>hyperv</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </panic>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <console supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>null</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vc</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pty</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>dev</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>file</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pipe</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>stdio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>udp</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tcp</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>unix</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>qemu-vdagent</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>dbus</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </console>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </devices>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <gic supported='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <vmcoreinfo supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <genid supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <backingStoreInput supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <backup supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <async-teardown supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <ps2 supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <sev supported='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <sgx supported='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <hyperv supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='features'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>relaxed</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vapic</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>spinlocks</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vpindex</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>runtime</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>synic</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>stimer</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>reset</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vendor_id</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>frequencies</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>reenlightenment</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tlbflush</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>ipi</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>avic</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>emsr_bitmap</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>xmm_input</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <defaults>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <spinlocks>4095</spinlocks>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <stimer_direct>on</stimer_direct>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <tlbflush_direct>off</tlbflush_direct>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <tlbflush_extended>off</tlbflush_extended>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </defaults>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </hyperv>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <launchSecurity supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='sectype'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tdx</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </launchSecurity>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: </domainCapabilities>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.375 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
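(Annotation, not part of the captured log: the DEBUG entry above shows nova-compute's _get_machine_types/_get_domain_capabilities path asking libvirt for a domainCapabilities document per configured machine type, and the entry that follows dumps the result for machine_type=pc. As an illustrative sketch only — not nova's own code — the same XML seen in this log can be fetched with the libvirt-python bindings; the connection URI, emulator path, arch, virt type and machine types below are taken from the surrounding log lines and are assumptions about this particular host.)

    # Hedged sketch: reproduce the domainCapabilities query nova logs here.
    # Values mirror the log (assumptions about this host, not hard facts).
    import libvirt

    conn = libvirt.open('qemu:///system')      # assumed local system connection
    for machine in ('pc', 'q35'):              # machine types listed in the DEBUG line above
        caps_xml = conn.getDomainCapabilities(
            '/usr/libexec/qemu-kvm',           # <path> from the dumped XML
            'x86_64',                          # <arch> from the dumped XML
            machine,
            'kvm',                             # <domain> type from the dumped XML
            0)
        print(caps_xml[:200], '...')           # same document nova emits at DEBUG level
    conn.close()

(From the shell, `virsh domcapabilities --virttype kvm --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine pc` should return the equivalent document, assuming the same libvirt daemon.)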
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.379 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: <domainCapabilities>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <path>/usr/libexec/qemu-kvm</path>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <domain>kvm</domain>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <arch>x86_64</arch>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <vcpu max='240'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <iothreads supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <os supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <enum name='firmware'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <loader supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>rom</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pflash</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='readonly'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>yes</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>no</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='secure'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>no</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </loader>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </os>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <cpu>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='host-passthrough' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='hostPassthroughMigratable'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>on</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>off</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='maximum' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='maximumMigratable'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>on</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>off</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='host-model' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <vendor>AMD</vendor>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='x2apic'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='tsc-deadline'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='hypervisor'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='tsc_adjust'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='spec-ctrl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='stibp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='cmp_legacy'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='overflow-recov'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='succor'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='ibrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='amd-ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='virt-ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='lbrv'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='tsc-scale'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='vmcb-clean'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='pause-filter'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='pfthreshold'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='svme-addr-chk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='disable' name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='custom' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cooperlake'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cooperlake-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cooperlake-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Dhyana-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Genoa'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amd-psfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='auto-ibrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='no-nested-data-bp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='null-sel-clr-base'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='stibp-always-on'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Genoa-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amd-psfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='auto-ibrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='no-nested-data-bp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='null-sel-clr-base'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='stibp-always-on'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Milan'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Milan-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Milan-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amd-psfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='no-nested-data-bp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='null-sel-clr-base'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='stibp-always-on'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='GraniteRapids'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='prefetchiti'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='GraniteRapids-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='prefetchiti'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='GraniteRapids-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10-128'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10-256'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10-512'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='prefetchiti'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v6'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v7'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='KnightsMill'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4fmaps'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4vnniw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512er'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512pf'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='KnightsMill-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4fmaps'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4vnniw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512er'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512pf'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G4-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tbm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G5-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tbm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SierraForest'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ne-convert'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cmpccxadd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SierraForest-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ne-convert'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cmpccxadd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='athlon'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='athlon-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='core2duo'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='core2duo-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='coreduo'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='coreduo-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='n270'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='n270-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='phenom'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='phenom-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </cpu>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <memoryBacking supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <enum name='sourceType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>file</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>anonymous</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>memfd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </memoryBacking>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <devices>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <disk supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='diskDevice'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>disk</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>cdrom</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>floppy</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>lun</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='bus'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>ide</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>fdc</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>scsi</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>usb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>sata</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-non-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <graphics supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vnc</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>egl-headless</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>dbus</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </graphics>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <video supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='modelType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vga</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>cirrus</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>none</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>bochs</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>ramfb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </video>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <hostdev supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='mode'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>subsystem</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='startupPolicy'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>default</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>mandatory</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>requisite</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>optional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='subsysType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>usb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pci</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>scsi</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='capsType'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='pciBackend'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </hostdev>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <rng supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-non-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendModel'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>random</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>egd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>builtin</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </rng>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <filesystem supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='driverType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>path</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>handle</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtiofs</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </filesystem>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <tpm supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tpm-tis</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tpm-crb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendModel'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>emulator</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>external</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendVersion'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>2.0</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </tpm>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <redirdev supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='bus'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>usb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </redirdev>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <channel supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pty</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>unix</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </channel>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <crypto supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>qemu</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendModel'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>builtin</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </crypto>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <interface supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>default</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>passt</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </interface>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <panic supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>isa</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>hyperv</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </panic>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <console supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>null</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vc</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pty</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>dev</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>file</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pipe</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>stdio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>udp</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tcp</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>unix</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>qemu-vdagent</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>dbus</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </console>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </devices>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <gic supported='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <vmcoreinfo supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <genid supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <backingStoreInput supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <backup supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <async-teardown supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <ps2 supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <sev supported='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <sgx supported='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <hyperv supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='features'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>relaxed</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vapic</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>spinlocks</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vpindex</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>runtime</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>synic</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>stimer</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>reset</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vendor_id</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>frequencies</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>reenlightenment</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tlbflush</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>ipi</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>avic</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>emsr_bitmap</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>xmm_input</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <defaults>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <spinlocks>4095</spinlocks>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <stimer_direct>on</stimer_direct>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <tlbflush_direct>off</tlbflush_direct>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <tlbflush_extended>off</tlbflush_extended>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </defaults>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </hyperv>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <launchSecurity supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='sectype'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tdx</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </launchSecurity>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: </domainCapabilities>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.427 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: <domainCapabilities>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <path>/usr/libexec/qemu-kvm</path>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <domain>kvm</domain>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <arch>x86_64</arch>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <vcpu max='1024'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <iothreads supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <os supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <enum name='firmware'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>efi</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <loader supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>rom</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pflash</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='readonly'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>yes</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>no</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='secure'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>yes</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>no</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </loader>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </os>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <cpu>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='host-passthrough' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='hostPassthroughMigratable'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>on</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>off</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='maximum' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='maximumMigratable'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>on</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>off</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='host-model' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <vendor>AMD</vendor>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='x2apic'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='tsc-deadline'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='hypervisor'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='tsc_adjust'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='spec-ctrl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='stibp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='cmp_legacy'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='overflow-recov'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='succor'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='ibrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='amd-ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='virt-ssbd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='lbrv'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='tsc-scale'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='vmcb-clean'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='pause-filter'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='pfthreshold'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='svme-addr-chk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <feature policy='disable' name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <mode name='custom' supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Broadwell-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cascadelake-Server-v5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cooperlake'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cooperlake-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Cooperlake-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Denverton-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Dhyana-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Genoa'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amd-psfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='auto-ibrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='no-nested-data-bp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='null-sel-clr-base'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='stibp-always-on'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Genoa-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amd-psfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='auto-ibrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='no-nested-data-bp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='null-sel-clr-base'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='stibp-always-on'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Milan'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Milan-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Milan-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amd-psfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='no-nested-data-bp'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='null-sel-clr-base'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='stibp-always-on'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-Rome-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='EPYC-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='GraniteRapids'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='prefetchiti'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='GraniteRapids-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='prefetchiti'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='GraniteRapids-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10-128'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10-256'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx10-512'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='prefetchiti'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Haswell-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-noTSX'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v6'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Icelake-Server-v7'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='IvyBridge-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='KnightsMill'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4fmaps'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4vnniw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512er'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512pf'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='KnightsMill-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4fmaps'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-4vnniw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512er'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512pf'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G4-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tbm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Opteron_G5-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fma4'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tbm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xop'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SapphireRapids-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='amx-tile'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-bf16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-fp16'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512-vpopcntdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bitalg'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vbmi2'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrc'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fzrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='la57'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='taa-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='tsx-ldtrk'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xfd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SierraForest'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ne-convert'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cmpccxadd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='SierraForest-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ifma'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-ne-convert'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx-vnni-int8'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='bus-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cmpccxadd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fbsdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='fsrs'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ibrs-all'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mcdt-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pbrsb-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='psdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='sbdr-ssdp-no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='serialize'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vaes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='vpclmulqdq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Client-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='hle'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='rtm'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Skylake-Server-v5'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512bw'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512cd'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512dq'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512f'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='avx512vl'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='invpcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pcid'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='pku'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='mpx'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v2'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v3'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='core-capability'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='split-lock-detect'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='Snowridge-v4'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='cldemote'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='erms'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='gfni'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdir64b'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='movdiri'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='xsaves'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='athlon'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='athlon-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='core2duo'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='core2duo-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='coreduo'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='coreduo-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='n270'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='n270-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='ss'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='phenom'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <blockers model='phenom-v1'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnow'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <feature name='3dnowext'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </blockers>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </mode>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </cpu>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <memoryBacking supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <enum name='sourceType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>file</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>anonymous</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <value>memfd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </memoryBacking>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <devices>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <disk supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='diskDevice'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>disk</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>cdrom</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>floppy</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>lun</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='bus'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>fdc</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>scsi</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>usb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>sata</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-non-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <graphics supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vnc</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>egl-headless</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>dbus</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </graphics>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <video supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='modelType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vga</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>cirrus</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>none</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>bochs</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>ramfb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </video>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <hostdev supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='mode'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>subsystem</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='startupPolicy'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>default</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>mandatory</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>requisite</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>optional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='subsysType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>usb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pci</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>scsi</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='capsType'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='pciBackend'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </hostdev>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <rng supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtio-non-transitional</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendModel'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>random</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>egd</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>builtin</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </rng>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <filesystem supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='driverType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>path</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>handle</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>virtiofs</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </filesystem>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <tpm supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tpm-tis</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tpm-crb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendModel'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>emulator</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>external</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendVersion'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>2.0</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </tpm>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <redirdev supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='bus'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>usb</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </redirdev>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <channel supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pty</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>unix</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </channel>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <crypto supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>qemu</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendModel'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>builtin</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </crypto>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <interface supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='backendType'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>default</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>passt</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </interface>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <panic supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='model'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>isa</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>hyperv</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </panic>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <console supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='type'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>null</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vc</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pty</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>dev</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>file</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>pipe</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>stdio</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>udp</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tcp</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>unix</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>qemu-vdagent</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>dbus</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </console>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </devices>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   <features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <gic supported='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <vmcoreinfo supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <genid supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <backingStoreInput supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <backup supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <async-teardown supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <ps2 supported='yes'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <sev supported='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <sgx supported='no'/>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <hyperv supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='features'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>relaxed</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vapic</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>spinlocks</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vpindex</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>runtime</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>synic</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>stimer</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>reset</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>vendor_id</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>frequencies</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>reenlightenment</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tlbflush</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>ipi</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>avic</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>emsr_bitmap</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>xmm_input</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <defaults>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <spinlocks>4095</spinlocks>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <stimer_direct>on</stimer_direct>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <tlbflush_direct>off</tlbflush_direct>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <tlbflush_extended>off</tlbflush_extended>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </defaults>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </hyperv>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     <launchSecurity supported='yes'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       <enum name='sectype'>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:         <value>tdx</value>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:       </enum>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:     </launchSecurity>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:   </features>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: </domainCapabilities>
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.480 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.481 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.481 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.482 281049 INFO nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Secure Boot support detected
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.484 281049 INFO nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.484 281049 INFO nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.494 281049 DEBUG nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.510 281049 INFO nova.virt.node [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Determined node identity 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from /var/lib/nova/compute_id
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.526 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Verified node 9ec09c1a-d246-41d7-94f4-b482f646a9f1 matches my host np0005541914.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.553 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.609 281049 DEBUG oslo_concurrency.lockutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.609 281049 DEBUG oslo_concurrency.lockutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.609 281049 DEBUG oslo_concurrency.lockutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.610 281049 DEBUG nova.compute.resource_tracker [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:47:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:45.610 281049 DEBUG oslo_concurrency.processutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:47:45 np0005541914.localdomain sshd[281312]: Invalid user Test from 34.78.29.97 port 35702
Dec 02 09:47:45 np0005541914.localdomain sshd[281312]: Received disconnect from 34.78.29.97 port 35702:11: Bye Bye [preauth]
Dec 02 09:47:45 np0005541914.localdomain sshd[281312]: Disconnected from invalid user Test 34.78.29.97 port 35702 [preauth]
Dec 02 09:47:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26085 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5534C9E0000000001030307) 
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.084 281049 DEBUG oslo_concurrency.processutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.211 281049 WARNING nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.212 281049 DEBUG nova.compute.resource_tracker [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12513MB free_disk=41.837242126464844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.212 281049 DEBUG oslo_concurrency.lockutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.213 281049 DEBUG oslo_concurrency.lockutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.305 281049 DEBUG nova.compute.resource_tracker [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.306 281049 DEBUG nova.compute.resource_tracker [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.321 281049 DEBUG nova.scheduler.client.report [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.384 281049 DEBUG nova.scheduler.client.report [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.384 281049 DEBUG nova.compute.provider_tree [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.398 281049 DEBUG nova.scheduler.client.report [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.420 281049 DEBUG nova.scheduler.client.report [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.437 281049 DEBUG oslo_concurrency.processutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.861 281049 DEBUG oslo_concurrency.processutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.866 281049 DEBUG nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.867 281049 INFO nova.virt.libvirt.host [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] kernel doesn't support AMD SEV
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.868 281049 DEBUG nova.compute.provider_tree [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.869 281049 DEBUG nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.896 281049 DEBUG nova.scheduler.client.report [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.923 281049 DEBUG nova.compute.resource_tracker [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.923 281049 DEBUG oslo_concurrency.lockutils [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.924 281049 DEBUG nova.service [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.959 281049 DEBUG nova.service [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 02 09:47:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:47:46.960 281049 DEBUG nova.servicegroup.drivers.db [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] DB_Driver: join new ServiceGroup member np0005541914.localdomain to the compute group, service = <Service: host=np0005541914.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 02 09:47:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26086 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55350A30000000001030307) 
Dec 02 09:47:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10177 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55353220000000001030307) 
Dec 02 09:47:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26087 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55358A20000000001030307) 
Dec 02 09:47:50 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59046 DF PROTO=TCP SPT=40808 DPT=9102 SEQ=3749391443 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5535D220000000001030307) 
Dec 02 09:47:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:47:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:47:52 np0005541914.localdomain podman[281379]: 2025-12-02 09:47:52.091075782 +0000 UTC m=+0.089470129 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:47:52 np0005541914.localdomain podman[281379]: 2025-12-02 09:47:52.101577744 +0000 UTC m=+0.099972121 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:47:52 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:47:52 np0005541914.localdomain podman[281380]: 2025-12-02 09:47:52.198498461 +0000 UTC m=+0.196843046 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.expose-services=, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-type=git, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 02 09:47:52 np0005541914.localdomain podman[281380]: 2025-12-02 09:47:52.21218526 +0000 UTC m=+0.210529835 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc.)
Dec 02 09:47:52 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:47:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26088 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55368620000000001030307) 
Dec 02 09:47:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:47:59 np0005541914.localdomain systemd[1]: tmp-crun.OzSf8X.mount: Deactivated successfully.
Dec 02 09:47:59 np0005541914.localdomain podman[281422]: 2025-12-02 09:47:59.085798145 +0000 UTC m=+0.080599437 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:47:59 np0005541914.localdomain podman[281422]: 2025-12-02 09:47:59.096303856 +0000 UTC m=+0.091105138 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 02 09:47:59 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:48:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26089 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55389220000000001030307) 
Dec 02 09:48:02 np0005541914.localdomain sudo[281440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:48:02 np0005541914.localdomain sudo[281440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:48:02 np0005541914.localdomain sudo[281440]: pam_unix(sudo:session): session closed for user root
Dec 02 09:48:02 np0005541914.localdomain sudo[281458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:48:02 np0005541914.localdomain sudo[281458]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:48:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:48:03.158 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:48:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:48:03.158 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:48:03 np0005541914.localdomain sudo[281458]: pam_unix(sudo:session): session closed for user root
Dec 02 09:48:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:48:03.158 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:48:03 np0005541914.localdomain sudo[281508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:48:03 np0005541914.localdomain sudo[281508]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:48:03 np0005541914.localdomain sudo[281508]: pam_unix(sudo:session): session closed for user root
Dec 02 09:48:03 np0005541914.localdomain sudo[281526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 -- inventory --format=json-pretty --filter-for-batch
Dec 02 09:48:03 np0005541914.localdomain sudo[281526]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:48:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:48:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:48:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:48:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:48:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:48:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17702 "" "Go-http-client/1.1"
Dec 02 09:48:04 np0005541914.localdomain podman[281587]: 
Dec 02 09:48:04 np0005541914.localdomain podman[281587]: 2025-12-02 09:48:04.05322295 +0000 UTC m=+0.073081658 container create b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_hofstadter, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.buildah.version=1.41.4, release=1763362218)
Dec 02 09:48:04 np0005541914.localdomain systemd[1]: Started libpod-conmon-b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350.scope.
Dec 02 09:48:04 np0005541914.localdomain systemd[1]: tmp-crun.3gFVUK.mount: Deactivated successfully.
Dec 02 09:48:04 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:48:04 np0005541914.localdomain podman[281587]: 2025-12-02 09:48:04.120949843 +0000 UTC m=+0.140808541 container init b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_hofstadter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.buildah.version=1.41.4, ceph=True, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7)
Dec 02 09:48:04 np0005541914.localdomain podman[281587]: 2025-12-02 09:48:04.025558294 +0000 UTC m=+0.045416992 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:48:04 np0005541914.localdomain systemd[1]: tmp-crun.YQVXs0.mount: Deactivated successfully.
Dec 02 09:48:04 np0005541914.localdomain podman[281587]: 2025-12-02 09:48:04.135850379 +0000 UTC m=+0.155709077 container start b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_hofstadter, vcs-type=git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-11-26T19:44:28Z)
Dec 02 09:48:04 np0005541914.localdomain podman[281587]: 2025-12-02 09:48:04.136086256 +0000 UTC m=+0.155944994 container attach b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_hofstadter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, distribution-scope=public, architecture=x86_64, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7)
Dec 02 09:48:04 np0005541914.localdomain agitated_hofstadter[281602]: 167 167
Dec 02 09:48:04 np0005541914.localdomain systemd[1]: libpod-b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350.scope: Deactivated successfully.
Dec 02 09:48:04 np0005541914.localdomain podman[281587]: 2025-12-02 09:48:04.14111395 +0000 UTC m=+0.160972698 container died b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_hofstadter, release=1763362218, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:48:04 np0005541914.localdomain podman[281607]: 2025-12-02 09:48:04.253195351 +0000 UTC m=+0.098357942 container remove b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_hofstadter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, release=1763362218, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, distribution-scope=public, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, architecture=x86_64, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, RELEASE=main)
Dec 02 09:48:04 np0005541914.localdomain systemd[1]: libpod-conmon-b57fe1113bca25f16f707a1c9067f2ffd30d23613e5b4ea242c5ea7cdc637350.scope: Deactivated successfully.
Dec 02 09:48:04 np0005541914.localdomain podman[281630]: 
Dec 02 09:48:04 np0005541914.localdomain podman[281630]: 2025-12-02 09:48:04.465163588 +0000 UTC m=+0.077076050 container create e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_jones, io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, version=7, release=1763362218, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:48:04 np0005541914.localdomain podman[281630]: 2025-12-02 09:48:04.433956472 +0000 UTC m=+0.045869004 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:48:04 np0005541914.localdomain systemd[1]: Started libpod-conmon-e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc.scope.
Dec 02 09:48:04 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:48:04 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda2654cca2a3bf25e1d593d92ea0197fdeb2139d82abaea956267da0e5f201e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 09:48:04 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda2654cca2a3bf25e1d593d92ea0197fdeb2139d82abaea956267da0e5f201e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 09:48:04 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda2654cca2a3bf25e1d593d92ea0197fdeb2139d82abaea956267da0e5f201e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 09:48:04 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bda2654cca2a3bf25e1d593d92ea0197fdeb2139d82abaea956267da0e5f201e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 09:48:04 np0005541914.localdomain podman[281630]: 2025-12-02 09:48:04.571487942 +0000 UTC m=+0.183400404 container init e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_jones, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 02 09:48:04 np0005541914.localdomain podman[281630]: 2025-12-02 09:48:04.582034675 +0000 UTC m=+0.193947137 container start e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_jones, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, distribution-scope=public, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:48:04 np0005541914.localdomain podman[281630]: 2025-12-02 09:48:04.582335284 +0000 UTC m=+0.194247746 container attach e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_jones, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, RELEASE=main, CEPH_POINT_RELEASE=, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, architecture=x86_64, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 02 09:48:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e498577e0240969a04071b726bc29775f1d342827de0f022348cd7ec1e73f3de-merged.mount: Deactivated successfully.
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]: [
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:     {
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:         "available": false,
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:         "ceph_device": false,
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:         "lsm_data": {},
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:         "lvs": [],
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:         "path": "/dev/sr0",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:         "rejected_reasons": [
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "Has a FileSystem",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "Insufficient space (<5GB)"
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:         ],
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:         "sys_api": {
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "actuators": null,
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "device_nodes": "sr0",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "human_readable_size": "482.00 KB",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "id_bus": "ata",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "model": "QEMU DVD-ROM",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "nr_requests": "2",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "partitions": {},
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "path": "/dev/sr0",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "removable": "1",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "rev": "2.5+",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "ro": "0",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "rotational": "1",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "sas_address": "",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "sas_device_handle": "",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "scheduler_mode": "mq-deadline",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "sectors": 0,
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "sectorsize": "2048",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "size": 493568.0,
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "support_discard": "0",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "type": "disk",
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:             "vendor": "QEMU"
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:         }
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]:     }
Dec 02 09:48:05 np0005541914.localdomain agitated_jones[281645]: ]
Dec 02 09:48:05 np0005541914.localdomain systemd[1]: libpod-e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc.scope: Deactivated successfully.
Dec 02 09:48:05 np0005541914.localdomain podman[281630]: 2025-12-02 09:48:05.428814032 +0000 UTC m=+1.040726534 container died e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_jones, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:48:05 np0005541914.localdomain systemd[1]: tmp-crun.NBLHZm.mount: Deactivated successfully.
Dec 02 09:48:05 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-bda2654cca2a3bf25e1d593d92ea0197fdeb2139d82abaea956267da0e5f201e-merged.mount: Deactivated successfully.
Dec 02 09:48:05 np0005541914.localdomain podman[283686]: 2025-12-02 09:48:05.527930196 +0000 UTC m=+0.085363335 container remove e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_jones, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:48:05 np0005541914.localdomain systemd[1]: libpod-conmon-e27c8690468191bd8d0a7beec6dcff3d4971f75f25c87dcb689e43a2d78533bc.scope: Deactivated successfully.
Dec 02 09:48:05 np0005541914.localdomain sudo[281526]: pam_unix(sudo:session): session closed for user root
Dec 02 09:48:06 np0005541914.localdomain sudo[283701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:48:06 np0005541914.localdomain sudo[283701]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:48:06 np0005541914.localdomain sudo[283701]: pam_unix(sudo:session): session closed for user root
Dec 02 09:48:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:48:08.320 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 09:48:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:48:08.322 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 09:48:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:48:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:48:12 np0005541914.localdomain podman[283719]: 2025-12-02 09:48:12.068298933 +0000 UTC m=+0.068966582 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:48:12 np0005541914.localdomain podman[283719]: 2025-12-02 09:48:12.083119747 +0000 UTC m=+0.083787446 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:48:12 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:48:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:48:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:48:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:48:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:48:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:48:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:48:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:48:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:48:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:48:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:48:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:48:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:48:12 np0005541914.localdomain podman[283720]: 2025-12-02 09:48:12.155328097 +0000 UTC m=+0.151474547 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 02 09:48:12 np0005541914.localdomain podman[283720]: 2025-12-02 09:48:12.189895615 +0000 UTC m=+0.186042075 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 09:48:12 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:48:13 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:48:13.326 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 09:48:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:48:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:48:14 np0005541914.localdomain podman[283761]: 2025-12-02 09:48:14.083688977 +0000 UTC m=+0.086444157 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 02 09:48:14 np0005541914.localdomain podman[283761]: 2025-12-02 09:48:14.091762864 +0000 UTC m=+0.094518034 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 02 09:48:14 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:48:14 np0005541914.localdomain podman[283762]: 2025-12-02 09:48:14.187642118 +0000 UTC m=+0.187393486 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 09:48:14 np0005541914.localdomain podman[283762]: 2025-12-02 09:48:14.262356195 +0000 UTC m=+0.262107623 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 09:48:14 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:48:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:48:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56143 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553C1CE0000000001030307) 
Dec 02 09:48:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56144 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553C5E20000000001030307) 
Dec 02 09:48:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26090 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553C9220000000001030307) 
Dec 02 09:48:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56145 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553CDE20000000001030307) 
Dec 02 09:48:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10178 DF PROTO=TCP SPT=38574 DPT=9102 SEQ=3134322641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553D1220000000001030307) 
Dec 02 09:48:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:48:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:48:23 np0005541914.localdomain systemd[1]: tmp-crun.a7MXOw.mount: Deactivated successfully.
Dec 02 09:48:23 np0005541914.localdomain podman[283805]: 2025-12-02 09:48:23.077748563 +0000 UTC m=+0.082616780 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41)
Dec 02 09:48:23 np0005541914.localdomain podman[283805]: 2025-12-02 09:48:23.093891957 +0000 UTC m=+0.098760204 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, architecture=x86_64)
Dec 02 09:48:23 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:48:23 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56146 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553DDA20000000001030307) 
Dec 02 09:48:23 np0005541914.localdomain systemd[1]: tmp-crun.naHGM6.mount: Deactivated successfully.
Dec 02 09:48:23 np0005541914.localdomain podman[283804]: 2025-12-02 09:48:23.180660362 +0000 UTC m=+0.187311394 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:48:23 np0005541914.localdomain podman[283804]: 2025-12-02 09:48:23.186537592 +0000 UTC m=+0.193188634 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:48:23 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:48:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:48:30 np0005541914.localdomain podman[283849]: 2025-12-02 09:48:30.064259865 +0000 UTC m=+0.070005484 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:48:30 np0005541914.localdomain podman[283849]: 2025-12-02 09:48:30.097957846 +0000 UTC m=+0.103703455 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 09:48:30 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:48:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56147 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD553FD220000000001030307) 
Dec 02 09:48:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:48:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:48:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:48:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:48:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:48:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17705 "" "Go-http-client/1.1"
Dec 02 09:48:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:36.961 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:48:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:36.988 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:48:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:48:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:48:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:48:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:48:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:48:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:48:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:48:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:48:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:48:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:48:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:48:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:48:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:48:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:48:43 np0005541914.localdomain systemd[1]: tmp-crun.vJOlmn.mount: Deactivated successfully.
Dec 02 09:48:43 np0005541914.localdomain podman[283868]: 2025-12-02 09:48:43.094616954 +0000 UTC m=+0.100486317 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:48:43 np0005541914.localdomain podman[283868]: 2025-12-02 09:48:43.102163365 +0000 UTC m=+0.108032688 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:48:43 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:48:43 np0005541914.localdomain podman[283869]: 2025-12-02 09:48:43.188370933 +0000 UTC m=+0.192163762 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 09:48:43 np0005541914.localdomain podman[283869]: 2025-12-02 09:48:43.204813696 +0000 UTC m=+0.208606495 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 09:48:43 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.530 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.530 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.531 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.531 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.605 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.605 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.605 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.606 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.606 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.607 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.607 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.607 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.608 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.626 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.626 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.627 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.627 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:48:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:44.628 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:48:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:48:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:48:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:45.070 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:48:45 np0005541914.localdomain podman[283932]: 2025-12-02 09:48:45.077799887 +0000 UTC m=+0.081842656 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:48:45 np0005541914.localdomain podman[283932]: 2025-12-02 09:48:45.11414991 +0000 UTC m=+0.118192659 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent)
Dec 02 09:48:45 np0005541914.localdomain systemd[1]: tmp-crun.4StRXj.mount: Deactivated successfully.
Dec 02 09:48:45 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:48:45 np0005541914.localdomain podman[283933]: 2025-12-02 09:48:45.137531865 +0000 UTC m=+0.137833870 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Dec 02 09:48:45 np0005541914.localdomain podman[283933]: 2025-12-02 09:48:45.169772263 +0000 UTC m=+0.170074258 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Dec 02 09:48:45 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:48:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:45.293 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:48:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:45.296 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12462MB free_disk=41.83708190917969GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:48:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:45.296 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:48:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:45.296 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:48:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:45.597 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:48:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:45.597 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:48:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:45.622 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:48:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48169 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55436FE0000000001030307) 
Dec 02 09:48:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:46.078 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:48:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:46.084 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:48:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:46.108 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:48:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:46.110 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:48:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:48:46.111 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:48:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48170 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5543B220000000001030307) 
Dec 02 09:48:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56148 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5543D220000000001030307) 
Dec 02 09:48:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48171 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55443230000000001030307) 
Dec 02 09:48:50 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26091 DF PROTO=TCP SPT=36006 DPT=9102 SEQ=3478909886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55447220000000001030307) 
Dec 02 09:48:50 np0005541914.localdomain sshd[283998]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:48:50 np0005541914.localdomain sshd[283998]: Invalid user jose from 34.78.29.97 port 36438
Dec 02 09:48:50 np0005541914.localdomain sshd[283998]: Received disconnect from 34.78.29.97 port 36438:11: Bye Bye [preauth]
Dec 02 09:48:50 np0005541914.localdomain sshd[283998]: Disconnected from invalid user jose 34.78.29.97 port 36438 [preauth]
Dec 02 09:48:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48172 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55452E20000000001030307) 
Dec 02 09:48:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:48:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:48:54 np0005541914.localdomain podman[284000]: 2025-12-02 09:48:54.07654885 +0000 UTC m=+0.082573309 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:48:54 np0005541914.localdomain podman[284000]: 2025-12-02 09:48:54.083563815 +0000 UTC m=+0.089588264 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:48:54 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:48:54 np0005541914.localdomain podman[284001]: 2025-12-02 09:48:54.13206993 +0000 UTC m=+0.132801837 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 02 09:48:54 np0005541914.localdomain podman[284001]: 2025-12-02 09:48:54.147975377 +0000 UTC m=+0.148707294 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=9.6, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 09:48:54 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:49:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:49:01 np0005541914.localdomain systemd[1]: tmp-crun.XXGVGe.mount: Deactivated successfully.
Dec 02 09:49:01 np0005541914.localdomain podman[284042]: 2025-12-02 09:49:01.077785863 +0000 UTC m=+0.080675561 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 09:49:01 np0005541914.localdomain podman[284042]: 2025-12-02 09:49:01.111754942 +0000 UTC m=+0.114644590 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 09:49:01 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:49:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48173 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55473220000000001030307) 
Dec 02 09:49:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:49:03.159 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:49:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:49:03.159 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:49:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:49:03.159 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:49:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:49:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:49:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:49:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:49:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:49:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17695 "" "Go-http-client/1.1"
Dec 02 09:49:06 np0005541914.localdomain sudo[284061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:49:06 np0005541914.localdomain sudo[284061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:49:06 np0005541914.localdomain sudo[284061]: pam_unix(sudo:session): session closed for user root
Dec 02 09:49:06 np0005541914.localdomain sudo[284079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:49:06 np0005541914.localdomain sudo[284079]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:49:07 np0005541914.localdomain sudo[284079]: pam_unix(sudo:session): session closed for user root
Dec 02 09:49:08 np0005541914.localdomain sudo[284128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:49:08 np0005541914.localdomain sudo[284128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:49:08 np0005541914.localdomain sudo[284128]: pam_unix(sudo:session): session closed for user root
Dec 02 09:49:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:49:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:49:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:49:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:49:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:49:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:49:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:49:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:49:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:49:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:49:13 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 02 09:49:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:49:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:49:14 np0005541914.localdomain podman[284147]: 2025-12-02 09:49:14.071882161 +0000 UTC m=+0.078835075 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:49:14 np0005541914.localdomain podman[284147]: 2025-12-02 09:49:14.111805723 +0000 UTC m=+0.118758647 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:49:14 np0005541914.localdomain systemd[1]: tmp-crun.v0mQUE.mount: Deactivated successfully.
Dec 02 09:49:14 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:49:14 np0005541914.localdomain podman[284148]: 2025-12-02 09:49:14.13097667 +0000 UTC m=+0.136870592 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:49:14 np0005541914.localdomain podman[284148]: 2025-12-02 09:49:14.144910886 +0000 UTC m=+0.150804788 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125)
Dec 02 09:49:14 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:49:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:49:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:49:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8755 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554AC2E0000000001030307) 
Dec 02 09:49:16 np0005541914.localdomain podman[284189]: 2025-12-02 09:49:16.068070708 +0000 UTC m=+0.074214812 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 09:49:16 np0005541914.localdomain podman[284189]: 2025-12-02 09:49:16.074895008 +0000 UTC m=+0.081039152 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 09:49:16 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:49:16 np0005541914.localdomain podman[284190]: 2025-12-02 09:49:16.137627078 +0000 UTC m=+0.136409588 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:49:16 np0005541914.localdomain podman[284190]: 2025-12-02 09:49:16.201965267 +0000 UTC m=+0.200747797 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 09:49:16 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:49:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8756 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554B0230000000001030307) 
Dec 02 09:49:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48174 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554B3230000000001030307) 
Dec 02 09:49:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8757 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554B8220000000001030307) 
Dec 02 09:49:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56149 DF PROTO=TCP SPT=54658 DPT=9102 SEQ=4214977276 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554BB220000000001030307) 
Dec 02 09:49:23 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8758 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554C7E20000000001030307) 
Dec 02 09:49:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:49:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:49:25 np0005541914.localdomain podman[284234]: 2025-12-02 09:49:25.064219872 +0000 UTC m=+0.072264003 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:49:25 np0005541914.localdomain podman[284235]: 2025-12-02 09:49:25.128018575 +0000 UTC m=+0.131599030 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, release=1755695350, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41)
Dec 02 09:49:25 np0005541914.localdomain podman[284235]: 2025-12-02 09:49:25.144039705 +0000 UTC m=+0.147620130 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec 02 09:49:25 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:49:25 np0005541914.localdomain podman[284234]: 2025-12-02 09:49:25.202258838 +0000 UTC m=+0.210302979 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:49:25 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:49:30 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T09:49:30Z|00037|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Dec 02 09:49:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8759 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD554E9220000000001030307) 
Dec 02 09:49:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:49:32 np0005541914.localdomain podman[284277]: 2025-12-02 09:49:32.07015107 +0000 UTC m=+0.071095227 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd)
Dec 02 09:49:32 np0005541914.localdomain podman[284277]: 2025-12-02 09:49:32.083913302 +0000 UTC m=+0.084857419 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 02 09:49:32 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:49:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:49:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:49:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:49:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:49:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:49:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17708 "" "Go-http-client/1.1"
Dec 02 09:49:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:49:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:49:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:49:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:49:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:49:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:49:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:49:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:49:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:49:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:49:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:49:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:49:45 np0005541914.localdomain systemd[1]: tmp-crun.rm4fFB.mount: Deactivated successfully.
Dec 02 09:49:45 np0005541914.localdomain podman[284296]: 2025-12-02 09:49:45.120686458 +0000 UTC m=+0.127411942 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:49:45 np0005541914.localdomain podman[284297]: 2025-12-02 09:49:45.17596964 +0000 UTC m=+0.139003796 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 02 09:49:45 np0005541914.localdomain podman[284297]: 2025-12-02 09:49:45.189891036 +0000 UTC m=+0.152925202 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec 02 09:49:45 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:49:45 np0005541914.localdomain podman[284296]: 2025-12-02 09:49:45.206872236 +0000 UTC m=+0.213597650 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:49:45 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:49:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31533 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD555215E0000000001030307) 
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.104 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.121 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.121 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.121 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.552 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.552 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.553 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.553 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.553 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.554 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.571 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.572 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.572 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.572 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:49:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:46.573 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:49:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:49:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:49:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31534 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55525620000000001030307) 
Dec 02 09:49:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:47.064 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:49:47 np0005541914.localdomain systemd[1]: tmp-crun.fEMO3O.mount: Deactivated successfully.
Dec 02 09:49:47 np0005541914.localdomain podman[284359]: 2025-12-02 09:49:47.078654705 +0000 UTC m=+0.086058926 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 02 09:49:47 np0005541914.localdomain podman[284359]: 2025-12-02 09:49:47.109495949 +0000 UTC m=+0.116900190 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 09:49:47 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:49:47 np0005541914.localdomain podman[284360]: 2025-12-02 09:49:47.115521263 +0000 UTC m=+0.120797678 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller)
Dec 02 09:49:47 np0005541914.localdomain podman[284360]: 2025-12-02 09:49:47.195067279 +0000 UTC m=+0.200343734 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:49:47 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:49:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:47.257 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:49:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:47.258 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12466MB free_disk=41.8370246887207GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:49:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:47.258 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:49:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:47.259 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:49:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:47.302 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:49:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:47.302 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:49:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:47.319 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:49:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:47.781 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:49:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:47.787 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:49:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:47.817 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:49:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:47.818 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:49:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:49:47.819 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:49:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8760 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55529220000000001030307) 
Dec 02 09:49:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31535 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5552D620000000001030307) 
Dec 02 09:49:50 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48175 DF PROTO=TCP SPT=36722 DPT=9102 SEQ=2892143737 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55531220000000001030307) 
Dec 02 09:49:50 np0005541914.localdomain sshd[284425]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:49:50 np0005541914.localdomain sshd[284425]: Invalid user aa from 34.78.29.97 port 33274
Dec 02 09:49:50 np0005541914.localdomain sshd[284425]: Received disconnect from 34.78.29.97 port 33274:11: Bye Bye [preauth]
Dec 02 09:49:50 np0005541914.localdomain sshd[284425]: Disconnected from invalid user aa 34.78.29.97 port 33274 [preauth]
Dec 02 09:49:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31536 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5553D220000000001030307) 
Dec 02 09:49:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:49:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:49:56 np0005541914.localdomain podman[284427]: 2025-12-02 09:49:56.082807613 +0000 UTC m=+0.088481819 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:49:56 np0005541914.localdomain podman[284427]: 2025-12-02 09:49:56.094945194 +0000 UTC m=+0.100619500 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:49:56 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:49:56 np0005541914.localdomain systemd[1]: tmp-crun.BA4kce.mount: Deactivated successfully.
Dec 02 09:49:56 np0005541914.localdomain podman[284428]: 2025-12-02 09:49:56.184496215 +0000 UTC m=+0.187269532 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Dec 02 09:49:56 np0005541914.localdomain podman[284428]: 2025-12-02 09:49:56.196274637 +0000 UTC m=+0.199047944 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350)
Dec 02 09:49:56 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:50:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31537 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5555D220000000001030307) 
Dec 02 09:50:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:50:03 np0005541914.localdomain systemd[1]: tmp-crun.UCJsNd.mount: Deactivated successfully.
Dec 02 09:50:03 np0005541914.localdomain podman[284470]: 2025-12-02 09:50:03.07274734 +0000 UTC m=+0.079722392 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 02 09:50:03 np0005541914.localdomain podman[284470]: 2025-12-02 09:50:03.088240994 +0000 UTC m=+0.095215986 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 09:50:03 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:50:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:50:03.160 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:50:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:50:03.160 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:50:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:50:03.161 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:50:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:50:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:50:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:50:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:50:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:50:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17702 "" "Go-http-client/1.1"
Dec 02 09:50:08 np0005541914.localdomain sudo[284490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:50:08 np0005541914.localdomain sudo[284490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:50:08 np0005541914.localdomain sudo[284490]: pam_unix(sudo:session): session closed for user root
Dec 02 09:50:08 np0005541914.localdomain sudo[284508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:50:08 np0005541914.localdomain sudo[284508]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:50:08 np0005541914.localdomain sudo[284508]: pam_unix(sudo:session): session closed for user root
Dec 02 09:50:11 np0005541914.localdomain sudo[284559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:50:11 np0005541914.localdomain sudo[284559]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:50:11 np0005541914.localdomain sudo[284559]: pam_unix(sudo:session): session closed for user root
Dec 02 09:50:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:50:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:50:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:50:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:50:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:50:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:50:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:50:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:50:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:50:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:50:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:50:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.435 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:50:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:50:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
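The ceilometer_agent_compute block above is the polling agent skipping every libvirt-backed meter because discovery found no instances on this compute node this cycle. A rough sketch of that skip-if-empty polling pattern, using hypothetical get_resources()/POLLSTERS stand-ins rather than ceilometer's real manager code:

# Rough sketch of the skip-if-empty polling loop implied by the ceilometer
# DEBUG lines above; get_resources() and POLLSTERS are hypothetical.
import logging

LOG = logging.getLogger(__name__)

POLLSTERS = ["cpu", "memory.usage", "disk.device.read.bytes"]

def get_resources():
    return []  # no instances running on this compute node yet

def poll_and_notify():
    resources = get_resources()
    for name in POLLSTERS:
        if not resources:
            LOG.debug("Skip pollster %s, no resources found this cycle", name)
            continue
        # ... build and publish samples for each resource here ...

poll_and_notify()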
Dec 02 09:50:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:50:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:50:16 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17469 DF PROTO=TCP SPT=44588 DPT=9102 SEQ=2810891178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD555968E0000000001030307) 
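The kernel "DROPPING:" records are netfilter LOG output for SYN packets to port 9102 on br-ex that the host firewall then drops, and the fixed KEY=value layout makes them easy to pick apart. A small sketch that extracts the source, destination and destination port from such a line; the regex only covers the fields visible in this log, it is not a general netfilter parser.

# Parse SRC/DST/SPT/DPT out of a kernel "DROPPING:" netfilter LOG line
# like the ones above (sample line shortened from the log).
import re

LINE = ("DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 "
        "MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 "
        "DST=192.168.122.108 LEN=60 PROTO=TCP SPT=44588 DPT=9102")

FIELDS = re.compile(r"SRC=(?P<src>\S+) DST=(?P<dst>\S+).*?"
                    r"SPT=(?P<spt>\d+) DPT=(?P<dpt>\d+)")

m = FIELDS.search(LINE)
if m:
    print(f"{m['src']}:{m['spt']} -> {m['dst']}:{m['dpt']} dropped")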
Dec 02 09:50:16 np0005541914.localdomain podman[284578]: 2025-12-02 09:50:16.090598215 +0000 UTC m=+0.090010287 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:50:16 np0005541914.localdomain podman[284577]: 2025-12-02 09:50:16.065506747 +0000 UTC m=+0.068905140 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 09:50:16 np0005541914.localdomain podman[284578]: 2025-12-02 09:50:16.128912048 +0000 UTC m=+0.128324090 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 09:50:16 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:50:16 np0005541914.localdomain podman[284577]: 2025-12-02 09:50:16.153541102 +0000 UTC m=+0.156939545 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:50:16 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:50:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17470 DF PROTO=TCP SPT=44588 DPT=9102 SEQ=2810891178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5559AA20000000001030307) 
Dec 02 09:50:17 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31538 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5559D230000000001030307) 
Dec 02 09:50:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:50:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:50:18 np0005541914.localdomain systemd[1]: tmp-crun.X742SO.mount: Deactivated successfully.
Dec 02 09:50:18 np0005541914.localdomain podman[284620]: 2025-12-02 09:50:18.084645408 +0000 UTC m=+0.086601552 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:50:18 np0005541914.localdomain podman[284620]: 2025-12-02 09:50:18.096986505 +0000 UTC m=+0.098942619 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:50:18 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:50:18 np0005541914.localdomain podman[284621]: 2025-12-02 09:50:18.18727266 +0000 UTC m=+0.184031315 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller)
Dec 02 09:50:18 np0005541914.localdomain podman[284621]: 2025-12-02 09:50:18.250096953 +0000 UTC m=+0.246855658 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 02 09:50:18 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:50:19 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17471 DF PROTO=TCP SPT=44588 DPT=9102 SEQ=2810891178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD555A2A20000000001030307) 
Dec 02 09:50:20 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8761 DF PROTO=TCP SPT=55450 DPT=9102 SEQ=2835534972 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD555A7220000000001030307) 
Dec 02 09:50:23 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17472 DF PROTO=TCP SPT=44588 DPT=9102 SEQ=2810891178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD555B2620000000001030307) 
Dec 02 09:50:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:50:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:50:27 np0005541914.localdomain podman[284666]: 2025-12-02 09:50:27.078201701 +0000 UTC m=+0.081288779 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:50:27 np0005541914.localdomain podman[284666]: 2025-12-02 09:50:27.083349879 +0000 UTC m=+0.086436967 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:50:27 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
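node_exporter above is published on host port 9100 ('ports': ['9100:9100'] in its config_data), so besides the podman healthcheck it can be verified by scraping its Prometheus text endpoint directly. A minimal sketch using only the standard library; the /metrics path and the node_systemd_unit_state metric name are the usual node_exporter defaults and are assumptions here.

# Fetch the Prometheus text-format metrics from the node_exporter that the
# healthcheck above just reported healthy. Port 9100 comes from the
# container's config_data; the /metrics path is the usual default.
from urllib.request import urlopen

with urlopen("http://127.0.0.1:9100/metrics", timeout=5) as resp:
    body = resp.read().decode()

# Print one systemd collector sample, since the exporter runs with
# --collector.systemd in the command line logged above.
for line in body.splitlines():
    if line.startswith("node_systemd_unit_state"):
        print(line)
        break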
Dec 02 09:50:27 np0005541914.localdomain podman[284667]: 2025-12-02 09:50:27.133488244 +0000 UTC m=+0.135554720 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, distribution-scope=public, config_id=edpm, vcs-type=git, managed_by=edpm_ansible, release=1755695350, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9)
Dec 02 09:50:27 np0005541914.localdomain podman[284667]: 2025-12-02 09:50:27.143947794 +0000 UTC m=+0.146014290 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.expose-services=, release=1755695350, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:50:27 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:50:31 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17473 DF PROTO=TCP SPT=44588 DPT=9102 SEQ=2810891178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD555D3220000000001030307) 
Dec 02 09:50:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:50:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:50:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:50:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:50:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:50:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17712 "" "Go-http-client/1.1"
Dec 02 09:50:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:50:34 np0005541914.localdomain podman[284711]: 2025-12-02 09:50:34.078995993 +0000 UTC m=+0.077052970 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 02 09:50:34 np0005541914.localdomain podman[284711]: 2025-12-02 09:50:34.113890582 +0000 UTC m=+0.111947649 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd)
Dec 02 09:50:34 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
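Each "Started /usr/bin/podman healthcheck run <id>" / health_status / exec_died / "Deactivated successfully" group above is one healthcheck cycle driven by a transient systemd unit. The same check can be run by hand; a minimal sketch, reusing the multipathd container ID from the log and relying on podman's convention that "healthcheck run" exits 0 when the check passes:

# Re-run the healthcheck that the transient systemd unit above triggers;
# the container ID is copied from the log lines for multipathd.
import subprocess

CID = "2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e"
result = subprocess.run(["podman", "healthcheck", "run", CID])
print("healthy" if result.returncode == 0 else "unhealthy")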
Dec 02 09:50:34 np0005541914.localdomain sshd[284730]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:50:34 np0005541914.localdomain sshd[284730]: Accepted publickey for zuul from 38.102.83.114 port 38160 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:50:34 np0005541914.localdomain systemd-logind[760]: New session 61 of user zuul.
Dec 02 09:50:34 np0005541914.localdomain systemd[1]: Started Session 61 of User zuul.
Dec 02 09:50:34 np0005541914.localdomain sshd[284730]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 09:50:34 np0005541914.localdomain sudo[284750]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grugovtspmkvtnqqbbkxtmpacivmhptd ; /usr/bin/python3
Dec 02 09:50:34 np0005541914.localdomain sudo[284750]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 09:50:35 np0005541914.localdomain python3[284752]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:50:35 np0005541914.localdomain subscription-manager[284753]: Unregistered machine with identity: 5e8bb4be-b98c-46c0-ac7b-5189dfb48508
Dec 02 09:50:35 np0005541914.localdomain sudo[284750]: pam_unix(sudo:session): session closed for user root
Dec 02 09:50:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:50:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:50:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:50:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:50:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:50:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:50:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:50:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:50:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:50:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:50:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:50:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:50:46 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36388 DF PROTO=TCP SPT=56236 DPT=9102 SEQ=2067433686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5560BBE0000000001030307) 
Dec 02 09:50:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:50:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:50:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36389 DF PROTO=TCP SPT=56236 DPT=9102 SEQ=2067433686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5560FE20000000001030307) 
Dec 02 09:50:47 np0005541914.localdomain podman[284756]: 2025-12-02 09:50:47.083740319 +0000 UTC m=+0.085534550 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 09:50:47 np0005541914.localdomain podman[284756]: 2025-12-02 09:50:47.099107429 +0000 UTC m=+0.100901640 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 09:50:47 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:50:47 np0005541914.localdomain podman[284755]: 2025-12-02 09:50:47.181935315 +0000 UTC m=+0.185594924 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 09:50:47 np0005541914.localdomain podman[284755]: 2025-12-02 09:50:47.21902921 +0000 UTC m=+0.222688789 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:50:47 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.793 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.793 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.794 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.794 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.811 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.811 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.811 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.812 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.812 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.813 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.813 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.813 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.828 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.828 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.829 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.829 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:50:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:47.829 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:50:47 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17474 DF PROTO=TCP SPT=44588 DPT=9102 SEQ=2810891178 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55613220000000001030307) 
Dec 02 09:50:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:48.284 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
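nova-compute's resource tracker audit shells out (via oslo_concurrency.processutils) to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size the RBD-backed disk pool before reporting the hypervisor resource view below. A standalone sketch of running the same command and reading per-pool stats; the command line is taken verbatim from the log, while the "vms" pool name and the exact JSON field names are assumptions for illustration.

# Run the same "ceph df" call the nova resource tracker issues above and
# pull out per-pool usage; pool name and field names are assumptions.
import json
import subprocess

cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
df = json.loads(out)

for pool in df.get("pools", []):
    if pool.get("name") == "vms":
        stats = pool["stats"]
        print(f"vms pool: used={stats['bytes_used']} "
              f"max_avail={stats['max_avail']}")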
Dec 02 09:50:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:48.544 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:50:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:48.547 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12504MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:50:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:48.547 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:50:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:48.548 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:50:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:48.630 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:50:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:48.631 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:50:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:48.648 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:50:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:50:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:50:49 np0005541914.localdomain systemd[1]: tmp-crun.ttJMoM.mount: Deactivated successfully.
Dec 02 09:50:49 np0005541914.localdomain podman[284840]: 2025-12-02 09:50:49.073392537 +0000 UTC m=+0.076583465 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:50:49 np0005541914.localdomain podman[284839]: 2025-12-02 09:50:49.091612064 +0000 UTC m=+0.093399440 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 02 09:50:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36390 DF PROTO=TCP SPT=56236 DPT=9102 SEQ=2067433686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55617E20000000001030307) 
Dec 02 09:50:49 np0005541914.localdomain podman[284839]: 2025-12-02 09:50:49.119993954 +0000 UTC m=+0.121781310 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 09:50:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:49.124 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:50:49 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:50:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:49.133 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:50:49 np0005541914.localdomain podman[284840]: 2025-12-02 09:50:49.141344766 +0000 UTC m=+0.144535734 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:50:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:49.153 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
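Note: the inventory dict logged just above is what nova reports to placement for this node; the schedulable capacity per resource class is (total - reserved) * allocation_ratio. A small sketch applying that formula to the values from the log line (numbers copied from the line above, the formula is the standard placement capacity calculation):

    # Inventory as reported by nova.scheduler.client.report above.
    inventory = {
        "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
        "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 41,    "reserved": 0,   "allocation_ratio": 1.0},
    }

    for rc, inv in inventory.items():
        # Placement-style capacity: (total - reserved) * allocation_ratio
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {capacity:g}")

    # Expected output: VCPU 128, MEMORY_MB 15226, DISK_GB 41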
Dec 02 09:50:49 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:50:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:49.155 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:50:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:49.156 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:50:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:50:49.871 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:50:49 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31539 DF PROTO=TCP SPT=53390 DPT=9102 SEQ=1279323034 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD5561B220000000001030307) 
Dec 02 09:50:50 np0005541914.localdomain sshd[284884]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:50:51 np0005541914.localdomain sshd[284884]: Received disconnect from 34.78.29.97 port 46578:11: Bye Bye [preauth]
Dec 02 09:50:51 np0005541914.localdomain sshd[284884]: Disconnected from authenticating user root 34.78.29.97 port 46578 [preauth]
Dec 02 09:50:53 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36391 DF PROTO=TCP SPT=56236 DPT=9102 SEQ=2067433686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55627A20000000001030307) 
Dec 02 09:50:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:50:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:50:58 np0005541914.localdomain systemd[1]: tmp-crun.ehOcL5.mount: Deactivated successfully.
Dec 02 09:50:58 np0005541914.localdomain podman[284886]: 2025-12-02 09:50:58.067189802 +0000 UTC m=+0.073865447 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:50:58 np0005541914.localdomain podman[284886]: 2025-12-02 09:50:58.075231188 +0000 UTC m=+0.081906843 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:50:58 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:50:58 np0005541914.localdomain podman[284887]: 2025-12-02 09:50:58.124800452 +0000 UTC m=+0.125435963 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible)
Dec 02 09:50:58 np0005541914.localdomain podman[284887]: 2025-12-02 09:50:58.135586322 +0000 UTC m=+0.136221873 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, distribution-scope=public)
Dec 02 09:50:58 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:51:01 np0005541914.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.108 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36392 DF PROTO=TCP SPT=56236 DPT=9102 SEQ=2067433686 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AD55647220000000001030307) 
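Note: the recurring kernel "DROPPING:" entries above are log-and-drop hits from the host firewall for TCP SYNs arriving on br-ex from 192.168.122.10 toward port 9102, i.e. traffic that matched no accept rule; "DROPPING" is simply the log prefix configured on that rule. A minimal sketch for pulling the useful fields out of such a line (regex keyed to the key=value layout seen above; the truncated sample line is illustrative):

    import re

    LINE = ('DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:58:a5:f0 '
            'MACDST=fa:16:3e:08:72:ba MACPROTO=0800 SRC=192.168.122.10 '
            'DST=192.168.122.108 LEN=60 TTL=62 PROTO=TCP SPT=56236 DPT=9102 SYN')

    def parse_drop(line):
        # Collect key=value pairs; bare flags like SYN or DF carry no value and are skipped.
        fields = dict(re.findall(r'(\w+)=(\S+)', line))
        return {
            "in_if": fields.get("IN"),
            "src": fields.get("SRC"),
            "dst": fields.get("DST"),
            "proto": fields.get("PROTO"),
            "dport": int(fields["DPT"]) if "DPT" in fields else None,
        }

    print(parse_drop(LINE))
    # {'in_if': 'br-ex', 'src': '192.168.122.10', 'dst': '192.168.122.108', 'proto': 'TCP', 'dport': 9102}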
Dec 02 09:51:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:51:03.161 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:51:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:51:03.162 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:51:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:51:03.162 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:51:03 np0005541914.localdomain sudo[284928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:51:03 np0005541914.localdomain sudo[284928]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:03 np0005541914.localdomain sudo[284928]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:03 np0005541914.localdomain sudo[284946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 09:51:03 np0005541914.localdomain sudo[284946]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:51:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:51:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:51:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 150198 "" "Go-http-client/1.1"
Dec 02 09:51:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:51:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17705 "" "Go-http-client/1.1"
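Note: the two podman[239757] lines above are the Podman REST API service answering GET requests on its unix socket (the podman_exporter container later in this log is configured with CONTAINER_HOST=unix:///run/podman/podman.sock). A rough sketch of issuing the same container-list request over that socket with only the standard library, assuming read access to /run/podman/podman.sock (normally root only):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks to a unix-domain socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint as in the access-log line above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    containers = json.loads(resp.read())
    print(len(containers), "containers")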
Dec 02 09:51:03 np0005541914.localdomain sudo[284946]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:04 np0005541914.localdomain sudo[284984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:51:04 np0005541914.localdomain sudo[284984]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:51:04 np0005541914.localdomain sudo[284984]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:04 np0005541914.localdomain podman[285002]: 2025-12-02 09:51:04.722910237 +0000 UTC m=+0.078695665 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:51:04 np0005541914.localdomain podman[285002]: 2025-12-02 09:51:04.736884664 +0000 UTC m=+0.092670142 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 09:51:04 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:51:05 np0005541914.localdomain sudo[285019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:51:05 np0005541914.localdomain sudo[285019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:05 np0005541914.localdomain sudo[285019]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:06 np0005541914.localdomain sudo[285037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:51:06 np0005541914.localdomain sudo[285037]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:06 np0005541914.localdomain sudo[285037]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:51:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:51:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:51:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:51:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:51:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:51:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:51:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:51:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:51:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:51:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:51:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:51:12 np0005541914.localdomain sshd[285055]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:51:12 np0005541914.localdomain sshd[285055]: Accepted publickey for tripleo-admin from 192.168.122.11 port 34416 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:51:12 np0005541914.localdomain systemd[1]: Created slice User Slice of UID 1003.
Dec 02 09:51:12 np0005541914.localdomain systemd[1]: Starting User Runtime Directory /run/user/1003...
Dec 02 09:51:12 np0005541914.localdomain systemd-logind[760]: New session 62 of user tripleo-admin.
Dec 02 09:51:12 np0005541914.localdomain systemd[1]: Finished User Runtime Directory /run/user/1003.
Dec 02 09:51:12 np0005541914.localdomain systemd[1]: Starting User Manager for UID 1003...
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: pam_unix(systemd-user:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Queued start job for default target Main User Target.
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Created slice User Application Slice.
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Started Daily Cleanup of User's Temporary Directories.
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Reached target Paths.
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Reached target Timers.
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Starting D-Bus User Message Bus Socket...
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Starting Create User's Volatile Files and Directories...
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Finished Create User's Volatile Files and Directories.
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Listening on D-Bus User Message Bus Socket.
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Reached target Sockets.
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Reached target Basic System.
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Reached target Main User Target.
Dec 02 09:51:12 np0005541914.localdomain systemd[285059]: Startup finished in 155ms.
Dec 02 09:51:12 np0005541914.localdomain systemd[1]: Started User Manager for UID 1003.
Dec 02 09:51:12 np0005541914.localdomain systemd[1]: Started Session 62 of User tripleo-admin.
Dec 02 09:51:12 np0005541914.localdomain sshd[285055]: pam_unix(sshd:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 02 09:51:13 np0005541914.localdomain sudo[285199]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-esccwvwwtnkdcppporuiwblgxkixooux ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764669072.8420515-60033-62939621895647/AnsiballZ_blockinfile.py
Dec 02 09:51:13 np0005541914.localdomain sudo[285199]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 02 09:51:13 np0005541914.localdomain python3[285201]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/edpm-rules.nft mode=0644 block=# 100 ceph_alertmanager (9093)
                                                          add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"
                                                          # 100 ceph_dashboard (8443)
                                                          add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"
                                                          # 100 ceph_grafana (3100)
                                                          add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"
                                                          # 100 ceph_prometheus (9092)
                                                          add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"
                                                          # 100 ceph_rgw (8080)
                                                          add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"
                                                          # 110 ceph_mon (6789, 3300, 9100)
                                                          add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"
                                                          # 112 ceph_mds (6800-7300, 9100)
                                                          add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"
                                                          # 113 ceph_mgr (6800-7300, 8444)
                                                          add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"
                                                          # 120 ceph_nfs (2049, 12049)
                                                          add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"
                                                          # 123 ceph_dashboard (9090, 9094, 9283)
                                                          add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"
                                                           insertbefore=^# Lock down INPUT chains state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
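Note: the blockinfile task above writes the ceph accept rules into /etc/nftables/edpm-rules.nft between "# BEGIN/END ceph firewall rules" markers, inserting the block before the line matching "^# Lock down INPUT chains"; the nftables restart a few lines further down then reloads the file. A rough sketch of that insert-before-marker behaviour, as a simplification of what ansible.builtin.blockinfile does rather than its actual implementation:

    import re

    def insert_block(path, block_lines, marker_begin, marker_end, insert_before_re):
        with open(path) as f:
            lines = f.read().splitlines()
        # Drop any previous copy of the managed block (idempotent updates).
        if marker_begin in lines and marker_end in lines:
            b, e = lines.index(marker_begin), lines.index(marker_end)
            del lines[b:e + 1]
        block = [marker_begin, *block_lines, marker_end]
        # Insert before the first line matching the pattern, else append at the end.
        for i, line in enumerate(lines):
            if re.search(insert_before_re, line):
                lines[i:i] = block
                break
        else:
            lines.extend(block)
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")

    # Illustrative call mirroring the logged task (one rule shown):
    # insert_block("/etc/nftables/edpm-rules.nft",
    #              ['add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"'],
    #              "# BEGIN ceph firewall rules ANSIBLE MANAGED BLOCK",
    #              "# END ceph firewall rules ANSIBLE MANAGED BLOCK",
    #              r"^# Lock down INPUT chains")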
Dec 02 09:51:13 np0005541914.localdomain sudo[285199]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:13 np0005541914.localdomain systemd-journald[47679]: Field hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 80.5 (268 of 333 items), suggesting rotation.
Dec 02 09:51:13 np0005541914.localdomain systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
Dec 02 09:51:13 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:51:13 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 09:51:14 np0005541914.localdomain sudo[285344]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fawuxicipuijbyhwwkmapoqbmjjyyazk ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764669073.6398294-60047-4834328469042/AnsiballZ_systemd.py
Dec 02 09:51:14 np0005541914.localdomain sudo[285344]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 02 09:51:14 np0005541914.localdomain python3[285346]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 02 09:51:14 np0005541914.localdomain systemd[1]: Stopping Netfilter Tables...
Dec 02 09:51:14 np0005541914.localdomain systemd[1]: nftables.service: Deactivated successfully.
Dec 02 09:51:14 np0005541914.localdomain systemd[1]: Stopped Netfilter Tables.
Dec 02 09:51:14 np0005541914.localdomain systemd[1]: Starting Netfilter Tables...
Dec 02 09:51:14 np0005541914.localdomain systemd[1]: Finished Netfilter Tables.
Dec 02 09:51:14 np0005541914.localdomain sudo[285344]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:16 np0005541914.localdomain sudo[285371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:51:16 np0005541914.localdomain sudo[285371]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:16 np0005541914.localdomain sudo[285371]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:17 np0005541914.localdomain sudo[285389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:51:17 np0005541914.localdomain sudo[285389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:17 np0005541914.localdomain sudo[285389]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:51:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:51:18 np0005541914.localdomain systemd[1]: tmp-crun.9d9aCk.mount: Deactivated successfully.
Dec 02 09:51:18 np0005541914.localdomain podman[285439]: 2025-12-02 09:51:18.090045515 +0000 UTC m=+0.089890256 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:51:18 np0005541914.localdomain podman[285439]: 2025-12-02 09:51:18.124965212 +0000 UTC m=+0.124809993 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 02 09:51:18 np0005541914.localdomain podman[285438]: 2025-12-02 09:51:18.13503789 +0000 UTC m=+0.140947266 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:51:18 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:51:18 np0005541914.localdomain podman[285438]: 2025-12-02 09:51:18.141423605 +0000 UTC m=+0.147332961 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:51:18 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:51:18 np0005541914.localdomain sudo[285481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:51:18 np0005541914.localdomain sudo[285481]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:18 np0005541914.localdomain sudo[285481]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:18 np0005541914.localdomain sudo[285499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:51:18 np0005541914.localdomain sudo[285499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:18 np0005541914.localdomain sudo[285499]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:51:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:51:20 np0005541914.localdomain systemd[1]: tmp-crun.F8cAj2.mount: Deactivated successfully.
Dec 02 09:51:20 np0005541914.localdomain podman[285518]: 2025-12-02 09:51:20.086373497 +0000 UTC m=+0.081535312 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 02 09:51:20 np0005541914.localdomain systemd[1]: tmp-crun.8XRguH.mount: Deactivated successfully.
Dec 02 09:51:20 np0005541914.localdomain podman[285517]: 2025-12-02 09:51:20.148746553 +0000 UTC m=+0.146597650 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 02 09:51:20 np0005541914.localdomain podman[285517]: 2025-12-02 09:51:20.15849512 +0000 UTC m=+0.156346217 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Dec 02 09:51:20 np0005541914.localdomain sudo[285546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:51:20 np0005541914.localdomain sudo[285546]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:20 np0005541914.localdomain sudo[285546]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:20 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:51:20 np0005541914.localdomain podman[285518]: 2025-12-02 09:51:20.201329059 +0000 UTC m=+0.196490864 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 02 09:51:20 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:51:21 np0005541914.localdomain sudo[285576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:51:21 np0005541914.localdomain sudo[285576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:21 np0005541914.localdomain sudo[285576]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:22 np0005541914.localdomain sudo[285595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:51:22 np0005541914.localdomain sudo[285595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:22 np0005541914.localdomain sudo[285595]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:23 np0005541914.localdomain sudo[285613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:51:23 np0005541914.localdomain sudo[285613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:23 np0005541914.localdomain sudo[285613]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:25 np0005541914.localdomain sudo[285631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:51:25 np0005541914.localdomain sudo[285631]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:25 np0005541914.localdomain sudo[285631]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:25 np0005541914.localdomain sudo[285649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:51:25 np0005541914.localdomain sudo[285649]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:25 np0005541914.localdomain podman[285710]: 2025-12-02 09:51:25.713068917 +0000 UTC m=+0.067758480 container create 677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_jemison, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, version=7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, release=1763362218, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True)
Dec 02 09:51:25 np0005541914.localdomain systemd[1]: Started libpod-conmon-677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e.scope.
Dec 02 09:51:25 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:51:25 np0005541914.localdomain podman[285710]: 2025-12-02 09:51:25.686358821 +0000 UTC m=+0.041048374 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:51:25 np0005541914.localdomain podman[285710]: 2025-12-02 09:51:25.800803227 +0000 UTC m=+0.155492800 container init 677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_jemison, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, ceph=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z)
Dec 02 09:51:25 np0005541914.localdomain podman[285710]: 2025-12-02 09:51:25.813156125 +0000 UTC m=+0.167845718 container start 677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_jemison, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, version=7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container)
Dec 02 09:51:25 np0005541914.localdomain podman[285710]: 2025-12-02 09:51:25.813784384 +0000 UTC m=+0.168474007 container attach 677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_jemison, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z)
Dec 02 09:51:25 np0005541914.localdomain vigorous_jemison[285725]: 167 167
Dec 02 09:51:25 np0005541914.localdomain systemd[1]: libpod-677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e.scope: Deactivated successfully.
Dec 02 09:51:25 np0005541914.localdomain podman[285710]: 2025-12-02 09:51:25.817517678 +0000 UTC m=+0.172207301 container died 677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_jemison, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, ceph=True, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, version=7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 09:51:25 np0005541914.localdomain podman[285730]: 2025-12-02 09:51:25.904205006 +0000 UTC m=+0.073994731 container remove 677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_jemison, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, version=7, description=Red Hat Ceph Storage 7)
Dec 02 09:51:25 np0005541914.localdomain systemd[1]: libpod-conmon-677b793e521fa5e0b954b7ba2c4a8303ddabb31f89f81c3baf696ac3da67654e.scope: Deactivated successfully.
Dec 02 09:51:25 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:51:26 np0005541914.localdomain systemd-sysv-generator[285776]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:51:26 np0005541914.localdomain systemd-rc-local-generator[285772]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f3ba8f606b49fa1e33413a82c09c7cdfb7eb5c5934a67c8545afa5ff820a4849-merged.mount: Deactivated successfully.
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:51:26 np0005541914.localdomain systemd-sysv-generator[285817]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:51:26 np0005541914.localdomain systemd-rc-local-generator[285814]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:51:26 np0005541914.localdomain systemd[1]: Starting Ceph mds.mds.np0005541914.sqgqkj for c7c8e171-a193-56fb-95fa-8879fcfa7074...
Dec 02 09:51:27 np0005541914.localdomain podman[285877]: 2025-12-02 09:51:27.104877433 +0000 UTC m=+0.086713320 container create 3b458e17899960af4c5df398e5b51007fb1c60e12a37386a9aca0dd6b8b14fda (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mds-mds-np0005541914-sqgqkj, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, release=1763362218, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:51:27 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33854e09090b80cf8d8ff2a3795a72e18c22b896efd7f82c7b880820de8fe54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 09:51:27 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33854e09090b80cf8d8ff2a3795a72e18c22b896efd7f82c7b880820de8fe54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 09:51:27 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33854e09090b80cf8d8ff2a3795a72e18c22b896efd7f82c7b880820de8fe54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 09:51:27 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d33854e09090b80cf8d8ff2a3795a72e18c22b896efd7f82c7b880820de8fe54/merged/var/lib/ceph/mds/ceph-mds.np0005541914.sqgqkj supports timestamps until 2038 (0x7fffffff)
Dec 02 09:51:27 np0005541914.localdomain podman[285877]: 2025-12-02 09:51:27.068353498 +0000 UTC m=+0.050189445 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:51:27 np0005541914.localdomain podman[285877]: 2025-12-02 09:51:27.185997011 +0000 UTC m=+0.167832878 container init 3b458e17899960af4c5df398e5b51007fb1c60e12a37386a9aca0dd6b8b14fda (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mds-mds-np0005541914-sqgqkj, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:51:27 np0005541914.localdomain podman[285877]: 2025-12-02 09:51:27.237812594 +0000 UTC m=+0.219648461 container start 3b458e17899960af4c5df398e5b51007fb1c60e12a37386a9aca0dd6b8b14fda (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mds-mds-np0005541914-sqgqkj, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 02 09:51:27 np0005541914.localdomain bash[285877]: 3b458e17899960af4c5df398e5b51007fb1c60e12a37386a9aca0dd6b8b14fda
Dec 02 09:51:27 np0005541914.localdomain systemd[1]: Started Ceph mds.mds.np0005541914.sqgqkj for c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 02 09:51:27 np0005541914.localdomain ceph-mds[285895]: set uid:gid to 167:167 (ceph:ceph)
Dec 02 09:51:27 np0005541914.localdomain ceph-mds[285895]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mds, pid 2
Dec 02 09:51:27 np0005541914.localdomain ceph-mds[285895]: main not setting numa affinity
Dec 02 09:51:27 np0005541914.localdomain ceph-mds[285895]: pidfile_write: ignore empty --pid-file
Dec 02 09:51:27 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mds-mds-np0005541914-sqgqkj[285891]: starting mds.mds.np0005541914.sqgqkj at 
Dec 02 09:51:27 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Updating MDS map to version 6 from mon.0
Dec 02 09:51:27 np0005541914.localdomain sudo[285649]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:27 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Updating MDS map to version 7 from mon.0
Dec 02 09:51:27 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Monitors have assigned me to become a standby.
Dec 02 09:51:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:51:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:51:29 np0005541914.localdomain podman[285915]: 2025-12-02 09:51:29.072042884 +0000 UTC m=+0.073188896 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:51:29 np0005541914.localdomain podman[285916]: 2025-12-02 09:51:29.131056407 +0000 UTC m=+0.129011242 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 09:51:29 np0005541914.localdomain podman[285915]: 2025-12-02 09:51:29.161511887 +0000 UTC m=+0.162657859 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:51:29 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:51:29 np0005541914.localdomain podman[285916]: 2025-12-02 09:51:29.173336138 +0000 UTC m=+0.171290923 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 09:51:29 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:51:32 np0005541914.localdomain sudo[285960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:51:32 np0005541914.localdomain sudo[285960]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:32 np0005541914.localdomain sudo[285960]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:32 np0005541914.localdomain sudo[285978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:51:32 np0005541914.localdomain sudo[285978]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:32 np0005541914.localdomain sudo[285978]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:32 np0005541914.localdomain sudo[285996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:51:32 np0005541914.localdomain sudo[285996]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:33 np0005541914.localdomain systemd[1]: tmp-crun.JS3a6W.mount: Deactivated successfully.
Dec 02 09:51:33 np0005541914.localdomain podman[286084]: 2025-12-02 09:51:33.565682891 +0000 UTC m=+0.091147865 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_BRANCH=main, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4)
Dec 02 09:51:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:51:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:51:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:51:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152345 "" "Go-http-client/1.1"
Dec 02 09:51:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:51:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18190 "" "Go-http-client/1.1"
Dec 02 09:51:33 np0005541914.localdomain podman[286084]: 2025-12-02 09:51:33.731770894 +0000 UTC m=+0.257235888 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, version=7, CEPH_POINT_RELEASE=, name=rhceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4)
Dec 02 09:51:34 np0005541914.localdomain sudo[285996]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:34 np0005541914.localdomain sudo[286166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:51:34 np0005541914.localdomain sudo[286166]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:51:34 np0005541914.localdomain sudo[286166]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:34 np0005541914.localdomain systemd[1]: tmp-crun.Y1EGnD.mount: Deactivated successfully.
Dec 02 09:51:34 np0005541914.localdomain podman[286183]: 2025-12-02 09:51:34.896990118 +0000 UTC m=+0.097707625 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec 02 09:51:34 np0005541914.localdomain podman[286183]: 2025-12-02 09:51:34.914214724 +0000 UTC m=+0.114932221 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 02 09:51:34 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:51:35 np0005541914.localdomain sudo[286203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:51:35 np0005541914.localdomain sudo[286203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:51:35 np0005541914.localdomain sudo[286203]: pam_unix(sudo:session): session closed for user root
Dec 02 09:51:35 np0005541914.localdomain sshd[284733]: Received disconnect from 38.102.83.114 port 38160:11: disconnected by user
Dec 02 09:51:35 np0005541914.localdomain sshd[284733]: Disconnected from user zuul 38.102.83.114 port 38160
Dec 02 09:51:35 np0005541914.localdomain sshd[284730]: pam_unix(sshd:session): session closed for user zuul
Dec 02 09:51:35 np0005541914.localdomain systemd[1]: session-61.scope: Deactivated successfully.
Dec 02 09:51:35 np0005541914.localdomain systemd-logind[760]: Session 61 logged out. Waiting for processes to exit.
Dec 02 09:51:35 np0005541914.localdomain systemd-logind[760]: Removed session 61.
Dec 02 09:51:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:51:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:51:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:51:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:51:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:51:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:51:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:51:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:51:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:51:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:51:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:46.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:51:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:46.552 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:51:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:46.552 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:51:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:46.552 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:51:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:46.553 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:51:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:46.553 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:51:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:46.998 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:51:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:47.211 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:51:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:47.212 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12486MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:51:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:47.212 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:51:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:47.213 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:51:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:47.265 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:51:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:47.266 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:51:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:47.279 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:51:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:47.728 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:51:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:47.734 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:51:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:47.765 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:51:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:47.768 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:51:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:47.768 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:51:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:48.769 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:51:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:48.805 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:51:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:48.805 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:51:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:48.806 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:51:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:48.807 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:51:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:48.807 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:51:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:48.808 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:51:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:48.808 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:51:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:51:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:51:49 np0005541914.localdomain systemd[1]: tmp-crun.NLdBox.mount: Deactivated successfully.
Dec 02 09:51:49 np0005541914.localdomain podman[286266]: 2025-12-02 09:51:49.106241751 +0000 UTC m=+0.104843695 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 09:51:49 np0005541914.localdomain podman[286266]: 2025-12-02 09:51:49.140853288 +0000 UTC m=+0.139455182 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 09:51:49 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:51:49 np0005541914.localdomain podman[286265]: 2025-12-02 09:51:49.191244067 +0000 UTC m=+0.191813761 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:51:49 np0005541914.localdomain podman[286265]: 2025-12-02 09:51:49.204918835 +0000 UTC m=+0.205488559 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:51:49 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
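The recurring three-line pattern above (container health_status ... health_status=healthy, container exec_died, then "<id>.service: Deactivated successfully") is one podman healthcheck cycle: systemd starts a transient "/usr/bin/podman healthcheck run <id>" unit, podman executes the test command from the container's healthcheck config_data, records the result, and the unit exits. A minimal Python sketch of reading that recorded state back, assuming podman is on PATH and using the ceilometer_agent_compute container named in the log:

    import json
    import subprocess

    # Read the health state that `podman healthcheck run` updates after each check.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "ceilometer_agent_compute"],
        check=True, capture_output=True, text=True,
    ).stdout
    health = json.loads(out)
    # .State.Health carries Status ("healthy"/"unhealthy"), FailingStreak and a
    # Log of recent check runs.
    print(health["Status"], "failing streak:", health["FailingStreak"])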
Dec 02 09:51:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:49.531 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:51:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:49.531 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:51:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:49.532 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:51:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:49.532 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:51:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:51:49.606 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
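The nova_compute lines above are oslo.service periodic tasks firing inside the compute manager; _heal_instance_info_cache rebuilds each instance's cached network info and, with no instances on this host, logs "Didn't find any instances" and returns. An illustrative sketch of the oslo.service pattern that produces these "Running periodic task" debug lines (the class and task names below are examples, not nova's actual code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        # Registered tasks are logged as "Running periodic task <name>" when due.
        @periodic_task.periodic_task(spacing=60)
        def _heal_info_cache(self, context):
            # nova's real task refreshes the per-instance network info cache;
            # with nothing to heal it simply returns.
            pass

    # The service's main loop calls this repeatedly; each call runs whatever
    # registered tasks are due and returns how long to idle before the next call.
    DemoManager().run_periodic_tasks(context=None)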
Dec 02 09:51:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:51:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:51:51 np0005541914.localdomain podman[286308]: 2025-12-02 09:51:51.079519318 +0000 UTC m=+0.082680707 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 09:51:51 np0005541914.localdomain podman[286307]: 2025-12-02 09:51:51.125623136 +0000 UTC m=+0.132596201 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:51:51 np0005541914.localdomain podman[286307]: 2025-12-02 09:51:51.133791795 +0000 UTC m=+0.140764850 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:51:51 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:51:51 np0005541914.localdomain podman[286308]: 2025-12-02 09:51:51.192343374 +0000 UTC m=+0.195504793 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:51:51 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:51:52 np0005541914.localdomain sshd[286348]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:51:53 np0005541914.localdomain sshd[286348]: Received disconnect from 34.78.29.97 port 45708:11: Bye Bye [preauth]
Dec 02 09:51:53 np0005541914.localdomain sshd[286348]: Disconnected from authenticating user daemon 34.78.29.97 port 45708 [preauth]
Dec 02 09:51:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:51:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:52:00 np0005541914.localdomain systemd[1]: tmp-crun.TUJEyQ.mount: Deactivated successfully.
Dec 02 09:52:00 np0005541914.localdomain podman[286350]: 2025-12-02 09:52:00.086564447 +0000 UTC m=+0.083719239 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:52:00 np0005541914.localdomain podman[286350]: 2025-12-02 09:52:00.094354015 +0000 UTC m=+0.091508767 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:52:00 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:52:00 np0005541914.localdomain systemd[1]: tmp-crun.MjrBwk.mount: Deactivated successfully.
Dec 02 09:52:00 np0005541914.localdomain podman[286351]: 2025-12-02 09:52:00.136259055 +0000 UTC m=+0.131074275 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, distribution-scope=public, version=9.6, release=1755695350, container_name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=edpm, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7)
Dec 02 09:52:00 np0005541914.localdomain podman[286351]: 2025-12-02 09:52:00.151690416 +0000 UTC m=+0.146505616 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Dec 02 09:52:00 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:52:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:52:03.162 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:52:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:52:03.163 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:52:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:52:03.163 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:52:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:52:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:52:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:52:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152345 "" "Go-http-client/1.1"
Dec 02 09:52:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:52:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18194 "" "Go-http-client/1.1"
Dec 02 09:52:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:52:05 np0005541914.localdomain podman[286393]: 2025-12-02 09:52:05.088218293 +0000 UTC m=+0.091734373 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 02 09:52:05 np0005541914.localdomain podman[286393]: 2025-12-02 09:52:05.10288943 +0000 UTC m=+0.106405560 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:52:05 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:52:07 np0005541914.localdomain sudo[286413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:52:07 np0005541914.localdomain sudo[286413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:52:07 np0005541914.localdomain sudo[286413]: pam_unix(sudo:session): session closed for user root
Dec 02 09:52:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:52:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:52:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:52:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:52:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:52:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:52:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:52:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:52:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:52:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:52:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:52:12 np0005541914.localdomain openstack_network_exporter[241816]: 
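The openstack_network_exporter errors above mean the exporter could not locate ovs-appctl control sockets for ovsdb-server and ovn-northd, and that no PMD datapath exists for the dpif-netdev queries; on a compute node that only runs ovn-controller this is not unexpected. A small sketch of the underlying lookup, with socket directories assumed from the container volume mounts shown earlier in the log (/var/run/openvswitch and /var/lib/openvswitch/ovn):

    import glob

    # ovs-appctl style tools find a daemon via its <daemon>.<pid>.ctl socket; an
    # empty match is what the exporter reports as "no control socket files found".
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found if found else "no control socket files found")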
Dec 02 09:52:13 np0005541914.localdomain sshd[285074]: Received disconnect from 192.168.122.11 port 34416:11: disconnected by user
Dec 02 09:52:13 np0005541914.localdomain sshd[285074]: Disconnected from user tripleo-admin 192.168.122.11 port 34416
Dec 02 09:52:13 np0005541914.localdomain sshd[285055]: pam_unix(sshd:session): session closed for user tripleo-admin
Dec 02 09:52:13 np0005541914.localdomain systemd[1]: session-62.scope: Deactivated successfully.
Dec 02 09:52:13 np0005541914.localdomain systemd[1]: session-62.scope: Consumed 1.250s CPU time.
Dec 02 09:52:13 np0005541914.localdomain systemd-logind[760]: Session 62 logged out. Waiting for processes to exit.
Dec 02 09:52:13 np0005541914.localdomain systemd-logind[760]: Removed session 62.
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:52:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:52:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
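The block of DEBUG lines above is the ceilometer compute agent skipping every libvirt-based pollster because no instances are running on this host during the polling cycle. A small sketch for summarising such lines (for example piped in from journalctl) instead of reading them one by one:

    import re
    import sys
    from collections import Counter

    # Count which meters the agent skipped, from journal lines fed on stdin.
    skip = re.compile(r"Skip pollster (\S+), no\s+resources found")
    counts = Counter(m.group(1) for m in map(skip.search, sys.stdin) if m)
    for meter, n in counts.most_common():
        print(f"{meter}: skipped {n} time(s)")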
Dec 02 09:52:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:52:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:52:20 np0005541914.localdomain podman[286432]: 2025-12-02 09:52:20.076925426 +0000 UTC m=+0.076923930 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:52:20 np0005541914.localdomain podman[286432]: 2025-12-02 09:52:20.086911232 +0000 UTC m=+0.086909746 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:52:20 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:52:20 np0005541914.localdomain podman[286431]: 2025-12-02 09:52:20.139295681 +0000 UTC m=+0.143075761 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 09:52:20 np0005541914.localdomain podman[286431]: 2025-12-02 09:52:20.176064494 +0000 UTC m=+0.179844634 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 09:52:20 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:52:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:52:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:52:22 np0005541914.localdomain systemd[1]: tmp-crun.BIGbSC.mount: Deactivated successfully.
Dec 02 09:52:22 np0005541914.localdomain podman[286471]: 2025-12-02 09:52:22.083767189 +0000 UTC m=+0.085776310 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:52:22 np0005541914.localdomain systemd[1]: tmp-crun.b1DZtH.mount: Deactivated successfully.
Dec 02 09:52:22 np0005541914.localdomain podman[286472]: 2025-12-02 09:52:22.119055797 +0000 UTC m=+0.116524710 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec 02 09:52:22 np0005541914.localdomain podman[286472]: 2025-12-02 09:52:22.148238629 +0000 UTC m=+0.145707542 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 02 09:52:22 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:52:22 np0005541914.localdomain podman[286471]: 2025-12-02 09:52:22.171291013 +0000 UTC m=+0.173300144 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 02 09:52:22 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:52:23 np0005541914.localdomain systemd[1]: Stopping User Manager for UID 1003...
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Activating special unit Exit the Session...
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Stopped target Main User Target.
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Stopped target Basic System.
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Stopped target Paths.
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Stopped target Sockets.
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Stopped target Timers.
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Closed D-Bus User Message Bus Socket.
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Stopped Create User's Volatile Files and Directories.
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Removed slice User Application Slice.
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Reached target Shutdown.
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Finished Exit the Session.
Dec 02 09:52:23 np0005541914.localdomain systemd[285059]: Reached target Exit the Session.
Dec 02 09:52:23 np0005541914.localdomain systemd[1]: user@1003.service: Deactivated successfully.
Dec 02 09:52:23 np0005541914.localdomain systemd[1]: Stopped User Manager for UID 1003.
Dec 02 09:52:23 np0005541914.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 02 09:52:24 np0005541914.localdomain systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 02 09:52:24 np0005541914.localdomain systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 02 09:52:24 np0005541914.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 02 09:52:24 np0005541914.localdomain systemd[1]: Removed slice User Slice of UID 1003.
Dec 02 09:52:24 np0005541914.localdomain systemd[1]: user-1003.slice: Consumed 1.681s CPU time.
Dec 02 09:52:25 np0005541914.localdomain sudo[286515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:52:25 np0005541914.localdomain sudo[286515]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:52:25 np0005541914.localdomain sudo[286515]: pam_unix(sudo:session): session closed for user root
Dec 02 09:52:25 np0005541914.localdomain sudo[286533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:52:25 np0005541914.localdomain sudo[286533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:52:25 np0005541914.localdomain sudo[286533]: pam_unix(sudo:session): session closed for user root
Dec 02 09:52:25 np0005541914.localdomain sudo[286551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:52:25 np0005541914.localdomain sudo[286551]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:52:26 np0005541914.localdomain sudo[286551]: pam_unix(sudo:session): session closed for user root
Dec 02 09:52:28 np0005541914.localdomain sudo[286602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:52:28 np0005541914.localdomain sudo[286602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:52:28 np0005541914.localdomain sudo[286602]: pam_unix(sudo:session): session closed for user root
Dec 02 09:52:28 np0005541914.localdomain sudo[286620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:52:28 np0005541914.localdomain sudo[286620]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:52:28 np0005541914.localdomain sudo[286620]: pam_unix(sudo:session): session closed for user root
Dec 02 09:52:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:52:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:52:31 np0005541914.localdomain systemd[1]: tmp-crun.56zb2Z.mount: Deactivated successfully.
Dec 02 09:52:31 np0005541914.localdomain podman[286639]: 2025-12-02 09:52:31.08686126 +0000 UTC m=+0.085530904 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 02 09:52:31 np0005541914.localdomain podman[286638]: 2025-12-02 09:52:31.124956184 +0000 UTC m=+0.125037741 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:52:31 np0005541914.localdomain podman[286638]: 2025-12-02 09:52:31.137986191 +0000 UTC m=+0.138067778 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:52:31 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:52:31 np0005541914.localdomain podman[286639]: 2025-12-02 09:52:31.151589247 +0000 UTC m=+0.150258911 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, vcs-type=git, name=ubi9-minimal, config_id=edpm, vendor=Red Hat, Inc.)
Dec 02 09:52:31 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:52:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:52:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:52:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:52:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 152345 "" "Go-http-client/1.1"
Dec 02 09:52:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:52:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18198 "" "Go-http-client/1.1"
Dec 02 09:52:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:52:36 np0005541914.localdomain systemd[1]: tmp-crun.nYXnl6.mount: Deactivated successfully.
Dec 02 09:52:36 np0005541914.localdomain podman[286680]: 2025-12-02 09:52:36.080362077 +0000 UTC m=+0.083581814 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 09:52:36 np0005541914.localdomain podman[286680]: 2025-12-02 09:52:36.095081007 +0000 UTC m=+0.098300724 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 09:52:36 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:52:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:52:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:52:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:52:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:52:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:52:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:52:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:52:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:52:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:52:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:52:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:52:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:52:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:44.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:52:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:44.529 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 02 09:52:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:44.549 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 02 09:52:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:44.550 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:52:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:44.551 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 02 09:52:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:44.562 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:52:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:47.571 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:52:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:47.593 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:52:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:47.593 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:52:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:47.594 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:52:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:47.594 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:52:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:47.594 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.043 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.249 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.250 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12491MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.250 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.251 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.337 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.337 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.392 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.448 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.449 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.465 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.487 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.510 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.964 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.969 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.984 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.985 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:52:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:48.985 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:52:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:49.942 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:52:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:49.942 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:52:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:49.943 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:52:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:49.943 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:52:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:49.955 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:52:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:49.956 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:52:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:49.956 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:52:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:49.956 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:52:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:49.957 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:52:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:49.957 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:52:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:49.957 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:52:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:52:50.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:52:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:52:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:52:51 np0005541914.localdomain podman[286746]: 2025-12-02 09:52:51.309191463 +0000 UTC m=+0.311469795 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:52:51 np0005541914.localdomain podman[286746]: 2025-12-02 09:52:51.319972843 +0000 UTC m=+0.322251125 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:52:51 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:52:51 np0005541914.localdomain podman[286747]: 2025-12-02 09:52:51.382866644 +0000 UTC m=+0.383250589 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 09:52:51 np0005541914.localdomain podman[286747]: 2025-12-02 09:52:51.417908004 +0000 UTC m=+0.418291959 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:52:51 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:52:52 np0005541914.localdomain sudo[286788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:52:52 np0005541914.localdomain sudo[286788]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:52:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:52:52 np0005541914.localdomain sudo[286788]: pam_unix(sudo:session): session closed for user root
Dec 02 09:52:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:52:52 np0005541914.localdomain podman[286807]: 2025-12-02 09:52:52.393491836 +0000 UTC m=+0.062204241 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:52:52 np0005541914.localdomain podman[286807]: 2025-12-02 09:52:52.452271571 +0000 UTC m=+0.120983976 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 02 09:52:52 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:52:52 np0005541914.localdomain podman[286806]: 2025-12-02 09:52:52.459656117 +0000 UTC m=+0.129750515 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 09:52:52 np0005541914.localdomain podman[286806]: 2025-12-02 09:52:52.540177636 +0000 UTC m=+0.210272034 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 02 09:52:52 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:52:53 np0005541914.localdomain sudo[286847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:52:53 np0005541914.localdomain sudo[286847]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:52:53 np0005541914.localdomain sudo[286847]: pam_unix(sudo:session): session closed for user root
Dec 02 09:52:54 np0005541914.localdomain sudo[286865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:52:54 np0005541914.localdomain sudo[286865]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:52:54 np0005541914.localdomain sudo[286865]: pam_unix(sudo:session): session closed for user root
Dec 02 09:52:57 np0005541914.localdomain sshd[286883]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:52:57 np0005541914.localdomain sshd[286883]: Received disconnect from 34.78.29.97 port 54824:11: Bye Bye [preauth]
Dec 02 09:52:57 np0005541914.localdomain sshd[286883]: Disconnected from authenticating user root 34.78.29.97 port 54824 [preauth]
Dec 02 09:53:00 np0005541914.localdomain sudo[286885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:53:00 np0005541914.localdomain sudo[286885]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:00 np0005541914.localdomain sudo[286885]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:00 np0005541914.localdomain sudo[286903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:00 np0005541914.localdomain sudo[286903]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:53:01 np0005541914.localdomain podman[286936]: 2025-12-02 09:53:01.361090281 +0000 UTC m=+0.080452689 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:53:01 np0005541914.localdomain podman[286937]: 2025-12-02 09:53:01.40983321 +0000 UTC m=+0.124808193 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vcs-type=git, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, architecture=x86_64, name=ubi9-minimal, io.openshift.expose-services=)
Dec 02 09:53:01 np0005541914.localdomain podman[286937]: 2025-12-02 09:53:01.423871819 +0000 UTC m=+0.138846802 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, version=9.6, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:53:01 np0005541914.localdomain podman[286936]: 2025-12-02 09:53:01.476978771 +0000 UTC m=+0.196341259 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:53:01 np0005541914.localdomain podman[287004]: 
Dec 02 09:53:01 np0005541914.localdomain podman[287004]: 2025-12-02 09:53:01.549211848 +0000 UTC m=+0.068649478 container create 762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_dhawan, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, com.redhat.component=rhceph-container, version=7, distribution-scope=public, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, architecture=x86_64, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: Started libpod-conmon-762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092.scope.
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:53:01 np0005541914.localdomain podman[287004]: 2025-12-02 09:53:01.608039385 +0000 UTC m=+0.127476995 container init 762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_dhawan, io.buildah.version=1.41.4, GIT_CLEAN=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, build-date=2025-11-26T19:44:28Z, ceph=True, io.openshift.tags=rhceph ceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:53:01 np0005541914.localdomain podman[287004]: 2025-12-02 09:53:01.615837893 +0000 UTC m=+0.135275523 container start 762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_dhawan, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, release=1763362218, RELEASE=main)
Dec 02 09:53:01 np0005541914.localdomain podman[287004]: 2025-12-02 09:53:01.616186354 +0000 UTC m=+0.135624014 container attach 762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_dhawan, vcs-type=git, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64)
Dec 02 09:53:01 np0005541914.localdomain podman[287004]: 2025-12-02 09:53:01.518205911 +0000 UTC m=+0.037643571 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:53:01 np0005541914.localdomain nostalgic_dhawan[287019]: 167 167
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: libpod-762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092.scope: Deactivated successfully.
Dec 02 09:53:01 np0005541914.localdomain podman[287004]: 2025-12-02 09:53:01.620147474 +0000 UTC m=+0.139585134 container died 762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_dhawan, RELEASE=main, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_BRANCH=main, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public)
Dec 02 09:53:01 np0005541914.localdomain podman[287025]: 2025-12-02 09:53:01.70608803 +0000 UTC m=+0.077322823 container remove 762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nostalgic_dhawan, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, release=1763362218, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: libpod-conmon-762e0cd191f3142a6e3fff73f26f02a431b645f7b9d71f75b913ee3de79b6092.scope: Deactivated successfully.
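The short-lived rhceph container above (create, init, start, attach, died, remove within roughly 200 ms) prints "167 167", which is consistent with cephadm probing the ceph UID/GID inside the image before deploying a daemon. A minimal sketch of such a probe, assuming podman is on PATH and the image is already present locally (the stat invocation is an illustration, not necessarily the exact command cephadm ran):

import subprocess

IMAGE = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"

# Run a throwaway container and print the numeric uid/gid owning
# /var/lib/ceph inside the image; rhceph images use 167:167 (ceph:ceph).
result = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
)
uid, gid = result.stdout.split()
print(f"ceph runs as {uid}:{gid}")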
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:53:01 np0005541914.localdomain systemd-rc-local-generator[287065]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:53:01 np0005541914.localdomain systemd-sysv-generator[287070]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:01 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:02 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a52f860537c2330b9e76cf281d9d8bae38bd40ea02009faf801d41c688e46c4c-merged.mount: Deactivated successfully.
Dec 02 09:53:02 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:53:02 np0005541914.localdomain systemd-rc-local-generator[287104]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:53:02 np0005541914.localdomain systemd-sysv-generator[287108]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:53:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:53:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:02 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
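Both Reloading passes above repeat the same generator output: the libvirt modular daemon units declare Type=notify-reload, which this systemd release does not recognize and ignores, and insights-client-boot.service still uses the deprecated MemoryLimit= instead of MemoryMax=. A small sketch that lists which installed unit files carry either directive, so the warnings can be traced back to their packages (paths and directive names are taken from the log; nothing else is assumed):

import pathlib

UNIT_DIR = pathlib.Path("/usr/lib/systemd/system")
FLAGS = ("Type=notify-reload", "MemoryLimit=")

for unit in sorted(UNIT_DIR.glob("*.service")):
    try:
        text = unit.read_text(errors="replace")
    except OSError:
        continue
    hits = [flag for flag in FLAGS if flag in text]
    if hits:
        print(f"{unit.name}: {', '.join(hits)}")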
Dec 02 09:53:02 np0005541914.localdomain systemd[1]: Starting Ceph mgr.np0005541914.lljzmk for c7c8e171-a193-56fb-95fa-8879fcfa7074...
Dec 02 09:53:02 np0005541914.localdomain podman[287168]: 
Dec 02 09:53:02 np0005541914.localdomain podman[287168]: 2025-12-02 09:53:02.880956939 +0000 UTC m=+0.074044143 container create 40cf237ba25227db3f61f0d42a0b07debc8628b4fa5c88c59967ae6d5f7c4e2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.41.4, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_BRANCH=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public)
Dec 02 09:53:02 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ebaa8cde4207b2021c704b75d676b1b710d7e338634f37f4e3181a65d5fbbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 09:53:02 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ebaa8cde4207b2021c704b75d676b1b710d7e338634f37f4e3181a65d5fbbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 09:53:02 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ebaa8cde4207b2021c704b75d676b1b710d7e338634f37f4e3181a65d5fbbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 09:53:02 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90ebaa8cde4207b2021c704b75d676b1b710d7e338634f37f4e3181a65d5fbbc/merged/var/lib/ceph/mgr/ceph-np0005541914.lljzmk supports timestamps until 2038 (0x7fffffff)
Dec 02 09:53:02 np0005541914.localdomain podman[287168]: 2025-12-02 09:53:02.942034544 +0000 UTC m=+0.135121758 container init 40cf237ba25227db3f61f0d42a0b07debc8628b4fa5c88c59967ae6d5f7c4e2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=rhceph, ceph=True, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:53:02 np0005541914.localdomain podman[287168]: 2025-12-02 09:53:02.851317293 +0000 UTC m=+0.044404557 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:53:02 np0005541914.localdomain podman[287168]: 2025-12-02 09:53:02.951109542 +0000 UTC m=+0.144196746 container start 40cf237ba25227db3f61f0d42a0b07debc8628b4fa5c88c59967ae6d5f7c4e2e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.buildah.version=1.41.4, release=1763362218, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True)
Dec 02 09:53:02 np0005541914.localdomain bash[287168]: 40cf237ba25227db3f61f0d42a0b07debc8628b4fa5c88c59967ae6d5f7c4e2e
Dec 02 09:53:02 np0005541914.localdomain systemd[1]: Started Ceph mgr.np0005541914.lljzmk for c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 02 09:53:03 np0005541914.localdomain sudo[286903]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:03 np0005541914.localdomain ceph-mgr[287188]: set uid:gid to 167:167 (ceph:ceph)
Dec 02 09:53:03 np0005541914.localdomain ceph-mgr[287188]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2
Dec 02 09:53:03 np0005541914.localdomain ceph-mgr[287188]: pidfile_write: ignore empty --pid-file
Dec 02 09:53:03 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'alerts'
Dec 02 09:53:03 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 02 09:53:03 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'balancer'
Dec 02 09:53:03 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:03.125+0000 7f5cc20af140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 02 09:53:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:53:03.163 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:53:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:53:03.164 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:53:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:53:03.164 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:53:03 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 02 09:53:03 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'cephadm'
Dec 02 09:53:03 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:03.190+0000 7f5cc20af140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 02 09:53:03 np0005541914.localdomain sudo[287213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:53:03 np0005541914.localdomain sudo[287213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:03 np0005541914.localdomain sudo[287213]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:03 np0005541914.localdomain sudo[287231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:53:03 np0005541914.localdomain sudo[287231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:03 np0005541914.localdomain sudo[287231]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:53:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:53:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:53:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154481 "" "Go-http-client/1.1"
Dec 02 09:53:03 np0005541914.localdomain sudo[287250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:53:03 np0005541914.localdomain sudo[287250]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:53:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18680 "" "Go-http-client/1.1"
Dec 02 09:53:03 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'crash'
Dec 02 09:53:03 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 02 09:53:03 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'dashboard'
Dec 02 09:53:03 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:03.845+0000 7f5cc20af140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 02 09:53:04 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'devicehealth'
Dec 02 09:53:04 np0005541914.localdomain systemd[1]: tmp-crun.F8IFrU.mount: Deactivated successfully.
Dec 02 09:53:04 np0005541914.localdomain podman[287345]: 2025-12-02 09:53:04.413763582 +0000 UTC m=+0.096754417 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vcs-type=git, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, name=rhceph, io.openshift.expose-services=, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:53:04 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 02 09:53:04 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'diskprediction_local'
Dec 02 09:53:04 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:04.427+0000 7f5cc20af140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 02 09:53:04 np0005541914.localdomain podman[287345]: 2025-12-02 09:53:04.530150167 +0000 UTC m=+0.213140982 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, name=rhceph, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, release=1763362218, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container)
Dec 02 09:53:04 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 02 09:53:04 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 02 09:53:04 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]:   from numpy import show_config as show_numpy_config
Dec 02 09:53:04 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 02 09:53:04 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'influx'
Dec 02 09:53:04 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:04.574+0000 7f5cc20af140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 02 09:53:04 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 02 09:53:04 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'insights'
Dec 02 09:53:04 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:04.633+0000 7f5cc20af140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 02 09:53:04 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'iostat'
Dec 02 09:53:04 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 02 09:53:04 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'k8sevents'
Dec 02 09:53:04 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:04.750+0000 7f5cc20af140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 02 09:53:05 np0005541914.localdomain sudo[287250]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'localpool'
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'mds_autoscaler'
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'mirroring'
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'nfs'
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'orchestrator'
Dec 02 09:53:05 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:05.537+0000 7f5cc20af140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'osd_perf_query'
Dec 02 09:53:05 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:05.689+0000 7f5cc20af140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 02 09:53:05 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:05.753+0000 7f5cc20af140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'osd_support'
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'pg_autoscaler'
Dec 02 09:53:05 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:05.811+0000 7f5cc20af140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'progress'
Dec 02 09:53:05 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:05.882+0000 7f5cc20af140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 02 09:53:05 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'prometheus'
Dec 02 09:53:05 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:05.943+0000 7f5cc20af140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 02 09:53:06 np0005541914.localdomain sudo[287447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:53:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:53:06 np0005541914.localdomain sudo[287447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:06 np0005541914.localdomain sudo[287447]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:06 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 02 09:53:06 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'rbd_support'
Dec 02 09:53:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:06.269+0000 7f5cc20af140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 02 09:53:06 np0005541914.localdomain podman[287464]: 2025-12-02 09:53:06.291880043 +0000 UTC m=+0.077944343 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 02 09:53:06 np0005541914.localdomain podman[287464]: 2025-12-02 09:53:06.307954754 +0000 UTC m=+0.094019094 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:53:06 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
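The multipathd lines above show the periodic healthcheck flow: a transient systemd unit runs podman healthcheck run for the container, podman records a health_status=healthy event followed by exec_died, and the unit deactivates. The same check can be driven by hand; a sketch, using the container name multipathd from the log:

import subprocess

NAME = "multipathd"  # container name taken from the log above

# `podman healthcheck run` executes the container's configured check once
# and exits 0 when the check passes, non-zero otherwise.
rc = subprocess.run(["podman", "healthcheck", "run", NAME]).returncode
print(f"{NAME}: {'healthy' if rc == 0 else 'unhealthy'}")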
Dec 02 09:53:06 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 02 09:53:06 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'restful'
Dec 02 09:53:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:06.358+0000 7f5cc20af140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 02 09:53:06 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'rgw'
Dec 02 09:53:06 np0005541914.localdomain sudo[287486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:53:06 np0005541914.localdomain sudo[287486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:06 np0005541914.localdomain sudo[287486]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:06 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 02 09:53:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:06.689+0000 7f5cc20af140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 02 09:53:06 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'rook'
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'selftest'
Dec 02 09:53:07 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.136+0000 7f5cc20af140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'snap_schedule'
Dec 02 09:53:07 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.198+0000 7f5cc20af140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain sudo[287504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:53:07 np0005541914.localdomain sudo[287504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:07 np0005541914.localdomain sudo[287504]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'stats'
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'status'
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'telegraf'
Dec 02 09:53:07 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.392+0000 7f5cc20af140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'telemetry'
Dec 02 09:53:07 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.451+0000 7f5cc20af140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'test_orchestrator'
Dec 02 09:53:07 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.584+0000 7f5cc20af140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'volumes'
Dec 02 09:53:07 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.727+0000 7f5cc20af140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'zabbix'
Dec 02 09:53:07 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.912+0000 7f5cc20af140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 02 09:53:07 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:53:07.969+0000 7f5cc20af140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
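Every mgr module above loads successfully; the repeated "missing NOTIFY_TYPES member" lines only note that a module has not declared which cluster notifications it consumes, so ceph-mgr cannot pre-filter deliveries for it. As a rough illustration only (a hypothetical module, not code from this cluster, using the mgr_module/NotifyType names from upstream Ceph reef as an assumption), declaring the member is what silences the warning:

from mgr_module import MgrModule, NotifyType

class Module(MgrModule):
    # Declaring NOTIFY_TYPES tells ceph-mgr which notifications to deliver;
    # modules without it trigger the "missing NOTIFY_TYPES member" warning.
    NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

    def notify(self, notify_type, notify_id):
        self.log.debug("got %s notification", notify_type)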
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5f1e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Dec 02 09:53:07 np0005541914.localdomain ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.103:6800/3096645673
Dec 02 09:53:08 np0005541914.localdomain ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.103:6800/3096645673
Dec 02 09:53:11 np0005541914.localdomain sudo[287522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:53:11 np0005541914.localdomain sudo[287522]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:11 np0005541914.localdomain sudo[287522]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:53:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:53:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:53:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:53:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:53:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:53:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:53:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:53:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:53:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:53:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:53:12 np0005541914.localdomain openstack_network_exporter[241816]: 
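The openstack_network_exporter errors above mean the exporter found no OVN/OVS control sockets under the paths mounted into its container (the exec_died record earlier maps /var/run/openvswitch and /var/lib/openvswitch/ovn into it); on a node running neither ovn-northd nor a local ovsdb-server they are expected noise. A quick sketch to see which control sockets actually exist on the host, using those host-side paths from the volume list (adjust if the layout differs):

import glob

# Control sockets the exporter looks for, on their host-side paths.
for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
    matches = glob.glob(pattern)
    print(f"{pattern}: {matches or 'none found'}")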
Dec 02 09:53:14 np0005541914.localdomain sudo[287540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:53:14 np0005541914.localdomain sudo[287540]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:14 np0005541914.localdomain sudo[287540]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:14 np0005541914.localdomain sudo[287558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:53:14 np0005541914.localdomain sudo[287558]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:14 np0005541914.localdomain sudo[287558]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:14 np0005541914.localdomain sudo[287576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:53:14 np0005541914.localdomain sudo[287576]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:14 np0005541914.localdomain sudo[287576]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:14 np0005541914.localdomain sudo[287594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:53:14 np0005541914.localdomain sudo[287594]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:14 np0005541914.localdomain sudo[287594]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:14 np0005541914.localdomain sudo[287612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:14 np0005541914.localdomain sudo[287612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:14 np0005541914.localdomain sudo[287612]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:14 np0005541914.localdomain sudo[287630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:53:14 np0005541914.localdomain sudo[287630]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:14 np0005541914.localdomain sudo[287630]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:14 np0005541914.localdomain sudo[287664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:53:14 np0005541914.localdomain sudo[287664]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:14 np0005541914.localdomain sudo[287664]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:14 np0005541914.localdomain sudo[287682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:53:14 np0005541914.localdomain sudo[287682]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:14 np0005541914.localdomain sudo[287682]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:14 np0005541914.localdomain sudo[287700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:53:14 np0005541914.localdomain sudo[287700]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:14 np0005541914.localdomain sudo[287700]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:15 np0005541914.localdomain sudo[287718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:53:15 np0005541914.localdomain sudo[287718]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:15 np0005541914.localdomain sudo[287718]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:15 np0005541914.localdomain sudo[287736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:53:15 np0005541914.localdomain sudo[287736]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:15 np0005541914.localdomain sudo[287736]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:15 np0005541914.localdomain sudo[287754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:53:15 np0005541914.localdomain sudo[287754]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:15 np0005541914.localdomain sudo[287754]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:15 np0005541914.localdomain sudo[287772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:15 np0005541914.localdomain sudo[287772]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:15 np0005541914.localdomain sudo[287772]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:15 np0005541914.localdomain sudo[287790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:53:15 np0005541914.localdomain sudo[287790]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:15 np0005541914.localdomain sudo[287790]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:15 np0005541914.localdomain sudo[287824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:53:15 np0005541914.localdomain sudo[287824]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:15 np0005541914.localdomain sudo[287824]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:15 np0005541914.localdomain sudo[287842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:53:15 np0005541914.localdomain sudo[287842]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:15 np0005541914.localdomain sudo[287842]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:15 np0005541914.localdomain sudo[287860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:15 np0005541914.localdomain sudo[287860]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:15 np0005541914.localdomain sudo[287860]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:15 np0005541914.localdomain sudo[287878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:53:15 np0005541914.localdomain sudo[287878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:15 np0005541914.localdomain sudo[287878]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:15 np0005541914.localdomain sudo[287896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:53:15 np0005541914.localdomain sudo[287896]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:15 np0005541914.localdomain sudo[287896]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:15 np0005541914.localdomain sudo[287914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:53:15 np0005541914.localdomain sudo[287914]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:15 np0005541914.localdomain sudo[287914]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:15 np0005541914.localdomain sudo[287932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:15 np0005541914.localdomain sudo[287932]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:15 np0005541914.localdomain sudo[287932]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:15 np0005541914.localdomain sudo[287950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:53:15 np0005541914.localdomain sudo[287950]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:15 np0005541914.localdomain sudo[287950]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:16 np0005541914.localdomain sudo[287984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:53:16 np0005541914.localdomain sudo[287984]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:16 np0005541914.localdomain sudo[287984]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:16 np0005541914.localdomain sudo[288002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:53:16 np0005541914.localdomain sudo[288002]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:16 np0005541914.localdomain sudo[288002]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:16 np0005541914.localdomain sudo[288020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 02 09:53:16 np0005541914.localdomain sudo[288020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:16 np0005541914.localdomain sudo[288020]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:16 np0005541914.localdomain sudo[288038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:53:16 np0005541914.localdomain sudo[288038]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:16 np0005541914.localdomain sudo[288038]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:16 np0005541914.localdomain sudo[288056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:53:16 np0005541914.localdomain sudo[288056]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:16 np0005541914.localdomain sudo[288056]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:16 np0005541914.localdomain sudo[288074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:53:16 np0005541914.localdomain sudo[288074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:16 np0005541914.localdomain sudo[288074]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:16 np0005541914.localdomain sudo[288092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:16 np0005541914.localdomain sudo[288092]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:16 np0005541914.localdomain sudo[288092]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:16 np0005541914.localdomain sudo[288110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:53:16 np0005541914.localdomain sudo[288110]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:16 np0005541914.localdomain sudo[288110]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:16 np0005541914.localdomain sudo[288144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:53:16 np0005541914.localdomain sudo[288144]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:16 np0005541914.localdomain sudo[288144]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:16 np0005541914.localdomain sudo[288162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:53:16 np0005541914.localdomain sudo[288162]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:16 np0005541914.localdomain sudo[288162]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:16 np0005541914.localdomain sudo[288180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:53:16 np0005541914.localdomain sudo[288180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:16 np0005541914.localdomain sudo[288180]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:16 np0005541914.localdomain sudo[288198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:53:16 np0005541914.localdomain sudo[288198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:16 np0005541914.localdomain sudo[288198]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:17 np0005541914.localdomain sudo[288216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:17 np0005541914.localdomain sudo[288216]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:17 np0005541914.localdomain podman[288278]: 
Dec 02 09:53:17 np0005541914.localdomain podman[288278]: 2025-12-02 09:53:17.578812376 +0000 UTC m=+0.077163648 container create 0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_benz, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.component=rhceph-container, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=)
Dec 02 09:53:17 np0005541914.localdomain systemd[1]: Started libpod-conmon-0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346.scope.
Dec 02 09:53:17 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:53:17 np0005541914.localdomain podman[288278]: 2025-12-02 09:53:17.545979863 +0000 UTC m=+0.044331135 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:53:17 np0005541914.localdomain podman[288278]: 2025-12-02 09:53:17.649476134 +0000 UTC m=+0.147827436 container init 0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_benz, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7)
Dec 02 09:53:17 np0005541914.localdomain podman[288278]: 2025-12-02 09:53:17.661113931 +0000 UTC m=+0.159465213 container start 0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_benz, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, architecture=x86_64, distribution-scope=public, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, name=rhceph, description=Red Hat Ceph Storage 7)
Dec 02 09:53:17 np0005541914.localdomain podman[288278]: 2025-12-02 09:53:17.661531743 +0000 UTC m=+0.159883065 container attach 0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_benz, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, ceph=True, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:53:17 np0005541914.localdomain bold_benz[288293]: 167 167
Dec 02 09:53:17 np0005541914.localdomain systemd[1]: libpod-0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346.scope: Deactivated successfully.
Dec 02 09:53:17 np0005541914.localdomain podman[288278]: 2025-12-02 09:53:17.664423742 +0000 UTC m=+0.162775034 container died 0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_benz, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, RELEASE=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True)
Dec 02 09:53:17 np0005541914.localdomain podman[288300]: 2025-12-02 09:53:17.748602783 +0000 UTC m=+0.072326061 container remove 0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=bold_benz, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, build-date=2025-11-26T19:44:28Z, release=1763362218)
Dec 02 09:53:17 np0005541914.localdomain systemd[1]: libpod-conmon-0110b4829192ca29463aa3d03f9130e6e324aa275348a26e21dac4539785a346.scope: Deactivated successfully.
Dec 02 09:53:17 np0005541914.localdomain podman[288317]: 
Dec 02 09:53:17 np0005541914.localdomain podman[288317]: 2025-12-02 09:53:17.847239116 +0000 UTC m=+0.071403713 container create 0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_shirley, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, version=7, io.openshift.expose-services=, release=1763362218, vcs-type=git, GIT_CLEAN=True)
Dec 02 09:53:17 np0005541914.localdomain systemd[1]: Started libpod-conmon-0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5.scope.
Dec 02 09:53:17 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:53:17 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628e92a618a11d767b6c844342998e7261329566e2e19b97a6c0c642bce62acc/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Dec 02 09:53:17 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628e92a618a11d767b6c844342998e7261329566e2e19b97a6c0c642bce62acc/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 02 09:53:17 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628e92a618a11d767b6c844342998e7261329566e2e19b97a6c0c642bce62acc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 09:53:17 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/628e92a618a11d767b6c844342998e7261329566e2e19b97a6c0c642bce62acc/merged/var/lib/ceph/mon/ceph-np0005541914 supports timestamps until 2038 (0x7fffffff)
Dec 02 09:53:17 np0005541914.localdomain podman[288317]: 2025-12-02 09:53:17.907171216 +0000 UTC m=+0.131335813 container init 0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_shirley, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, name=rhceph, release=1763362218, com.redhat.component=rhceph-container)
Dec 02 09:53:17 np0005541914.localdomain podman[288317]: 2025-12-02 09:53:17.915705427 +0000 UTC m=+0.139870024 container start 0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_shirley, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:53:17 np0005541914.localdomain podman[288317]: 2025-12-02 09:53:17.915985656 +0000 UTC m=+0.140150283 container attach 0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_shirley, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, version=7, CEPH_POINT_RELEASE=, name=rhceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public)
Dec 02 09:53:17 np0005541914.localdomain podman[288317]: 2025-12-02 09:53:17.821339165 +0000 UTC m=+0.045503812 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: libpod-0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5.scope: Deactivated successfully.
Dec 02 09:53:18 np0005541914.localdomain podman[288317]: 2025-12-02 09:53:18.012278627 +0000 UTC m=+0.236443254 container died 0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_shirley, ceph=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 09:53:18 np0005541914.localdomain podman[288358]: 2025-12-02 09:53:18.097396617 +0000 UTC m=+0.073381212 container remove 0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_shirley, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, ceph=True, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, architecture=x86_64, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=)
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: libpod-conmon-0d9a18dc0ca93721912928f04032a1404d960537dc18e2cfe22f79515700cdf5.scope: Deactivated successfully.
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:53:18 np0005541914.localdomain systemd-sysv-generator[288402]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:53:18 np0005541914.localdomain systemd-rc-local-generator[288396]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a176400a0e49fa3930f61de145058187d36c22c3614a805fe3876899af335cf8-merged.mount: Deactivated successfully.
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:53:18 np0005541914.localdomain systemd-rc-local-generator[288439]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:53:18 np0005541914.localdomain systemd-sysv-generator[288444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:53:18 np0005541914.localdomain systemd[1]: Starting Ceph mon.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074...
Dec 02 09:53:19 np0005541914.localdomain podman[288508]: 
Dec 02 09:53:19 np0005541914.localdomain podman[288508]: 2025-12-02 09:53:19.190702305 +0000 UTC m=+0.058333723 container create 699b233252c58098b0dcca9b2b21425d550e7754773bf4b3759bf26abfe89544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, vendor=Red Hat, Inc., architecture=x86_64, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, RELEASE=main, io.openshift.expose-services=, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7)
Dec 02 09:53:19 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce118f9e1514dd9e8c61f039c0b5ce0d2beef8304000bf74b350ea0ec7a4ea4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 09:53:19 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce118f9e1514dd9e8c61f039c0b5ce0d2beef8304000bf74b350ea0ec7a4ea4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 09:53:19 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce118f9e1514dd9e8c61f039c0b5ce0d2beef8304000bf74b350ea0ec7a4ea4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 09:53:19 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce118f9e1514dd9e8c61f039c0b5ce0d2beef8304000bf74b350ea0ec7a4ea4b/merged/var/lib/ceph/mon/ceph-np0005541914 supports timestamps until 2038 (0x7fffffff)
Dec 02 09:53:19 np0005541914.localdomain podman[288508]: 2025-12-02 09:53:19.249371096 +0000 UTC m=+0.117002514 container init 699b233252c58098b0dcca9b2b21425d550e7754773bf4b3759bf26abfe89544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, CEPH_POINT_RELEASE=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_CLEAN=True, distribution-scope=public, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-type=git, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=1763362218, GIT_BRANCH=main)
Dec 02 09:53:19 np0005541914.localdomain podman[288508]: 2025-12-02 09:53:19.258928089 +0000 UTC m=+0.126559517 container start 699b233252c58098b0dcca9b2b21425d550e7754773bf4b3759bf26abfe89544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:53:19 np0005541914.localdomain bash[288508]: 699b233252c58098b0dcca9b2b21425d550e7754773bf4b3759bf26abfe89544
Dec 02 09:53:19 np0005541914.localdomain podman[288508]: 2025-12-02 09:53:19.161112351 +0000 UTC m=+0.028743769 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:53:19 np0005541914.localdomain systemd[1]: Started Ceph mon.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 02 09:53:19 np0005541914.localdomain sudo[288216]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: set uid:gid to 167:167 (ceph:ceph)
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pidfile_write: ignore empty --pid-file
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: load: jerasure load: lrc 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: RocksDB version: 7.9.2
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Git sha 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: DB SUMMARY
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: DB Session ID:  ES6HEAUO0NO66H72LGQU
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: CURRENT file:  CURRENT
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: IDENTITY file:  IDENTITY
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005541914/store.db dir, Total Num: 0, files: 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005541914/store.db: 000004.log size: 761 ; 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                         Options.error_if_exists: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                       Options.create_if_missing: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                         Options.paranoid_checks: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                                     Options.env: 0x5617097a49e0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                                Options.info_log: 0x56170abd2d20
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                Options.max_file_opening_threads: 16
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                              Options.statistics: (nil)
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                               Options.use_fsync: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                       Options.max_log_file_size: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                         Options.allow_fallocate: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                        Options.use_direct_reads: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:          Options.create_missing_column_families: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                              Options.db_log_dir: 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                                 Options.wal_dir: 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                   Options.advise_random_on_open: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                    Options.write_buffer_manager: 0x56170abe3540
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                            Options.rate_limiter: (nil)
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                  Options.unordered_write: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                               Options.row_cache: None
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                              Options.wal_filter: None
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.allow_ingest_behind: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.two_write_queues: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.manual_wal_flush: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.wal_compression: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.atomic_flush: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                 Options.log_readahead_size: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.allow_data_in_errors: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.db_host_id: __hostname__
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.max_background_jobs: 2
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.max_background_compactions: -1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.max_subcompactions: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.max_total_wal_size: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                          Options.max_open_files: -1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                          Options.bytes_per_sync: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:       Options.compaction_readahead_size: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                  Options.max_background_flushes: -1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Compression algorithms supported:
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         kZSTD supported: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         kXpressCompression supported: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         kBZip2Compression supported: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         kLZ4Compression supported: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         kZlibCompression supported: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         kLZ4HCCompression supported: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         kSnappyCompression supported: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005541914/store.db/MANIFEST-000005
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:           Options.merge_operator: 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:        Options.compaction_filter: None
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56170abd2980)
                                                             cache_index_and_filter_blocks: 1
                                                             cache_index_and_filter_blocks_with_high_priority: 0
                                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                                             pin_top_level_index_and_filter: 1
                                                             index_type: 0
                                                             data_block_index_type: 0
                                                             index_shortening: 1
                                                             data_block_hash_table_util_ratio: 0.750000
                                                             checksum: 4
                                                             no_block_cache: 0
                                                             block_cache: 0x56170abcf350
                                                             block_cache_name: BinnedLRUCache
                                                             block_cache_options:
                                                               capacity : 536870912
                                                               num_shard_bits : 4
                                                               strict_capacity_limit : 0
                                                               high_pri_pool_ratio: 0.000
                                                             block_cache_compressed: (nil)
                                                             persistent_cache: (nil)
                                                             block_size: 4096
                                                             block_size_deviation: 10
                                                             block_restart_interval: 16
                                                             index_block_restart_interval: 1
                                                             metadata_block_size: 4096
                                                             partition_filters: 0
                                                             use_delta_encoding: 1
                                                             filter_policy: bloomfilter
                                                             whole_key_filtering: 1
                                                             verify_compression: 0
                                                             read_amp_bytes_per_bit: 0
                                                             format_version: 5
                                                             enable_index_compression: 1
                                                             block_align: 0
                                                             max_auto_readahead_size: 262144
                                                             prepopulate_block_cache: 0
                                                             initial_auto_readahead_size: 8192
                                                             num_file_reads_for_auto_readahead: 2
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:        Options.write_buffer_size: 33554432
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:  Options.max_write_buffer_number: 2
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:          Options.compression: NoCompression
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.num_levels: 7
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                   Options.table_properties_collectors: 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                           Options.bloom_locality: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                               Options.ttl: 2592000
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                       Options.enable_blob_files: false
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                           Options.min_blob_size: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005541914/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: fef79939-f0d3-4c6e-a3c1-7bf191246dd2
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669199325356, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669199328258, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669199328432, "job": 1, "event": "recovery_finished"}
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56170abf6e00
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: DB pointer 0x56170acec000
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      1/0    1.84 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                            Sum      1/0    1.84 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Flush(GB): cumulative 0.000, interval 0.000
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x56170abcf350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.2e-05 secs_since: 0
                                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 does not exist in monmap, will attempt to join an existing cluster
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: using public_addr v2:172.18.0.108:0/0 -> [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0]
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: starting mon.np0005541914 rank -1 at public addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] at bind addrs [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005541914 fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@-1(???) e0 preinit fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@-1(synchronizing) e3 sync_obtain_latest_monmap
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@-1(synchronizing) e3 sync_obtain_latest_monmap obtained monmap e3
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@-1(synchronizing).mds e16 new map
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@-1(synchronizing).mds e16 print_map
                                                           e16
                                                           enable_multiple, ever_enabled_multiple: 1,1
                                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           legacy client fscid: 1
                                                            
                                                           Filesystem 'cephfs' (1)
                                                           fs_name        cephfs
                                                           epoch        15
                                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                                           created        2025-12-02T08:05:53.424954+0000
                                                           modified        2025-12-02T09:52:13.505190+0000
                                                           tableserver        0
                                                           root        0
                                                           session_timeout        60
                                                           session_autoclose        300
                                                           max_file_size        1099511627776
                                                           required_client_features        {}
                                                           last_failure        0
                                                           last_failure_osd_epoch        84
                                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           max_mds        1
                                                           in        0
                                                           up        {0=26573}
                                                           failed        
                                                           damaged        
                                                           stopped        
                                                           data_pools        [6]
                                                           metadata_pool        7
                                                           inline_data        disabled
                                                           balancer        
                                                           bal_rank_mask        -1
                                                           standby_count_wanted        1
                                                           qdb_cluster        leader: 26573 members: 26573
                                                           [mds.mds.np0005541912.ghcwcm{0:26573} state up:active seq 13 addr [v2:172.18.0.106:6808/955707462,v1:172.18.0.106:6809/955707462] compat {c=[1],r=[1],i=[17ff]}]
                                                            
                                                            
                                                           Standby daemons:
                                                            
                                                           [mds.mds.np0005541914.sqgqkj{-1:16923} state up:standby seq 1 addr [v2:172.18.0.108:6808/2216063099,v1:172.18.0.108:6809/2216063099] compat {c=[1],r=[1],i=[17ff]}]
                                                           [mds.mds.np0005541913.maexpe{-1:26386} state up:standby seq 1 addr [v2:172.18.0.107:6808/3746047079,v1:172.18.0.107:6809/3746047079] compat {c=[1],r=[1],i=[17ff]}]
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@-1(synchronizing).osd e85 crush map has features 3314933000852226048, adjusting msgr requires
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@-1(synchronizing).osd e85 crush map has features 288514051259236352, adjusting msgr requires
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3921: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.106:0/3860598798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.106:0/2713840862' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3922: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17058 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541912.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label mgr to host np0005541912.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3923: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17064 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541913.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label mgr to host np0005541913.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17070 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541914.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label mgr to host np0005541914.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3924: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17076 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Saving service mgr spec with placement label:mgr
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Deploying daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3925: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17082 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mgr", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Deploying daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3926: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17094 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541909.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label mon to host np0005541909.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17100 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541909.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label _admin to host np0005541909.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Deploying daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3927: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17109 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541910.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label mon to host np0005541910.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17115 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541910.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label _admin to host np0005541910.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Standby manager daemon np0005541912.qwddia started
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3928: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mgrmap e12: np0005541909.kfesnk(active, since 2h), standbys: np0005541911.adcgiw, np0005541910.kzipdo, np0005541912.qwddia
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.32:0/1830186127' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.32:0/1830186127' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17127 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541911.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label mon to host np0005541911.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Standby manager daemon np0005541913.mfesdm started
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3929: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17142 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541911.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label _admin to host np0005541911.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17148 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541912.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mgrmap e13: np0005541909.kfesnk(active, since 2h), standbys: np0005541911.adcgiw, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label mon to host np0005541912.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3930: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Standby manager daemon np0005541914.lljzmk started
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17154 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541912.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label _admin to host np0005541912.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mgrmap e14: np0005541909.kfesnk(active, since 2h), standbys: np0005541911.adcgiw, np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3931: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17160 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541913.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label mon to host np0005541913.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541913.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label _admin to host np0005541913.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3932: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17172 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541914.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label mon to host np0005541914.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3933: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17178 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005541914.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Added label _admin to host np0005541914.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Saving service mon spec with placement label:mon
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3934: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='client.17190 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541912", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: Deploying daemon mon.np0005541914 on np0005541914.localdomain
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: pgmap v3935: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3
Dec 02 09:53:19 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5f1e0 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Dec 02 09:53:21 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@-1(probing) e4  my rank is now 3 (was -1)
Dec 02 09:53:21 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:53:21 np0005541914.localdomain ceph-mon[288526]: paxos.3).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 02 09:53:21 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e4 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:53:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:53:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:53:22 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e4  adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints
Dec 02 09:53:22 np0005541914.localdomain systemd[1]: tmp-crun.Fx4rX9.mount: Deactivated successfully.
Dec 02 09:53:22 np0005541914.localdomain podman[288566]: 2025-12-02 09:53:22.109493295 +0000 UTC m=+0.101682297 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 02 09:53:22 np0005541914.localdomain podman[288565]: 2025-12-02 09:53:22.07133176 +0000 UTC m=+0.070992491 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:53:22 np0005541914.localdomain podman[288565]: 2025-12-02 09:53:22.155997016 +0000 UTC m=+0.155657707 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:53:22 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:53:22 np0005541914.localdomain podman[288566]: 2025-12-02 09:53:22.174865643 +0000 UTC m=+0.167054655 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:53:22 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:53:22 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e4  adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints
Dec 02 09:53:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:53:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:53:23 np0005541914.localdomain podman[288606]: 2025-12-02 09:53:23.074516224 +0000 UTC m=+0.081191841 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:53:23 np0005541914.localdomain podman[288606]: 2025-12-02 09:53:23.084005254 +0000 UTC m=+0.090680911 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 02 09:53:23 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:53:23 np0005541914.localdomain systemd[1]: tmp-crun.9iKH8a.mount: Deactivated successfully.
Dec 02 09:53:23 np0005541914.localdomain podman[288607]: 2025-12-02 09:53:23.186373571 +0000 UTC m=+0.188164178 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 09:53:23 np0005541914.localdomain podman[288607]: 2025-12-02 09:53:23.254603985 +0000 UTC m=+0.256394552 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 09:53:23 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e4  adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e4 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon) e4 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon) e4 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: mgrc update_daemon_metadata mon.np0005541914 metadata {addrs=[v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005541914.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005541914.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux}
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: pgmap v3936: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: Deploying daemon mon.np0005541913 on np0005541913.localdomain
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541909"} : dispatch
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541910"} : dispatch
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541909 calling monitor election
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541910 calling monitor election
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: pgmap v3937: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: pgmap v3938: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541909 is new leader, mons np0005541909,np0005541911,np0005541910,np0005541914 in quorum (ranks 0,1,2,3)
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: monmap epoch 4
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:53:19.558333+0000
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541909
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541910
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: 3: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: osdmap e85: 6 total, 6 up, 6 in
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: mgrmap e14: np0005541909.kfesnk(active, since 2h), standbys: np0005541911.adcgiw, np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: overall HEALTH_OK
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon) e4 handle_auth_request failed to assign global_id
Dec 02 09:53:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:53:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:53:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:25 np0005541914.localdomain ceph-mon[288526]: Deploying daemon mon.np0005541912 on np0005541912.localdomain
Dec 02 09:53:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:53:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:53:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon) e4  adding peer [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] to list of hints
Dec 02 09:53:26 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5ef20 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Dec 02 09:53:26 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:53:26 np0005541914.localdomain ceph-mon[288526]: paxos.3).electionLogic(18) init, last seen epoch 18
Dec 02 09:53:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:53:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:53:27 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 02 09:53:27 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 02 09:53:27 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 02 09:53:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 02 09:53:31 np0005541914.localdomain ceph-mds[285895]: mds.beacon.mds.np0005541914.sqgqkj missed beacon ack from the monitors
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon) e5 handle_auth_request failed to assign global_id
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541909"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541910"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541910 calling monitor election
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541909 calling monitor election
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: pgmap v3940: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: pgmap v3941: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541909 is new leader, mons np0005541909,np0005541911,np0005541910,np0005541914,np0005541913 in quorum (ranks 0,1,2,3,4)
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: monmap epoch 5
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:53:26.303070+0000
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541909
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541910
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: 3: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: 4: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005541913
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: osdmap e85: 6 total, 6 up, 6 in
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: mgrmap e14: np0005541909.kfesnk(active, since 2h), standbys: np0005541911.adcgiw, np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: overall HEALTH_OK
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Dec 02 09:53:31 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5f600 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: paxos.3).electionLogic(22) init, last seen epoch 22
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:53:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:53:31 np0005541914.localdomain sudo[288650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:53:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:53:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:53:32 np0005541914.localdomain sudo[288650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:32 np0005541914.localdomain sudo[288650]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:32 np0005541914.localdomain systemd[1]: tmp-crun.mqs3HQ.mount: Deactivated successfully.
Dec 02 09:53:32 np0005541914.localdomain podman[288667]: 2025-12-02 09:53:32.091413126 +0000 UTC m=+0.088518135 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:53:32 np0005541914.localdomain sudo[288689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:53:32 np0005541914.localdomain sudo[288689]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:32 np0005541914.localdomain podman[288667]: 2025-12-02 09:53:32.099911006 +0000 UTC m=+0.097016085 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:53:32 np0005541914.localdomain sudo[288689]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:32 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:53:32 np0005541914.localdomain sudo[288719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:53:32 np0005541914.localdomain sudo[288719]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:32 np0005541914.localdomain podman[288668]: 2025-12-02 09:53:32.196728693 +0000 UTC m=+0.191521761 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9)
Dec 02 09:53:32 np0005541914.localdomain podman[288668]: 2025-12-02 09:53:32.21530238 +0000 UTC m=+0.210095438 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, managed_by=edpm_ansible, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 02 09:53:32 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:53:33 np0005541914.localdomain podman[288819]: 2025-12-02 09:53:33.069735731 +0000 UTC m=+0.088347610 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, distribution-scope=public, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, io.buildah.version=1.41.4, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_CLEAN=True, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 02 09:53:33 np0005541914.localdomain systemd[1]: tmp-crun.HCbcC8.mount: Deactivated successfully.
Dec 02 09:53:33 np0005541914.localdomain podman[288819]: 2025-12-02 09:53:33.180776183 +0000 UTC m=+0.199388052 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, distribution-scope=public, vcs-type=git, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, name=rhceph, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., RELEASE=main)
Dec 02 09:53:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:53:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:53:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:53:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 09:53:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:53:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19167 "" "Go-http-client/1.1"
Dec 02 09:53:33 np0005541914.localdomain sudo[288719]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: pgmap v3942: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541909"} : dispatch
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541910"} : dispatch
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541909 calling monitor election
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541910 calling monitor election
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='client.17200 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541912", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: pgmap v3943: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: pgmap v3944: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541909 is new leader, mons np0005541909,np0005541911,np0005541910,np0005541914,np0005541913,np0005541912 in quorum (ranks 0,1,2,3,4,5)
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: monmap epoch 6
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:53:31.525725+0000
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541909
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541910
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: 3: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: 4: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005541913
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: 5: [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon.np0005541912
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: osdmap e85: 6 total, 6 up, 6 in
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: mgrmap e14: np0005541909.kfesnk(active, since 2h), standbys: np0005541911.adcgiw, np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: overall HEALTH_OK
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:36 np0005541914.localdomain sudo[288937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:53:36 np0005541914.localdomain sudo[288937]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:53:36 np0005541914.localdomain sudo[288937]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:36 np0005541914.localdomain sudo[288957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:53:36 np0005541914.localdomain sudo[288957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:36 np0005541914.localdomain systemd[1]: tmp-crun.8Fkl6g.mount: Deactivated successfully.
Dec 02 09:53:36 np0005541914.localdomain podman[288955]: 2025-12-02 09:53:36.908845464 +0000 UTC m=+0.091717192 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:53:36 np0005541914.localdomain podman[288955]: 2025-12-02 09:53:36.924774261 +0000 UTC m=+0.107645989 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 02 09:53:36 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:53:37 np0005541914.localdomain sudo[288957]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:53:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:53:37 np0005541914.localdomain sudo[289023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:53:37 np0005541914.localdomain sudo[289023]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:37 np0005541914.localdomain sudo[289023]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:37 np0005541914.localdomain sudo[289041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:53:37 np0005541914.localdomain sudo[289041]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:37 np0005541914.localdomain sudo[289041]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:37 np0005541914.localdomain sudo[289059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:53:37 np0005541914.localdomain sudo[289059]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:37 np0005541914.localdomain sudo[289059]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:37 np0005541914.localdomain sudo[289077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:37 np0005541914.localdomain sudo[289077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:37 np0005541914.localdomain sudo[289077]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:38 np0005541914.localdomain sudo[289095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:53:38 np0005541914.localdomain sudo[289095]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:38 np0005541914.localdomain sudo[289095]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:38 np0005541914.localdomain sudo[289129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:53:38 np0005541914.localdomain sudo[289129]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:38 np0005541914.localdomain sudo[289129]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:38 np0005541914.localdomain sudo[289147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:53:38 np0005541914.localdomain sudo[289147]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:38 np0005541914.localdomain sudo[289147]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:38 np0005541914.localdomain sudo[289165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:53:38 np0005541914.localdomain sudo[289165]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:38 np0005541914.localdomain sudo[289165]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:38 np0005541914.localdomain sudo[289183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:53:38 np0005541914.localdomain sudo[289183]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:38 np0005541914.localdomain sudo[289183]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:38 np0005541914.localdomain sudo[289201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:53:38 np0005541914.localdomain sudo[289201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:38 np0005541914.localdomain sudo[289201]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:38 np0005541914.localdomain sudo[289219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:53:38 np0005541914.localdomain sudo[289219]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:38 np0005541914.localdomain sudo[289219]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:38 np0005541914.localdomain sudo[289237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:38 np0005541914.localdomain sudo[289237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:38 np0005541914.localdomain sudo[289237]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:38 np0005541914.localdomain sudo[289255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:53:38 np0005541914.localdomain sudo[289255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:38 np0005541914.localdomain sudo[289255]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:38 np0005541914.localdomain sudo[289289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:53:38 np0005541914.localdomain sudo[289289]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:38 np0005541914.localdomain sudo[289289]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:38 np0005541914.localdomain sudo[289307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:53:38 np0005541914.localdomain sudo[289307]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:38 np0005541914.localdomain sudo[289307]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:38 np0005541914.localdomain sudo[289325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:38 np0005541914.localdomain sudo[289325]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:38 np0005541914.localdomain sudo[289325]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:38 np0005541914.localdomain ceph-mon[288526]: pgmap v3945: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:38 np0005541914.localdomain ceph-mon[288526]: Updating np0005541909.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:38 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:38 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:38 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:38 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:38 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:38 np0005541914.localdomain ceph-mon[288526]: from='client.17208 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541912", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:53:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:39 np0005541914.localdomain sudo[289343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:53:39 np0005541914.localdomain sudo[289343]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:39 np0005541914.localdomain sudo[289343]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: Updating np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.843517) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219843620, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 10056, "num_deletes": 255, "total_data_size": 10697787, "memory_usage": 10990336, "flush_reason": "Manual Compaction"}
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219895795, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 9090844, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 10061, "table_properties": {"data_size": 9037725, "index_size": 28245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23813, "raw_key_size": 250162, "raw_average_key_size": 26, "raw_value_size": 8876005, "raw_average_value_size": 934, "num_data_blocks": 1084, "num_entries": 9502, "num_filter_entries": 9502, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 1764669199, "file_creation_time": 1764669219, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 52335 microseconds, and 14235 cpu microseconds.
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.895855) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 9090844 bytes OK
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.895880) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.897521) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.897551) EVENT_LOG_v1 {"time_micros": 1764669219897543, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.897572) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 10628616, prev total WAL file size 10628616, number of live WAL files 2.
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.899777) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. '7061786F73003130323932' seq:0, type:0; will stop at (end)
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(8877KB) 8(1887B)]
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219899902, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 9092731, "oldest_snapshot_seqno": -1}
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9250 keys, 9086935 bytes, temperature: kUnknown
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219967548, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 9086935, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9034484, "index_size": 28222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 245355, "raw_average_key_size": 26, "raw_value_size": 8876063, "raw_average_value_size": 959, "num_data_blocks": 1083, "num_entries": 9250, "num_filter_entries": 9250, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669219, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.967897) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 9086935 bytes
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.970300) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.2 rd, 134.1 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(8.7, 0.0 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 9507, records dropped: 257 output_compression: NoCompression
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.970329) EVENT_LOG_v1 {"time_micros": 1764669219970317, "job": 4, "event": "compaction_finished", "compaction_time_micros": 67773, "compaction_time_cpu_micros": 30217, "output_level": 6, "num_output_files": 1, "total_output_size": 9086935, "num_input_records": 9507, "num_output_records": 9250, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219971751, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669219971828, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 02 09:53:39 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:53:39.899433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:53:40 np0005541914.localdomain ceph-mon[288526]: pgmap v3946: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:40 np0005541914.localdomain ceph-mon[288526]: from='client.17214 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541913", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:53:40 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541909 (monmap changed)...
Dec 02 09:53:40 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541909 on np0005541909.localdomain
Dec 02 09:53:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541909.kfesnk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:53:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:53:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:41 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541909.kfesnk (monmap changed)...
Dec 02 09:53:41 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541909.kfesnk on np0005541909.localdomain
Dec 02 09:53:41 np0005541914.localdomain ceph-mon[288526]: from='client.34107 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541914", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:53:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541909.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:53:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:53:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:53:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:53:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:53:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:53:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:53:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:53:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:53:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:53:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:53:42 np0005541914.localdomain ceph-mon[288526]: pgmap v3947: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:42 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541909 (monmap changed)...
Dec 02 09:53:42 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541909 on np0005541909.localdomain
Dec 02 09:53:42 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:42 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:42 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:53:42 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:42 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.103:0/3005476938' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 02 09:53:43 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541910 (monmap changed)...
Dec 02 09:53:43 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain
Dec 02 09:53:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:53:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:53:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:43 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.103:0/3224647752' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Dec 02 09:53:44 np0005541914.localdomain ceph-mon[288526]: pgmap v3948: 177 pgs: 177 active+clean; 104 MiB data, 544 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:44 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541910 (monmap changed)...
Dec 02 09:53:44 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541910 on np0005541910.localdomain
Dec 02 09:53:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:53:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:53:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' 
Dec 02 09:53:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:53:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon).osd e85 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon).osd e85 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon).osd e86 e86: 6 total, 6 up, 6 in
Dec 02 09:53:45 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 02 09:53:45 np0005541914.localdomain sshd[26286]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:53:45 np0005541914.localdomain sshd[26422]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:53:45 np0005541914.localdomain sshd[26308]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:53:45 np0005541914.localdomain sshd[26268]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:53:45 np0005541914.localdomain sshd[26384]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:53:45 np0005541914.localdomain sshd[26403]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:53:45 np0005541914.localdomain sshd[26327]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: session-23.scope: Deactivated successfully.
Dec 02 09:53:45 np0005541914.localdomain sshd[26365]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:53:45 np0005541914.localdomain sshd[26441]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: session-21.scope: Deactivated successfully.
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: session-18.scope: Deactivated successfully.
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: session-20.scope: Deactivated successfully.
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: session-24.scope: Deactivated successfully.
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: session-17.scope: Deactivated successfully.
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: session-22.scope: Deactivated successfully.
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: session-14.scope: Deactivated successfully.
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: session-16.scope: Deactivated successfully.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Session 16 logged out. Waiting for processes to exit.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Session 23 logged out. Waiting for processes to exit.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Session 22 logged out. Waiting for processes to exit.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Session 21 logged out. Waiting for processes to exit.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Session 24 logged out. Waiting for processes to exit.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Session 18 logged out. Waiting for processes to exit.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Session 20 logged out. Waiting for processes to exit.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Session 17 logged out. Waiting for processes to exit.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Session 14 logged out. Waiting for processes to exit.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Removed session 23.
Dec 02 09:53:45 np0005541914.localdomain sshd[26346]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:53:45 np0005541914.localdomain sshd[26477]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: session-19.scope: Deactivated successfully.
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: session-26.scope: Deactivated successfully.
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: session-26.scope: Consumed 3min 26.753s CPU time.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Removed session 21.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Session 19 logged out. Waiting for processes to exit.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Session 26 logged out. Waiting for processes to exit.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Removed session 18.
Dec 02 09:53:45 np0005541914.localdomain sshd[26458]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Removed session 20.
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: session-25.scope: Deactivated successfully.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Session 25 logged out. Waiting for processes to exit.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Removed session 24.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Removed session 17.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Removed session 22.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Removed session 14.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Removed session 16.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Removed session 19.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Removed session 26.
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: Removed session 25.
Dec 02 09:53:45 np0005541914.localdomain sshd[289362]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:53:45 np0005541914.localdomain sshd[289362]: Accepted publickey for ceph-admin from 192.168.122.105 port 50128 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 09:53:45 np0005541914.localdomain systemd-logind[760]: New session 64 of user ceph-admin.
Dec 02 09:53:45 np0005541914.localdomain systemd[1]: Started Session 64 of User ceph-admin.
Dec 02 09:53:45 np0005541914.localdomain sshd[289362]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 09:53:45 np0005541914.localdomain sudo[289366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:53:45 np0005541914.localdomain sudo[289366]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:45 np0005541914.localdomain sudo[289366]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)...
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541911 (monmap changed)...
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14120 172.18.0.103:0/408290768' entity='mgr.np0005541909.kfesnk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.103:0/1327578721' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: Activating manager daemon np0005541911.adcgiw
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: osdmap e86: 6 total, 6 up, 6 in
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.103:0/1327578721' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: mgrmap e15: np0005541911.adcgiw(active, starting, since 0.0533978s), standbys: np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541909"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541910"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr metadata", "who": "np0005541911.adcgiw", "id": "np0005541911.adcgiw"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr metadata", "who": "np0005541910.kzipdo", "id": "np0005541910.kzipdo"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mds metadata"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd metadata"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: Manager daemon np0005541911.adcgiw is now available
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/mirror_snapshot_schedule"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/mirror_snapshot_schedule"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/trash_purge_schedule"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541911.adcgiw/trash_purge_schedule"} : dispatch
Dec 02 09:53:45 np0005541914.localdomain sudo[289384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:53:45 np0005541914.localdomain sudo[289384]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:46 np0005541914.localdomain podman[289475]: 2025-12-02 09:53:46.763199939 +0000 UTC m=+0.096859680 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, RELEASE=main, vcs-type=git, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218)
Dec 02 09:53:46 np0005541914.localdomain podman[289475]: 2025-12-02 09:53:46.869722793 +0000 UTC m=+0.203382524 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.expose-services=, release=1763362218, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:53:47 np0005541914.localdomain ceph-mon[288526]: mgrmap e16: np0005541911.adcgiw(active, since 1.07358s), standbys: np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia
Dec 02 09:53:47 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:53:46] ENGINE Bus STARTING
Dec 02 09:53:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:47 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.107:0/969090503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:53:47 np0005541914.localdomain sudo[289384]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:47 np0005541914.localdomain sudo[289597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:53:47 np0005541914.localdomain sudo[289597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:47 np0005541914.localdomain sudo[289597]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:47 np0005541914.localdomain sudo[289615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:53:47 np0005541914.localdomain sudo[289615]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:53:46] ENGINE Serving on https://172.18.0.105:7150
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:53:46] ENGINE Client ('172.18.0.105', 60410) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:53:47] ENGINE Serving on http://172.18.0.105:8765
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:53:47] ENGINE Bus STARTED
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:48 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.107:0/22063984' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:53:48 np0005541914.localdomain sudo[289615]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:48.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:53:48 np0005541914.localdomain sudo[289665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:53:48 np0005541914.localdomain sudo[289665]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:48.549 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:53:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:48.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:53:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:48.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:53:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:48.551 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:53:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:48.551 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:53:48 np0005541914.localdomain sudo[289665]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:48 np0005541914.localdomain sudo[289684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 02 09:53:48 np0005541914.localdomain sudo[289684]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:49.004 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:53:49 np0005541914.localdomain sudo[289684]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:49.174 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:53:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:49.176 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12021MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:53:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:49.177 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:53:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:49.177 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:53:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:49.236 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:53:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:49.236 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:53:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:49.257 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:53:49 np0005541914.localdomain sudo[289742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:53:49 np0005541914.localdomain sudo[289742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:49 np0005541914.localdomain sudo[289742]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:49 np0005541914.localdomain sudo[289761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon).osd e86 _set_new_cache_sizes cache_size:1019818837 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:53:49 np0005541914.localdomain sudo[289761]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:49 np0005541914.localdomain sudo[289761]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:49 np0005541914.localdomain sudo[289780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:53:49 np0005541914.localdomain sudo[289780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:49 np0005541914.localdomain sudo[289780]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: mgrmap e17: np0005541911.adcgiw(active, since 3s), standbys: np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.108:0/1424767511' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541909", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd/host:np0005541909", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:53:49 np0005541914.localdomain sudo[289816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:49 np0005541914.localdomain sudo[289816]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:49 np0005541914.localdomain sudo[289816]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:49 np0005541914.localdomain sudo[289834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:53:49 np0005541914.localdomain sudo[289834]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:49 np0005541914.localdomain sudo[289834]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:49.684 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:53:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:49.691 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:53:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:49.706 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:53:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:49.709 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:53:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:49.710 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:53:49 np0005541914.localdomain sudo[289870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:53:49 np0005541914.localdomain sudo[289870]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:49 np0005541914.localdomain sudo[289870]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:49 np0005541914.localdomain sudo[289888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:53:49 np0005541914.localdomain sudo[289888]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:49 np0005541914.localdomain sudo[289888]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:49 np0005541914.localdomain sudo[289906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:53:49 np0005541914.localdomain sudo[289906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:49 np0005541914.localdomain sudo[289906]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:49 np0005541914.localdomain sudo[289924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:53:49 np0005541914.localdomain sudo[289924]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:49 np0005541914.localdomain sudo[289924]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:50 np0005541914.localdomain sudo[289942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:53:50 np0005541914.localdomain sudo[289942]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:50 np0005541914.localdomain sudo[289942]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:50 np0005541914.localdomain sudo[289960]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:53:50 np0005541914.localdomain sudo[289960]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:50 np0005541914.localdomain sudo[289960]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:50 np0005541914.localdomain sudo[289978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:50 np0005541914.localdomain sudo[289978]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:50 np0005541914.localdomain sudo[289978]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:50 np0005541914.localdomain sudo[289996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:53:50 np0005541914.localdomain sudo[289996]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:50 np0005541914.localdomain sudo[289996]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:50 np0005541914.localdomain sudo[290030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:53:50 np0005541914.localdomain sudo[290030]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:50 np0005541914.localdomain sudo[290030]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:50 np0005541914.localdomain sudo[290048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:53:50 np0005541914.localdomain sudo[290048]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:50 np0005541914.localdomain sudo[290048]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Updating np0005541909.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.108:0/3338051788' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: mgrmap e18: np0005541911.adcgiw(active, since 5s), standbys: np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia
Dec 02 09:53:50 np0005541914.localdomain ceph-mon[288526]: Standby manager daemon np0005541909.kfesnk started
Dec 02 09:53:50 np0005541914.localdomain sudo[290066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:50 np0005541914.localdomain sudo[290066]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:50 np0005541914.localdomain sudo[290066]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:50 np0005541914.localdomain sudo[290084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:53:50 np0005541914.localdomain sudo[290084]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:50 np0005541914.localdomain sudo[290084]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:50 np0005541914.localdomain sudo[290102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:53:50 np0005541914.localdomain sudo[290102]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:50 np0005541914.localdomain sudo[290102]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:50 np0005541914.localdomain sudo[290120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:53:50 np0005541914.localdomain sudo[290120]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:50 np0005541914.localdomain sudo[290120]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:50.707 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:53:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:50.707 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:53:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:50.724 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:53:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:50.725 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:53:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:50.725 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:53:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:50.737 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:53:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:50.738 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:53:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:50.739 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:53:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:50.739 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:53:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:50.739 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:53:50 np0005541914.localdomain sudo[290138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:50 np0005541914.localdomain sudo[290138]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:50 np0005541914.localdomain sudo[290138]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:50 np0005541914.localdomain sudo[290156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:53:50 np0005541914.localdomain sudo[290156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:50 np0005541914.localdomain sudo[290156]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:50 np0005541914.localdomain sudo[290190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:53:50 np0005541914.localdomain sudo[290190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:50 np0005541914.localdomain sudo[290190]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:51 np0005541914.localdomain sudo[290208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:53:51 np0005541914.localdomain sudo[290208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:51 np0005541914.localdomain sudo[290208]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:51 np0005541914.localdomain sudo[290226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 02 09:53:51 np0005541914.localdomain sudo[290226]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:51 np0005541914.localdomain sudo[290226]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:51 np0005541914.localdomain sudo[290244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:53:51 np0005541914.localdomain sudo[290244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:51 np0005541914.localdomain sudo[290244]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:51 np0005541914.localdomain sudo[290262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:53:51 np0005541914.localdomain sudo[290262]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:51 np0005541914.localdomain sudo[290262]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:51 np0005541914.localdomain sudo[290280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:53:51 np0005541914.localdomain sudo[290280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:51 np0005541914.localdomain sudo[290280]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:51 np0005541914.localdomain sudo[290298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:53:51 np0005541914.localdomain sudo[290298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:51 np0005541914.localdomain sudo[290298]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:51 np0005541914.localdomain sudo[290316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:53:51 np0005541914.localdomain sudo[290316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:51 np0005541914.localdomain sudo[290316]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541909.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: mgrmap e19: np0005541911.adcgiw(active, since 6s), standbys: np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541909.kfesnk
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr metadata", "who": "np0005541909.kfesnk", "id": "np0005541909.kfesnk"} : dispatch
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.106:0/3009783839' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:51 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:51.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:53:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:51.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:53:51 np0005541914.localdomain sudo[290350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:53:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:53:51.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:53:51 np0005541914.localdomain sudo[290350]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:51 np0005541914.localdomain sudo[290350]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:51 np0005541914.localdomain sudo[290368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:53:51 np0005541914.localdomain sudo[290368]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:51 np0005541914.localdomain sudo[290368]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:51 np0005541914.localdomain sudo[290386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:53:51 np0005541914.localdomain sudo[290386]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:51 np0005541914.localdomain sudo[290386]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:52 np0005541914.localdomain sudo[290404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:53:52 np0005541914.localdomain sudo[290404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:53:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:53:52 np0005541914.localdomain sudo[290404]: pam_unix(sudo:session): session closed for user root
Dec 02 09:53:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:53:52 np0005541914.localdomain systemd[1]: tmp-crun.zqo9r8.mount: Deactivated successfully.
Dec 02 09:53:52 np0005541914.localdomain podman[290423]: 2025-12-02 09:53:52.471925207 +0000 UTC m=+0.144056613 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 09:53:52 np0005541914.localdomain podman[290423]: 2025-12-02 09:53:52.479769437 +0000 UTC m=+0.151900833 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 09:53:52 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: Updating np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 0 B/s wr, 23 op/s
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.106:0/4070629179' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:53:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:52 np0005541914.localdomain podman[290422]: 2025-12-02 09:53:52.435214155 +0000 UTC m=+0.110551668 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:53:52 np0005541914.localdomain podman[290422]: 2025-12-02 09:53:52.56338216 +0000 UTC m=+0.238719703 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 09:53:52 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:53:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:53:53 np0005541914.localdomain podman[290460]: 2025-12-02 09:53:53.321221901 +0000 UTC m=+0.078316083 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 09:53:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:53:53 np0005541914.localdomain podman[290460]: 2025-12-02 09:53:53.352086134 +0000 UTC m=+0.109180316 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec 02 09:53:53 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:53:53 np0005541914.localdomain systemd[1]: tmp-crun.G93iCy.mount: Deactivated successfully.
Dec 02 09:53:53 np0005541914.localdomain podman[290479]: 2025-12-02 09:53:53.433624904 +0000 UTC m=+0.081847641 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 02 09:53:53 np0005541914.localdomain podman[290479]: 2025-12-02 09:53:53.501976732 +0000 UTC m=+0.150199519 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 02 09:53:53 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:53:53 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541911 (monmap changed)...
Dec 02 09:53:53 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain
Dec 02 09:53:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:53:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:53:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:53:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon).osd e86 _set_new_cache_sizes cache_size:1020050548 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:53:54 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 02 09:53:54 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 02 09:53:54 np0005541914.localdomain ceph-mon[288526]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Dec 02 09:53:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:53:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:53:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:55 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541911 (monmap changed)...
Dec 02 09:53:55 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 02 09:53:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:53:55 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 02 09:53:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:53:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:55 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 02 09:53:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:56 np0005541914.localdomain ceph-mon[288526]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Dec 02 09:53:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:56 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.2 (monmap changed)...
Dec 02 09:53:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:53:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:56 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:53:58 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:58 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:58 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:53:58 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054656 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.5 (monmap changed)...
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: from='client.34161 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:53:59 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:00 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 02 09:54:00 np0005541914.localdomain ceph-mon[288526]: pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 02 09:54:00 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 02 09:54:00 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 02 09:54:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 ' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:54:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:54:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:01 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5f600 mon_map magic: 0 from mon.0 v2:172.18.0.103:3300/0
Dec 02 09:54:01 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@3(peon) e7  my rank is now 2 (was 3)
Dec 02 09:54:01 np0005541914.localdomain ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Dec 02 09:54:01 np0005541914.localdomain ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Dec 02 09:54:01 np0005541914.localdomain ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0
Dec 02 09:54:01 np0005541914.localdomain ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0
Dec 02 09:54:01 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5ef20 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Dec 02 09:54:01 np0005541914.localdomain sshd[290504]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:54:02 np0005541914.localdomain sshd[290504]: Received disconnect from 34.78.29.97 port 51816:11: Bye Bye [preauth]
Dec 02 09:54:02 np0005541914.localdomain sshd[290504]: Disconnected from authenticating user root 34.78.29.97 port 51816 [preauth]
Dec 02 09:54:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:54:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:54:02 np0005541914.localdomain podman[290507]: 2025-12-02 09:54:02.556052578 +0000 UTC m=+0.088680109 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_id=edpm, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.buildah.version=1.33.7, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:54:02 np0005541914.localdomain podman[290506]: 2025-12-02 09:54:02.629134291 +0000 UTC m=+0.162616908 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:54:02 np0005541914.localdomain podman[290507]: 2025-12-02 09:54:02.650020559 +0000 UTC m=+0.182648070 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, version=9.6, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64)
Dec 02 09:54:02 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:54:02 np0005541914.localdomain podman[290506]: 2025-12-02 09:54:02.665947096 +0000 UTC m=+0.199429753 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:54:02 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:54:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:54:03.165 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:54:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:54:03.165 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:54:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:54:03.165 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:54:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:54:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:54:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:54:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 09:54:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:54:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19174 "" "Go-http-client/1.1"
Dec 02 09:54:03 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:54:03 np0005541914.localdomain ceph-mon[288526]: paxos.2).electionLogic(26) init, last seen epoch 26
Dec 02 09:54:03 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:54:05 np0005541914.localdomain sshd[290549]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:54:06 np0005541914.localdomain sshd[290549]: error: kex_exchange_identification: Connection closed by remote host
Dec 02 09:54:06 np0005541914.localdomain sshd[290549]: Connection closed by 103.42.181.150 port 45800
Dec 02 09:54:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:54:07 np0005541914.localdomain podman[290550]: 2025-12-02 09:54:07.074676599 +0000 UTC m=+0.081888262 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Dec 02 09:54:07 np0005541914.localdomain podman[290550]: 2025-12-02 09:54:07.088976977 +0000 UTC m=+0.096188650 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:54:07 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: paxos.2).electionLogic(27) init, last seen epoch 27, mid-election, bumping
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541912 (monmap changed)...
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='client.34179 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005541909"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: Remove daemons mon.np0005541909
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "quorum_status"} : dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: Safe to remove mon.np0005541909: new quorum should be ['np0005541911', 'np0005541910', 'np0005541914', 'np0005541913', 'np0005541912'] (from ['np0005541911', 'np0005541910', 'np0005541914', 'np0005541913', 'np0005541912'])
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: Removing monitor np0005541909 from monmap...
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon rm", "name": "np0005541909"} : dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: Removing daemon mon.np0005541909 from np0005541909.localdomain -- ports []
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: mon.np0005541910 calling monitor election
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541910"} : dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541913,np0005541912 in quorum (ranks 0,1,3,4)
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: monmap epoch 7
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:54:01.619349+0000
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541910
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005541913
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: 4: [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon.np0005541912
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: osdmap e86: 6 total, 6 up, 6 in
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: mgrmap e19: np0005541911.adcgiw(active, since 21s), standbys: np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541909.kfesnk
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: Health check failed: 1/5 mons down, quorum np0005541911,np0005541910,np0005541913,np0005541912 (MON_DOWN)
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005541911,np0005541910,np0005541913,np0005541912
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005541911,np0005541910,np0005541913,np0005541912
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]:     mon.np0005541914 (rank 2) addr [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] is down (out of quorum)
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.0 (monmap changed)...
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.32:0/3328361141' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.32:0/3328361141' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 09:54:08 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054730 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: from='client.34169 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541909.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: Removed label mon from host np0005541909.localdomain
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.3 (monmap changed)...
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541910 calling monitor election
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541914,np0005541913,np0005541912 in quorum (ranks 0,1,2,3,4)
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: monmap epoch 7
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:54:01.619349+0000
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541910
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005541913
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: 4: [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon.np0005541912
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: osdmap e86: 6 total, 6 up, 6 in
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: mgrmap e19: np0005541911.adcgiw(active, since 23s), standbys: np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541909.kfesnk
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005541911,np0005541910,np0005541913,np0005541912)
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: Cluster is now healthy
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: overall HEALTH_OK
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:10 np0005541914.localdomain ceph-mon[288526]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:10 np0005541914.localdomain ceph-mon[288526]: from='client.34203 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541909.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:54:10 np0005541914.localdomain ceph-mon[288526]: Removed label mgr from host np0005541909.localdomain
Dec 02 09:54:10 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:10 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:54:10 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:54:10 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:10 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:54:10 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:10 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:10 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:54:10 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:54:10 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:11 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 02 09:54:11 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:54:11 np0005541914.localdomain ceph-mon[288526]: from='client.34184 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541909.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:54:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:11 np0005541914.localdomain ceph-mon[288526]: Removed label _admin from host np0005541909.localdomain
Dec 02 09:54:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:54:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:54:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:54:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:54:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:54:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:54:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:54:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:54:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:54:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:54:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:54:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:54:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:54:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:54:12 np0005541914.localdomain sudo[290569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:54:12 np0005541914.localdomain sudo[290569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:12 np0005541914.localdomain sudo[290569]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:12 np0005541914.localdomain sudo[290587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:12 np0005541914.localdomain sudo[290587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:12 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541913 (monmap changed)...
Dec 02 09:54:12 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 02 09:54:12 np0005541914.localdomain ceph-mon[288526]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:12 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:12 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:12 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:54:12 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:13 np0005541914.localdomain podman[290622]: 
Dec 02 09:54:13 np0005541914.localdomain podman[290622]: 2025-12-02 09:54:13.084502113 +0000 UTC m=+0.078997874 container create 4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jepsen, GIT_CLEAN=True, io.buildah.version=1.41.4, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., release=1763362218, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-type=git, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7)
Dec 02 09:54:13 np0005541914.localdomain systemd[1]: Started libpod-conmon-4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093.scope.
Dec 02 09:54:13 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:54:13 np0005541914.localdomain podman[290622]: 2025-12-02 09:54:13.052614488 +0000 UTC m=+0.047110269 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:54:13 np0005541914.localdomain podman[290622]: 2025-12-02 09:54:13.15613848 +0000 UTC m=+0.150634251 container init 4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jepsen, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, architecture=x86_64, version=7, GIT_CLEAN=True, release=1763362218, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-type=git, description=Red Hat Ceph Storage 7, RELEASE=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.)
Dec 02 09:54:13 np0005541914.localdomain jovial_jepsen[290637]: 167 167
Dec 02 09:54:13 np0005541914.localdomain podman[290622]: 2025-12-02 09:54:13.166063514 +0000 UTC m=+0.160559275 container start 4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jepsen, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, distribution-scope=public, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_BRANCH=main, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z)
Dec 02 09:54:13 np0005541914.localdomain podman[290622]: 2025-12-02 09:54:13.167535748 +0000 UTC m=+0.162031559 container attach 4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jepsen, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, version=7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64)
Dec 02 09:54:13 np0005541914.localdomain systemd[1]: libpod-4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093.scope: Deactivated successfully.
Dec 02 09:54:13 np0005541914.localdomain podman[290622]: 2025-12-02 09:54:13.171776929 +0000 UTC m=+0.166272660 container died 4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jepsen, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, release=1763362218, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, ceph=True)
Dec 02 09:54:13 np0005541914.localdomain podman[290642]: 2025-12-02 09:54:13.262724296 +0000 UTC m=+0.079386225 container remove 4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_jepsen, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main)
Dec 02 09:54:13 np0005541914.localdomain systemd[1]: libpod-conmon-4d815163426320e9601f62fd2462b4d8788d28ea60c210450fd85dde6e0ca093.scope: Deactivated successfully.
Dec 02 09:54:13 np0005541914.localdomain sudo[290587]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:13 np0005541914.localdomain sudo[290658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:54:13 np0005541914.localdomain sudo[290658]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:13 np0005541914.localdomain sudo[290658]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:13 np0005541914.localdomain sudo[290676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:13 np0005541914.localdomain sudo[290676]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:13 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541914 (monmap changed)...
Dec 02 09:54:13 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 02 09:54:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 02 09:54:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:13 np0005541914.localdomain podman[290712]: 
Dec 02 09:54:13 np0005541914.localdomain podman[290712]: 2025-12-02 09:54:13.920669825 +0000 UTC m=+0.068669918 container create f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_carver, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, release=1763362218, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-type=git)
Dec 02 09:54:13 np0005541914.localdomain systemd[1]: Started libpod-conmon-f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936.scope.
Dec 02 09:54:13 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:54:13 np0005541914.localdomain podman[290712]: 2025-12-02 09:54:13.981966997 +0000 UTC m=+0.129967080 container init f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_carver, RELEASE=main, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:54:13 np0005541914.localdomain podman[290712]: 2025-12-02 09:54:13.991473708 +0000 UTC m=+0.139473811 container start f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_carver, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main)
Dec 02 09:54:13 np0005541914.localdomain busy_carver[290727]: 167 167
Dec 02 09:54:13 np0005541914.localdomain podman[290712]: 2025-12-02 09:54:13.991755037 +0000 UTC m=+0.139755130 container attach f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_carver, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, GIT_BRANCH=main, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, RELEASE=main)
Dec 02 09:54:13 np0005541914.localdomain systemd[1]: libpod-f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936.scope: Deactivated successfully.
Dec 02 09:54:13 np0005541914.localdomain podman[290712]: 2025-12-02 09:54:13.994279804 +0000 UTC m=+0.142279927 container died f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_carver, name=rhceph, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:54:13 np0005541914.localdomain podman[290712]: 2025-12-02 09:54:13.89498375 +0000 UTC m=+0.042983843 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:54:14 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-b9ab0ceb23e7c8e86a9a8169cd8fdc71fff65f33366517cca8b93fbb82cb94c0-merged.mount: Deactivated successfully.
Dec 02 09:54:14 np0005541914.localdomain podman[290732]: 2025-12-02 09:54:14.088626825 +0000 UTC m=+0.086922606 container remove f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_carver, ceph=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, GIT_CLEAN=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218)
Dec 02 09:54:14 np0005541914.localdomain systemd[1]: libpod-conmon-f8c8ff9d300da94eb76a38105be9851a2fbe1edd16aeb17a815d2c7534409936.scope: Deactivated successfully.
Dec 02 09:54:14 np0005541914.localdomain sudo[290676]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:14 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:54:14 np0005541914.localdomain sudo[290755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:54:14 np0005541914.localdomain sudo[290755]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:14 np0005541914.localdomain sudo[290755]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:14 np0005541914.localdomain sudo[290773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:14 np0005541914.localdomain sudo[290773]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:14 np0005541914.localdomain ceph-mon[288526]: pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:14 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.1 (monmap changed)...
Dec 02 09:54:14 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:54:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 02 09:54:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:14 np0005541914.localdomain podman[290807]: 
Dec 02 09:54:14 np0005541914.localdomain podman[290807]: 2025-12-02 09:54:14.925855901 +0000 UTC m=+0.077535100 container create 20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_euler, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, version=7, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, release=1763362218)
Dec 02 09:54:14 np0005541914.localdomain systemd[1]: Started libpod-conmon-20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e.scope.
Dec 02 09:54:14 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:54:14 np0005541914.localdomain podman[290807]: 2025-12-02 09:54:14.894385339 +0000 UTC m=+0.046064568 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:54:14 np0005541914.localdomain podman[290807]: 2025-12-02 09:54:14.995299502 +0000 UTC m=+0.146978701 container init 20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_euler, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1763362218, GIT_CLEAN=True, RELEASE=main, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, CEPH_POINT_RELEASE=, version=7)
Dec 02 09:54:15 np0005541914.localdomain nice_euler[290821]: 167 167
Dec 02 09:54:15 np0005541914.localdomain podman[290807]: 2025-12-02 09:54:15.004824863 +0000 UTC m=+0.156504042 container start 20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_euler, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:54:15 np0005541914.localdomain systemd[1]: libpod-20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e.scope: Deactivated successfully.
Dec 02 09:54:15 np0005541914.localdomain podman[290807]: 2025-12-02 09:54:15.008385212 +0000 UTC m=+0.160064421 container attach 20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_euler, version=7, GIT_CLEAN=True, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=)
Dec 02 09:54:15 np0005541914.localdomain podman[290807]: 2025-12-02 09:54:15.01159665 +0000 UTC m=+0.163275919 container died 20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_euler, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, release=1763362218, io.buildah.version=1.41.4)
Dec 02 09:54:15 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c7163d93bb4bf6f371b0e4c4358728f4668ecc9afbd8d8214956e22beed7c7e2-merged.mount: Deactivated successfully.
Dec 02 09:54:15 np0005541914.localdomain podman[290826]: 2025-12-02 09:54:15.111056348 +0000 UTC m=+0.088620618 container remove 20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nice_euler, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, build-date=2025-11-26T19:44:28Z, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, version=7, name=rhceph, RELEASE=main, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:54:15 np0005541914.localdomain systemd[1]: libpod-conmon-20ff04a4a756c042c1dca3f9556a2f1b0e3941acf25051d77bcb5cf15d13180e.scope: Deactivated successfully.
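The container create/init/start/attach/died/remove sequence above is cephadm running a throwaway container from the Ceph image; the "167 167" printed by nice_euler is the uid/gid of the ceph user inside that image. A rough manual equivalent, assuming cephadm is probing the ownership of /var/lib/ceph inside the image (which is how it normally learns which uid/gid to run daemons as):

    # short-lived probe container; prints "167 167" for this image
    podman run --rm --entrypoint stat \
        registry.redhat.io/rhceph/rhceph-7-rhel9:latest \
        -c '%u %g' /var/lib/ceph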
Dec 02 09:54:15 np0005541914.localdomain sudo[290773]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:15 np0005541914.localdomain sudo[290850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:54:15 np0005541914.localdomain sudo[290850]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.436 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain sudo[290850]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.437 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:54:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:54:15 np0005541914.localdomain sudo[290868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:15 np0005541914.localdomain sudo[290868]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:15 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.4 (monmap changed)...
Dec 02 09:54:15 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:54:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:54:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
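The auth call dispatched just above creates (or returns) the keyring for the MDS daemon with the standard MDS capabilities. Its direct CLI equivalent, with the entity and caps copied from the audited command, would be:

    ceph auth get-or-create mds.mds.np0005541914.sqgqkj \
        mon 'profile mds' \
        osd 'allow rw tag cephfs *=*' \
        mds 'allow'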
Dec 02 09:54:15 np0005541914.localdomain podman[290903]: 
Dec 02 09:54:15 np0005541914.localdomain podman[290903]: 2025-12-02 09:54:15.944766045 +0000 UTC m=+0.076565030 container create 791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_yonath, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Dec 02 09:54:15 np0005541914.localdomain systemd[1]: Started libpod-conmon-791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f.scope.
Dec 02 09:54:15 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:54:16 np0005541914.localdomain podman[290903]: 2025-12-02 09:54:16.01169449 +0000 UTC m=+0.143493465 container init 791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_yonath, release=1763362218, architecture=x86_64, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:54:16 np0005541914.localdomain podman[290903]: 2025-12-02 09:54:15.913670856 +0000 UTC m=+0.045469901 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:54:16 np0005541914.localdomain podman[290903]: 2025-12-02 09:54:16.018604541 +0000 UTC m=+0.150403516 container start 791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_yonath, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, release=1763362218, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7)
Dec 02 09:54:16 np0005541914.localdomain podman[290903]: 2025-12-02 09:54:16.019051754 +0000 UTC m=+0.150850739 container attach 791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_yonath, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, ceph=True, GIT_CLEAN=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, name=rhceph)
Dec 02 09:54:16 np0005541914.localdomain cool_yonath[290918]: 167 167
Dec 02 09:54:16 np0005541914.localdomain systemd[1]: libpod-791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f.scope: Deactivated successfully.
Dec 02 09:54:16 np0005541914.localdomain podman[290903]: 2025-12-02 09:54:16.022848611 +0000 UTC m=+0.154647616 container died 791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_yonath, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_CLEAN=True, GIT_BRANCH=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, architecture=x86_64)
Dec 02 09:54:16 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-18259d2b484ac379674b8964c1b8161277c44a0b1fd19fa51ffcfda950bba7e7-merged.mount: Deactivated successfully.
Dec 02 09:54:16 np0005541914.localdomain podman[290923]: 2025-12-02 09:54:16.098640536 +0000 UTC m=+0.071895078 container remove 791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_yonath, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, release=1763362218, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.41.4, ceph=True, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:54:16 np0005541914.localdomain systemd[1]: libpod-conmon-791d3ec3be6c26f754a150d64c96b8f89d3a153c4d1ce71a2cb5d2e8a9c2956f.scope: Deactivated successfully.
Dec 02 09:54:16 np0005541914.localdomain sudo[290868]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:16 np0005541914.localdomain sudo[290937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:54:16 np0005541914.localdomain sudo[290937]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:16 np0005541914.localdomain sudo[290937]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:16 np0005541914.localdomain sudo[290955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:16 np0005541914.localdomain sudo[290955]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
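Every reconfiguration in this section runs the same pair of sudo steps: resolve python3, then re-invoke the cephadm copy stored under the cluster's /var/lib/ceph/<fsid>/ directory in "_orch deploy" mode. Written out as one command, with the arguments copied from the audit lines (the daemon spec itself is handed to cephadm separately and is not visible in the sudo log):

    sudo /bin/python3 \
        /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 \
        --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest \
        --timeout 895 \
        _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074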
Dec 02 09:54:16 np0005541914.localdomain podman[290990]: 
Dec 02 09:54:16 np0005541914.localdomain podman[290990]: 2025-12-02 09:54:16.771879591 +0000 UTC m=+0.060672555 container create 236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_faraday, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, release=1763362218)
Dec 02 09:54:16 np0005541914.localdomain systemd[1]: Started libpod-conmon-236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754.scope.
Dec 02 09:54:16 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:54:16 np0005541914.localdomain ceph-mon[288526]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:16 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 02 09:54:16 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 02 09:54:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:54:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:54:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
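The same keyring step now repeats for the local mgr before it is redeployed. CLI equivalent of the audited call, entity and caps as logged:

    ceph auth get-or-create mgr.np0005541914.lljzmk \
        mon 'profile mgr' \
        osd 'allow *' \
        mds 'allow *'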
Dec 02 09:54:16 np0005541914.localdomain podman[290990]: 2025-12-02 09:54:16.83009427 +0000 UTC m=+0.118887234 container init 236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_faraday, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_CLEAN=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, ceph=True, RELEASE=main, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:54:16 np0005541914.localdomain podman[290990]: 2025-12-02 09:54:16.743566016 +0000 UTC m=+0.032359030 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:54:16 np0005541914.localdomain podman[290990]: 2025-12-02 09:54:16.84384435 +0000 UTC m=+0.132637324 container start 236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_faraday, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, ceph=True, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.tags=rhceph ceph, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.buildah.version=1.41.4, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 02 09:54:16 np0005541914.localdomain podman[290990]: 2025-12-02 09:54:16.844088157 +0000 UTC m=+0.132881131 container attach 236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_faraday, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, release=1763362218, architecture=x86_64, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, name=rhceph, ceph=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_BRANCH=main)
Dec 02 09:54:16 np0005541914.localdomain jovial_faraday[291005]: 167 167
Dec 02 09:54:16 np0005541914.localdomain systemd[1]: libpod-236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754.scope: Deactivated successfully.
Dec 02 09:54:16 np0005541914.localdomain podman[290990]: 2025-12-02 09:54:16.84679552 +0000 UTC m=+0.135588484 container died 236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_faraday, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=)
Dec 02 09:54:16 np0005541914.localdomain podman[291010]: 2025-12-02 09:54:16.923987447 +0000 UTC m=+0.069050919 container remove 236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=jovial_faraday, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, release=1763362218, io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:54:16 np0005541914.localdomain systemd[1]: libpod-conmon-236c6771042e038c5271c307aa9d0514b96a988941797f7f1491a7e1d34c0754.scope: Deactivated successfully.
Dec 02 09:54:16 np0005541914.localdomain sudo[290955]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:17 np0005541914.localdomain sudo[291027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:54:17 np0005541914.localdomain systemd[1]: tmp-crun.CugJDd.mount: Deactivated successfully.
Dec 02 09:54:17 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-63d33f8bf0bcc57917b521a9eb24896aaf5cfeff03484b73cf77571d7ef36d56-merged.mount: Deactivated successfully.
Dec 02 09:54:17 np0005541914.localdomain sudo[291027]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:17 np0005541914.localdomain sudo[291027]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:17 np0005541914.localdomain sudo[291045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:17 np0005541914.localdomain sudo[291045]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:17 np0005541914.localdomain podman[291079]: 
Dec 02 09:54:17 np0005541914.localdomain podman[291079]: 2025-12-02 09:54:17.617656477 +0000 UTC m=+0.071541217 container create e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_northcutt, ceph=True, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, release=1763362218, architecture=x86_64, name=rhceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:54:17 np0005541914.localdomain systemd[1]: Started libpod-conmon-e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba.scope.
Dec 02 09:54:17 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:54:17 np0005541914.localdomain podman[291079]: 2025-12-02 09:54:17.679319691 +0000 UTC m=+0.133204431 container init e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_northcutt, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=1763362218, version=7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:54:17 np0005541914.localdomain podman[291079]: 2025-12-02 09:54:17.589396264 +0000 UTC m=+0.043281084 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:54:17 np0005541914.localdomain systemd[1]: tmp-crun.9UgLo4.mount: Deactivated successfully.
Dec 02 09:54:17 np0005541914.localdomain vigilant_northcutt[291094]: 167 167
Dec 02 09:54:17 np0005541914.localdomain systemd[1]: libpod-e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba.scope: Deactivated successfully.
Dec 02 09:54:17 np0005541914.localdomain podman[291079]: 2025-12-02 09:54:17.695261548 +0000 UTC m=+0.149146288 container start e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_northcutt, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, distribution-scope=public, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, version=7)
Dec 02 09:54:17 np0005541914.localdomain podman[291079]: 2025-12-02 09:54:17.695883997 +0000 UTC m=+0.149768737 container attach e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_northcutt, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, CEPH_POINT_RELEASE=, name=rhceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, com.redhat.component=rhceph-container, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:54:17 np0005541914.localdomain podman[291079]: 2025-12-02 09:54:17.698477856 +0000 UTC m=+0.152362646 container died e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_northcutt, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, Inc., architecture=x86_64, version=7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True)
Dec 02 09:54:17 np0005541914.localdomain podman[291099]: 2025-12-02 09:54:17.772964431 +0000 UTC m=+0.067949726 container remove e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_northcutt, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, io.buildah.version=1.41.4, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, ceph=True, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:54:17 np0005541914.localdomain systemd[1]: libpod-conmon-e8bd50f02079cd1d4228d2814bc77720717db6d48d67234ee5d68660393272ba.scope: Deactivated successfully.
Dec 02 09:54:17 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 02 09:54:17 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:54:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:54:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:54:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:17 np0005541914.localdomain sudo[291045]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-57a2763eff75fdd2a6aa47d39a665171b782e47ff990e71b1865281b27311483-merged.mount: Deactivated successfully.
Dec 02 09:54:18 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541914 (monmap changed)...
Dec 02 09:54:18 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
Dec 02 09:54:18 np0005541914.localdomain ceph-mon[288526]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
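The pgmap lines are the monitor's periodic placement-group summary; the same numbers can be read on demand, for example with:

    ceph pg stat    # e.g. "177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail"
    ceph -s         # full cluster status, including the same pgmap summary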
Dec 02 09:54:18 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:18 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:54:19 np0005541914.localdomain sudo[291116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:54:19 np0005541914.localdomain sudo[291116]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:19 np0005541914.localdomain sudo[291116]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:19 np0005541914.localdomain sudo[291134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:54:19 np0005541914.localdomain sudo[291134]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:19 np0005541914.localdomain sudo[291134]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:19 np0005541914.localdomain sudo[291152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:54:19 np0005541914.localdomain sudo[291152]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:19 np0005541914.localdomain sudo[291152]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:19 np0005541914.localdomain sudo[291170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:19 np0005541914.localdomain sudo[291170]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:19 np0005541914.localdomain sudo[291170]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:19 np0005541914.localdomain sudo[291188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:54:19 np0005541914.localdomain sudo[291188]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:19 np0005541914.localdomain sudo[291188]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:19 np0005541914.localdomain sudo[291222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:54:19 np0005541914.localdomain sudo[291222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:19 np0005541914.localdomain sudo[291222]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:19 np0005541914.localdomain sudo[291240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:54:19 np0005541914.localdomain sudo[291240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:19 np0005541914.localdomain sudo[291240]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:20 np0005541914.localdomain sudo[291258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:54:20 np0005541914.localdomain sudo[291258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:20 np0005541914.localdomain sudo[291258]: pam_unix(sudo:session): session closed for user root
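The run of sudo commands above is cephadm's staged write of /etc/ceph/ceph.conf: the new file is created under a per-fsid tree in /tmp, given root ownership and mode 0644 there, and only then moved over the live path in a single mv. Condensed, with the fsid taken from the log and install(1) standing in for the separate touch/chown/chmod steps:

    FSID=c7c8e171-a193-56fb-95fa-8879fcfa7074
    sudo mkdir -p /etc/ceph /tmp/cephadm-$FSID/etc/ceph
    sudo install -m 644 -o root -g root /dev/null /tmp/cephadm-$FSID/etc/ceph/ceph.conf.new
    # ... cephadm writes the generated minimal conf into the .new file ...
    sudo mv /tmp/cephadm-$FSID/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf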
Dec 02 09:54:20 np0005541914.localdomain sudo[291276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:54:20 np0005541914.localdomain sudo[291276]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:20 np0005541914.localdomain sudo[291276]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:20 np0005541914.localdomain sudo[291294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:54:20 np0005541914.localdomain sudo[291294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:20 np0005541914.localdomain sudo[291294]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:20 np0005541914.localdomain sudo[291312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:54:20 np0005541914.localdomain sudo[291312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:20 np0005541914.localdomain sudo[291312]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: Removing np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: Removing np0005541909.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: Removing np0005541909.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
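These Updating/Removing lines are cephadm reconciling which hosts should hold ceph.conf and the client.admin keyring: np0005541909 is losing its copies while the remaining hosts are refreshed. The placement rules and host labels that drive this can be inspected with:

    ceph orch client-keyring ls    # keyring placement rules (client.admin is typically tied to the _admin label)
    ceph orch host ls              # hosts with their labels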
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:20 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:20 np0005541914.localdomain sudo[291330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:20 np0005541914.localdomain sudo[291330]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:20 np0005541914.localdomain sudo[291330]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:20 np0005541914.localdomain sudo[291348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:54:20 np0005541914.localdomain sudo[291348]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:20 np0005541914.localdomain sudo[291348]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:20 np0005541914.localdomain sudo[291382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:54:20 np0005541914.localdomain sudo[291382]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:20 np0005541914.localdomain sudo[291382]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:20 np0005541914.localdomain sudo[291400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:54:20 np0005541914.localdomain sudo[291400]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:20 np0005541914.localdomain sudo[291400]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:20 np0005541914.localdomain sudo[291418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:54:20 np0005541914.localdomain sudo[291418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:20 np0005541914.localdomain sudo[291418]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:21 np0005541914.localdomain ceph-mon[288526]: Removing daemon mgr.np0005541909.kfesnk from np0005541909.localdomain -- ports [9283, 8765]
Dec 02 09:54:22 np0005541914.localdomain ceph-mon[288526]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:54:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:54:23 np0005541914.localdomain podman[291436]: 2025-12-02 09:54:23.094730937 +0000 UTC m=+0.092843877 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:54:23 np0005541914.localdomain systemd[1]: tmp-crun.qNhU6v.mount: Deactivated successfully.
Dec 02 09:54:23 np0005541914.localdomain podman[291437]: 2025-12-02 09:54:23.135024617 +0000 UTC m=+0.133781257 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec 02 09:54:23 np0005541914.localdomain podman[291436]: 2025-12-02 09:54:23.16094396 +0000 UTC m=+0.159056930 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:54:23 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:54:23 np0005541914.localdomain podman[291437]: 2025-12-02 09:54:23.175907117 +0000 UTC m=+0.174663757 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Dec 02 09:54:23 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:54:23 np0005541914.localdomain sudo[291478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:54:23 np0005541914.localdomain sudo[291478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:54:23 np0005541914.localdomain sudo[291478]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:54:23 np0005541914.localdomain podman[291496]: 2025-12-02 09:54:23.536772551 +0000 UTC m=+0.101515772 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Dec 02 09:54:23 np0005541914.localdomain podman[291496]: 2025-12-02 09:54:23.573093851 +0000 UTC m=+0.137837002 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 09:54:23 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:54:23 np0005541914.localdomain podman[291514]: 2025-12-02 09:54:23.616602539 +0000 UTC m=+0.068826983 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 09:54:23 np0005541914.localdomain podman[291514]: 2025-12-02 09:54:23.648788623 +0000 UTC m=+0.101013087 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 09:54:23 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:54:23 np0005541914.localdomain ceph-mon[288526]: from='client.26645 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005541909.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:54:23 np0005541914.localdomain ceph-mon[288526]: Added label _no_schedule to host np0005541909.localdomain
Dec 02 09:54:23 np0005541914.localdomain ceph-mon[288526]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541909.localdomain
Dec 02 09:54:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth rm", "entity": "mgr.np0005541909.kfesnk"} : dispatch
Dec 02 09:54:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005541909.kfesnk"}]': finished
Dec 02 09:54:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:54:24 np0005541914.localdomain systemd[1]: tmp-crun.EzEoFd.mount: Deactivated successfully.
Dec 02 09:54:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:54:24 np0005541914.localdomain ceph-mon[288526]: Removing key for mgr.np0005541909.kfesnk
Dec 02 09:54:24 np0005541914.localdomain ceph-mon[288526]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:24 np0005541914.localdomain ceph-mon[288526]: from='client.26801 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005541909.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:54:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:54:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:25 np0005541914.localdomain sudo[291539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:54:25 np0005541914.localdomain sudo[291539]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:25 np0005541914.localdomain sudo[291539]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: Removing daemon crash.np0005541909 from np0005541909.localdomain -- ports []
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain"} : dispatch
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain"}]': finished
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth rm", "entity": "client.crash.np0005541909.localdomain"} : dispatch
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd='[{"prefix": "auth rm", "entity": "client.crash.np0005541909.localdomain"}]': finished
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:54:25 np0005541914.localdomain sudo[291557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:54:25 np0005541914.localdomain sudo[291557]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:25 np0005541914.localdomain sudo[291557]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:26 np0005541914.localdomain ceph-mon[288526]: from='client.34182 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005541909.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:54:26 np0005541914.localdomain ceph-mon[288526]: Removed host np0005541909.localdomain
Dec 02 09:54:26 np0005541914.localdomain ceph-mon[288526]: Removing key for client.crash.np0005541909.localdomain
Dec 02 09:54:26 np0005541914.localdomain ceph-mon[288526]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:54:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:54:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:54:27 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541910 (monmap changed)...
Dec 02 09:54:27 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain
Dec 02 09:54:27 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541910 (monmap changed)...
Dec 02 09:54:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:27 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541910 on np0005541910.localdomain
Dec 02 09:54:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:54:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:54:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:28 np0005541914.localdomain ceph-mon[288526]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:28 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)...
Dec 02 09:54:28 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain
Dec 02 09:54:28 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541911 (monmap changed)...
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:29 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 02 09:54:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:31 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541911 (monmap changed)...
Dec 02 09:54:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:54:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:31 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 02 09:54:32 np0005541914.localdomain ceph-mon[288526]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:32 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 02 09:54:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:54:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:32 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 02 09:54:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:54:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:54:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:54:33 np0005541914.localdomain podman[291575]: 2025-12-02 09:54:33.074813392 +0000 UTC m=+0.078009444 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:54:33 np0005541914.localdomain podman[291575]: 2025-12-02 09:54:33.085944663 +0000 UTC m=+0.089140665 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:54:33 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:54:33 np0005541914.localdomain podman[291576]: 2025-12-02 09:54:33.129506273 +0000 UTC m=+0.130603690 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vcs-type=git)
Dec 02 09:54:33 np0005541914.localdomain podman[291576]: 2025-12-02 09:54:33.164944486 +0000 UTC m=+0.166041923 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Dec 02 09:54:33 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:54:33 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.2 (monmap changed)...
Dec 02 09:54:33 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:54:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:54:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:54:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:54:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:54:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 09:54:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:54:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19174 "" "Go-http-client/1.1"
Dec 02 09:54:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:54:34 np0005541914.localdomain ceph-mon[288526]: from='client.34233 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:54:34 np0005541914.localdomain ceph-mon[288526]: Saving service mon spec with placement label:mon
Dec 02 09:54:34 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.5 (monmap changed)...
Dec 02 09:54:34 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:54:34 np0005541914.localdomain ceph-mon[288526]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:54:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:35 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 02 09:54:35 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 02 09:54:35 np0005541914.localdomain ceph-mon[288526]: from='client.34187 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541912", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:54:35 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:35 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:35 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:54:35 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:54:35 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:35 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5f1e0 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Dec 02 09:54:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:54:35 np0005541914.localdomain ceph-mon[288526]: paxos.2).electionLogic(32) init, last seen epoch 32
Dec 02 09:54:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:54:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:54:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:54:38 np0005541914.localdomain podman[291618]: 2025-12-02 09:54:38.070228699 +0000 UTC m=+0.078091387 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:54:38 np0005541914.localdomain podman[291618]: 2025-12-02 09:54:38.107101005 +0000 UTC m=+0.114963683 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 09:54:38 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:54:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:54:40 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:54:40 np0005541914.localdomain ceph-mon[288526]: paxos.2).electionLogic(35) init, last seen epoch 35, mid-election, bumping
Dec 02 09:54:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:54:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:54:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e8 handle_timecheck drop unexpected msg
Dec 02 09:54:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:54:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541910 calling monitor election
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541910 calling monitor election
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: monmap epoch 8
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:54:35.609159+0000
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541910
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005541913
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: osdmap e86: 6 total, 6 up, 6 in
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: mgrmap e19: np0005541911.adcgiw(active, since 55s), standbys: np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541909.kfesnk
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: Health check failed: 1/4 mons down, quorum np0005541911,np0005541910,np0005541914 (MON_DOWN)
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: overall HEALTH_OK
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541914,np0005541913 in quorum (ranks 0,1,2,3)
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: monmap epoch 8
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:54:35.609159+0000
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541910
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005541913
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: osdmap e86: 6 total, 6 up, 6 in
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: mgrmap e19: np0005541911.adcgiw(active, since 55s), standbys: np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541909.kfesnk
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: Health check cleared: MON_DOWN (was: 1/4 mons down, quorum np0005541911,np0005541910,np0005541914)
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: Cluster is now healthy
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: overall HEALTH_OK
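Editor's note: the block above is the monitor map the daemon prints once the election settles: epoch 8, four monitors with np0005541913 rejoining quorum as rank 3, each rank listed with a msgr v2 endpoint on port 3300 and a legacy v1 endpoint on port 6789, followed by the fsmap/osdmap/mgrmap summaries and the MON_DOWN health check that is raised and then cleared within the same second. A minimal Python sketch for pulling the rank lines out of journal output in exactly this layout (a throwaway parser for this capture, not a general Ceph tool):

import re

# Matches lines like:
#   2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
RANK_RE = re.compile(
    r"(?P<rank>\d+): \[v2:(?P<v2>[\d.]+:\d+)/\d+,v1:(?P<v1>[\d.]+:\d+)/\d+\] mon\.(?P<name>\S+)"
)

def parse_monmap_ranks(lines):
    """Yield (rank, v2_addr, v1_addr, mon_name) for every monmap rank line found."""
    for line in lines:
        m = RANK_RE.search(line)
        if m:
            yield int(m["rank"]), m["v2"], m["v1"], m["name"]

Fed the four rank lines above, it yields ranks 0 through 3 for np0005541911, np0005541910, np0005541914 and np0005541913.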
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:54:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:54:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:54:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:54:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:54:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:54:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:54:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:54:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:54:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:54:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:54:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:54:42 np0005541914.localdomain openstack_network_exporter[241816]: 
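Editor's note: the exporter errors above all stem from the same condition: openstack_network_exporter cannot find the control sockets it would use to talk to ovsdb-server and ovn-northd on this host, so every appctl-style call (including the dpif-netdev/pmd-* queries) fails before it is issued. A rough pre-flight check is sketched below; the run directories and the *.ctl naming are assumptions about a typical Open vSwitch / OVN install, nothing in this log confirms the exact paths:

import glob

# Assumed default socket locations; adjust to the actual deployment.
CANDIDATES = {
    "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
}

for daemon, pattern in CANDIDATES.items():
    hits = glob.glob(pattern)
    print(daemon, "->", hits[0] if hits else "no control socket found (matches the exporter error)")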
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.638083) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282638604, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 3141, "num_deletes": 517, "total_data_size": 9043259, "memory_usage": 9615240, "flush_reason": "Manual Compaction"}
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282679260, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 5540365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10066, "largest_seqno": 13202, "table_properties": {"data_size": 5527476, "index_size": 7666, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4165, "raw_key_size": 35681, "raw_average_key_size": 21, "raw_value_size": 5497233, "raw_average_value_size": 3327, "num_data_blocks": 331, "num_entries": 1652, "num_filter_entries": 1652, "num_deletions": 516, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669219, "oldest_key_time": 1764669219, "file_creation_time": 1764669282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 41221 microseconds, and 14932 cpu microseconds.
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.679310) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 5540365 bytes OK
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.679335) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.681167) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.681183) EVENT_LOG_v1 {"time_micros": 1764669282681179, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.681200) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 9027079, prev total WAL file size 9075868, number of live WAL files 2.
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.682591) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end)
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(5410KB)], [15(8873KB)]
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282682625, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 14627300, "oldest_snapshot_seqno": -1}
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 9820 keys, 12520093 bytes, temperature: kUnknown
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282761768, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 12520093, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12463332, "index_size": 31124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24581, "raw_key_size": 262018, "raw_average_key_size": 26, "raw_value_size": 12294228, "raw_average_value_size": 1251, "num_data_blocks": 1189, "num_entries": 9820, "num_filter_entries": 9820, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669282, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.762019) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12520093 bytes
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.764322) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.6 rd, 158.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.3, 8.7 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(4.9) write-amplify(2.3) OK, records in: 10902, records dropped: 1082 output_compression: NoCompression
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.764343) EVENT_LOG_v1 {"time_micros": 1764669282764333, "job": 6, "event": "compaction_finished", "compaction_time_micros": 79224, "compaction_time_cpu_micros": 25715, "output_level": 6, "num_output_files": 1, "total_output_size": 12520093, "num_input_records": 10902, "num_output_records": 9820, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282764960, "job": 6, "event": "table_file_deletion", "file_number": 17}
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669282765807, "job": 6, "event": "table_file_deletion", "file_number": 15}
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.682523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.765943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.765951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.765954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.765957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:42.765960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
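Editor's note: in the RocksDB block above, job 5 flushes a roughly 9 MB memtable to L0 table #17 (5,540,365 bytes) and job 6 immediately compacts that table together with the existing L6 table #15 into a new L6 table #18 of 12,520,093 bytes, after which both input files are deleted. The write-amplify(2.3) and read-write-amplify(4.9) figures in the compaction summary can be reproduced from the byte counts in the EVENT_LOG_v1 entries:

# Byte counts copied from the job 5/6 EVENT_LOG_v1 entries above.
l0_input = 5_540_365        # table 17, the freshly flushed L0 file
total_input = 14_627_300    # input_data_size from compaction_started (L0 + L6 inputs)
output = 12_520_093         # table 18, the new L6 file

write_amp = output / l0_input                       # bytes written per byte entering via L0
read_write_amp = (total_input + output) / l0_input  # bytes read + written per byte entering via L0

print(f"write-amplify      {write_amp:.1f}")        # prints 2.3
print(f"read-write-amplify {read_write_amp:.1f}")   # prints 4.9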
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.0 (monmap changed)...
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 02 09:54:42 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:43 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.3 (monmap changed)...
Dec 02 09:54:43 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:54:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:54:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:54:44 np0005541914.localdomain ceph-mon[288526]: pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:44 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:54:44 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:54:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:54:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:54:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:45 np0005541914.localdomain sudo[291637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:54:45 np0005541914.localdomain sudo[291637]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:45 np0005541914.localdomain sudo[291637]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:45 np0005541914.localdomain sudo[291655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:45 np0005541914.localdomain sudo[291655]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:45 np0005541914.localdomain podman[291691]: 
Dec 02 09:54:45 np0005541914.localdomain podman[291691]: 2025-12-02 09:54:45.708605458 +0000 UTC m=+0.074176536 container create 00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_sanderson, GIT_BRANCH=main, io.buildah.version=1.41.4, RELEASE=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph)
Dec 02 09:54:45 np0005541914.localdomain systemd[1]: Started libpod-conmon-00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea.scope.
Dec 02 09:54:45 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:54:45 np0005541914.localdomain podman[291691]: 2025-12-02 09:54:45.677104266 +0000 UTC m=+0.042675394 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:54:45 np0005541914.localdomain podman[291691]: 2025-12-02 09:54:45.786139757 +0000 UTC m=+0.151710835 container init 00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_sanderson, io.buildah.version=1.41.4, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-type=git, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:54:45 np0005541914.localdomain systemd[1]: tmp-crun.aiMjCk.mount: Deactivated successfully.
Dec 02 09:54:45 np0005541914.localdomain podman[291691]: 2025-12-02 09:54:45.800864326 +0000 UTC m=+0.166435404 container start 00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_sanderson, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, version=7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, GIT_BRANCH=main, io.openshift.expose-services=, release=1763362218, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:54:45 np0005541914.localdomain podman[291691]: 2025-12-02 09:54:45.801489996 +0000 UTC m=+0.167061084 container attach 00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_sanderson, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.buildah.version=1.41.4, architecture=x86_64, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git)
Dec 02 09:54:45 np0005541914.localdomain hungry_sanderson[291707]: 167 167
Dec 02 09:54:45 np0005541914.localdomain systemd[1]: libpod-00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea.scope: Deactivated successfully.
Dec 02 09:54:45 np0005541914.localdomain podman[291691]: 2025-12-02 09:54:45.806270662 +0000 UTC m=+0.171841750 container died 00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_sanderson, GIT_CLEAN=True, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:54:45 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 02 09:54:45 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:54:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:54:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:45 np0005541914.localdomain podman[291712]: 2025-12-02 09:54:45.887583136 +0000 UTC m=+0.072443755 container remove 00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_sanderson, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.buildah.version=1.41.4, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, version=7, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, vendor=Red Hat, Inc.)
Dec 02 09:54:45 np0005541914.localdomain systemd[1]: libpod-conmon-00b63c321547e0589d0a616fc2cdd17d1e172d2a43507dee742c194a492521ea.scope: Deactivated successfully.
Dec 02 09:54:45 np0005541914.localdomain sudo[291655]: pam_unix(sudo:session): session closed for user root
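Editor's note: the sudo/podman burst above is one cephadm pass: cephadm is invoked as ceph-admin via sudo with _orch deploy against the cluster fsid, and it launches a short-lived rhceph-7-rhel9 container that goes create, init, start, attach, died and remove within a fraction of a second. The container's only output is "167 167", which in Red Hat Ceph container images is typically the ceph user's uid and gid (the log itself does not label the two numbers). The same pattern repeats below as the remaining daemons are reconfigured. A sketch that groups these podman journal events by container ID so each short-lived container reads as one lifecycle record (the regex assumes the exact event wording shown here):

import re
from collections import defaultdict

EVENT_RE = re.compile(
    r"container (?P<event>create|init|start|attach|died|remove) (?P<cid>[0-9a-f]{64})"
)

def container_lifecycles(lines):
    """Map container ID -> ordered list of podman lifecycle events seen in the journal."""
    events = defaultdict(list)
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            events[m["cid"]].append(m["event"])
    return events

Applied to the lines above it would show container 00b63c32... going create, init, start, attach, died, remove, i.e. a container that did its one job and exited immediately.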
Dec 02 09:54:46 np0005541914.localdomain sudo[291728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:54:46 np0005541914.localdomain sudo[291728]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:46 np0005541914.localdomain sudo[291728]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:46 np0005541914.localdomain sudo[291746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:46 np0005541914.localdomain sudo[291746]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:46 np0005541914.localdomain podman[291780]: 
Dec 02 09:54:46 np0005541914.localdomain podman[291780]: 2025-12-02 09:54:46.571371893 +0000 UTC m=+0.061561651 container create 57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_shockley, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, com.redhat.component=rhceph-container, architecture=x86_64, version=7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public)
Dec 02 09:54:46 np0005541914.localdomain systemd[1]: Started libpod-conmon-57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67.scope.
Dec 02 09:54:46 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:54:46 np0005541914.localdomain podman[291780]: 2025-12-02 09:54:46.633477891 +0000 UTC m=+0.123667569 container init 57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_shockley, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, name=rhceph, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph)
Dec 02 09:54:46 np0005541914.localdomain podman[291780]: 2025-12-02 09:54:46.643209828 +0000 UTC m=+0.133399506 container start 57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_shockley, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 02 09:54:46 np0005541914.localdomain podman[291780]: 2025-12-02 09:54:46.643478876 +0000 UTC m=+0.133668584 container attach 57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_shockley, GIT_CLEAN=True, distribution-scope=public, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_BRANCH=main, version=7, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:54:46 np0005541914.localdomain objective_shockley[291796]: 167 167
Dec 02 09:54:46 np0005541914.localdomain systemd[1]: libpod-57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67.scope: Deactivated successfully.
Dec 02 09:54:46 np0005541914.localdomain podman[291780]: 2025-12-02 09:54:46.644925461 +0000 UTC m=+0.135115199 container died 57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_shockley, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Dec 02 09:54:46 np0005541914.localdomain podman[291780]: 2025-12-02 09:54:46.54766805 +0000 UTC m=+0.037857758 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:54:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ac5900ab8ffae1918adf6b8e816c3760e45171d74d4508e28545bac0e0560d3b-merged.mount: Deactivated successfully.
Dec 02 09:54:46 np0005541914.localdomain systemd[1]: tmp-crun.yMFp4C.mount: Deactivated successfully.
Dec 02 09:54:46 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c2e651cc5af721df3b5c00d846ce741cc00f245d0bc892a81f8435ea41956d06-merged.mount: Deactivated successfully.
Dec 02 09:54:46 np0005541914.localdomain podman[291801]: 2025-12-02 09:54:46.748707541 +0000 UTC m=+0.089572277 container remove 57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_shockley, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, distribution-scope=public, ceph=True, io.buildah.version=1.41.4, name=rhceph, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, architecture=x86_64, release=1763362218, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main)
Dec 02 09:54:46 np0005541914.localdomain systemd[1]: libpod-conmon-57d2a861d02d89f5ad16cb4d8f75bf22a0b7686bfa32ad405e7029dc813fca67.scope: Deactivated successfully.
Dec 02 09:54:46 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541914 (monmap changed)...
Dec 02 09:54:46 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 02 09:54:46 np0005541914.localdomain ceph-mon[288526]: pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 02 09:54:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:46 np0005541914.localdomain sudo[291746]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:47 np0005541914.localdomain sudo[291824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:54:47 np0005541914.localdomain sudo[291824]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:47 np0005541914.localdomain sudo[291824]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:47 np0005541914.localdomain sudo[291842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:47 np0005541914.localdomain sudo[291842]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:47 np0005541914.localdomain podman[291876]: 
Dec 02 09:54:47 np0005541914.localdomain podman[291876]: 2025-12-02 09:54:47.535328739 +0000 UTC m=+0.059768126 container create 3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_euler, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, name=rhceph)
Dec 02 09:54:47 np0005541914.localdomain systemd[1]: Started libpod-conmon-3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1.scope.
Dec 02 09:54:47 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:54:47 np0005541914.localdomain podman[291876]: 2025-12-02 09:54:47.602395068 +0000 UTC m=+0.126834455 container init 3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_euler, com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-type=git, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vendor=Red Hat, Inc., RELEASE=main)
Dec 02 09:54:47 np0005541914.localdomain podman[291876]: 2025-12-02 09:54:47.505387935 +0000 UTC m=+0.029827352 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:54:47 np0005541914.localdomain podman[291876]: 2025-12-02 09:54:47.611769375 +0000 UTC m=+0.136208762 container start 3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_euler, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=)
Dec 02 09:54:47 np0005541914.localdomain podman[291876]: 2025-12-02 09:54:47.611956171 +0000 UTC m=+0.136395558 container attach 3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_euler, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.41.4, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, version=7, vcs-type=git, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main)
Dec 02 09:54:47 np0005541914.localdomain intelligent_euler[291891]: 167 167
Dec 02 09:54:47 np0005541914.localdomain systemd[1]: libpod-3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1.scope: Deactivated successfully.
Dec 02 09:54:47 np0005541914.localdomain podman[291876]: 2025-12-02 09:54:47.614518619 +0000 UTC m=+0.138958006 container died 3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_euler, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, name=rhceph, io.openshift.expose-services=, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7)
Dec 02 09:54:47 np0005541914.localdomain podman[291896]: 2025-12-02 09:54:47.704223799 +0000 UTC m=+0.082148861 container remove 3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=intelligent_euler, GIT_CLEAN=True, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7)
Dec 02 09:54:47 np0005541914.localdomain systemd[1]: libpod-conmon-3f7f772eb5e0f4b04bd1c02b7cd12f232a40cbdbb5c2d953c1b24fb1b61a3fc1.scope: Deactivated successfully.
Dec 02 09:54:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-dd30a025531b0c3b4a7362020c544394b19a7cbea7909b5c730e8590ebc0dc7b-merged.mount: Deactivated successfully.
Dec 02 09:54:47 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.1 (monmap changed)...
Dec 02 09:54:47 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:54:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 02 09:54:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:47 np0005541914.localdomain sudo[291842]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:48 np0005541914.localdomain sudo[291920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:54:48 np0005541914.localdomain sudo[291920]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:48 np0005541914.localdomain sudo[291920]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:48 np0005541914.localdomain sudo[291938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:48 np0005541914.localdomain sudo[291938]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:48 np0005541914.localdomain podman[291973]: 
Dec 02 09:54:48 np0005541914.localdomain podman[291973]: 2025-12-02 09:54:48.497863822 +0000 UTC m=+0.055375742 container create d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_spence, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., version=7, GIT_CLEAN=True, GIT_BRANCH=main, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, distribution-scope=public, ceph=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218)
Dec 02 09:54:48 np0005541914.localdomain systemd[1]: Started libpod-conmon-d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039.scope.
Dec 02 09:54:48 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:54:48 np0005541914.localdomain podman[291973]: 2025-12-02 09:54:48.536481052 +0000 UTC m=+0.093992972 container init d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_spence, vendor=Red Hat, Inc., name=rhceph, ceph=True, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, release=1763362218, CEPH_POINT_RELEASE=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64)
Dec 02 09:54:48 np0005541914.localdomain podman[291973]: 2025-12-02 09:54:48.544757125 +0000 UTC m=+0.102269045 container start d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_spence, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, version=7, RELEASE=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 09:54:48 np0005541914.localdomain podman[291973]: 2025-12-02 09:54:48.545534889 +0000 UTC m=+0.103047029 container attach d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_spence, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, distribution-scope=public, io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:54:48 np0005541914.localdomain happy_spence[291988]: 167 167
Dec 02 09:54:48 np0005541914.localdomain systemd[1]: libpod-d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039.scope: Deactivated successfully.
Dec 02 09:54:48 np0005541914.localdomain podman[291973]: 2025-12-02 09:54:48.547333134 +0000 UTC m=+0.104845074 container died d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_spence, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z)
Dec 02 09:54:48 np0005541914.localdomain podman[291973]: 2025-12-02 09:54:48.474118387 +0000 UTC m=+0.031630317 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:54:48 np0005541914.localdomain podman[291993]: 2025-12-02 09:54:48.618530648 +0000 UTC m=+0.062648474 container remove d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_spence, RELEASE=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container)
Dec 02 09:54:48 np0005541914.localdomain systemd[1]: libpod-conmon-d5f97e1f9c81014a48671ef823f2f5196876e8cbb141b1ac843a84261fe42039.scope: Deactivated successfully.
Dec 02 09:54:48 np0005541914.localdomain sudo[291938]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:48 np0005541914.localdomain systemd[1]: tmp-crun.Jz6Zw7.mount: Deactivated successfully.
Dec 02 09:54:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-2ce7e79137ded3fe60f33574aa99549fece0636d032717af0b44d503b046d447-merged.mount: Deactivated successfully.
Dec 02 09:54:48 np0005541914.localdomain sudo[292009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:54:48 np0005541914.localdomain sudo[292009]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:48 np0005541914.localdomain sudo[292009]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:48 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.4 (monmap changed)...
Dec 02 09:54:48 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:54:48 np0005541914.localdomain ceph-mon[288526]: pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:54:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:48 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.107:0/2742484200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:54:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:54:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:54:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:48 np0005541914.localdomain sudo[292027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:48 np0005541914.localdomain sudo[292027]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.316002) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289316048, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 520, "num_deletes": 256, "total_data_size": 621723, "memory_usage": 632584, "flush_reason": "Manual Compaction"}
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289321170, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 358219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13207, "largest_seqno": 13722, "table_properties": {"data_size": 355267, "index_size": 935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7391, "raw_average_key_size": 19, "raw_value_size": 349101, "raw_average_value_size": 921, "num_data_blocks": 39, "num_entries": 379, "num_filter_entries": 379, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669282, "oldest_key_time": 1764669282, "file_creation_time": 1764669289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 5233 microseconds, and 1960 cpu microseconds.
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.321238) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 358219 bytes OK
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.321265) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.323346) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.323389) EVENT_LOG_v1 {"time_micros": 1764669289323378, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.323416) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 618499, prev total WAL file size 618823, number of live WAL files 2.
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.325169) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353136' seq:72057594037927935, type:22 .. '6C6F676D0033373638' seq:0, type:0; will stop at (end)
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(349KB)], [18(11MB)]
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289325227, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 12878312, "oldest_snapshot_seqno": -1}
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 9669 keys, 12768818 bytes, temperature: kUnknown
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289403048, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 12768818, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12712041, "index_size": 31524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24197, "raw_key_size": 260025, "raw_average_key_size": 26, "raw_value_size": 12544563, "raw_average_value_size": 1297, "num_data_blocks": 1204, "num_entries": 9669, "num_filter_entries": 9669, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 09:54:49 np0005541914.localdomain podman[292063]: 
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.403439) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 12768818 bytes
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.405679) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.3 rd, 163.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.9 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(71.6) write-amplify(35.6) OK, records in: 10199, records dropped: 530 output_compression: NoCompression
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.405719) EVENT_LOG_v1 {"time_micros": 1764669289405703, "job": 8, "event": "compaction_finished", "compaction_time_micros": 77926, "compaction_time_cpu_micros": 41575, "output_level": 6, "num_output_files": 1, "total_output_size": 12768818, "num_input_records": 10199, "num_output_records": 9669, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289406079, "job": 8, "event": "table_file_deletion", "file_number": 20}
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669289408841, "job": 8, "event": "table_file_deletion", "file_number": 18}
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.325069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.408941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.408949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.408952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.408955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:54:49.408957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:54:49 np0005541914.localdomain podman[292063]: 2025-12-02 09:54:49.419523576 +0000 UTC m=+0.074724454 container create e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, name=rhceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:54:49 np0005541914.localdomain systemd[1]: Started libpod-conmon-e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17.scope.
Dec 02 09:54:49 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:54:49 np0005541914.localdomain podman[292063]: 2025-12-02 09:54:49.387804498 +0000 UTC m=+0.043005366 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:54:49 np0005541914.localdomain podman[292063]: 2025-12-02 09:54:49.489156074 +0000 UTC m=+0.144356902 container init e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.expose-services=)
Dec 02 09:54:49 np0005541914.localdomain podman[292063]: 2025-12-02 09:54:49.499475439 +0000 UTC m=+0.154676267 container start e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, version=7, com.redhat.component=rhceph-container, release=1763362218, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, name=rhceph, architecture=x86_64, io.buildah.version=1.41.4, io.openshift.expose-services=)
Dec 02 09:54:49 np0005541914.localdomain podman[292063]: 2025-12-02 09:54:49.499779398 +0000 UTC m=+0.154980226 container attach e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, RELEASE=main, build-date=2025-11-26T19:44:28Z, version=7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=1763362218, CEPH_POINT_RELEASE=)
Dec 02 09:54:49 np0005541914.localdomain zealous_carver[292078]: 167 167
Dec 02 09:54:49 np0005541914.localdomain systemd[1]: libpod-e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17.scope: Deactivated successfully.
Dec 02 09:54:49 np0005541914.localdomain podman[292063]: 2025-12-02 09:54:49.503032498 +0000 UTC m=+0.158233356 container died e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, GIT_BRANCH=main, name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:54:49 np0005541914.localdomain podman[292083]: 2025-12-02 09:54:49.603261749 +0000 UTC m=+0.087982109 container remove e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=zealous_carver, ceph=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_CLEAN=True, io.buildah.version=1.41.4, vcs-type=git, version=7, release=1763362218, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:54:49 np0005541914.localdomain systemd[1]: libpod-conmon-e3f0ee712ad7b3441b609c1065be85409a35426282aef2befa32f9dce7785f17.scope: Deactivated successfully.
Dec 02 09:54:49 np0005541914.localdomain sudo[292027]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-2d608890f6997853bd8310342bf75301ea75477298cb74acfb311ca4af0b73b7-merged.mount: Deactivated successfully.
Dec 02 09:54:49 np0005541914.localdomain sudo[292099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:54:49 np0005541914.localdomain sudo[292099]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:49 np0005541914.localdomain sudo[292099]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:49 np0005541914.localdomain sudo[292117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:54:49 np0005541914.localdomain sudo[292117]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.107:0/312733303' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:50 np0005541914.localdomain sudo[292117]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:50.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:54:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:50.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:54:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:50.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:54:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:50.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:54:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:50.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:54:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:50.679 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:54:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:50.679 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:54:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:50.679 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:54:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:50.680 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:54:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:50.680 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:54:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:51.124 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:54:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:51.273 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:54:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:51.274 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12050MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:54:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:51.274 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:54:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:51.274 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:54:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:51.351 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:54:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:51.352 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:54:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:51.369 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:54:51 np0005541914.localdomain sudo[292209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:54:51 np0005541914.localdomain sudo[292209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:51 np0005541914.localdomain sudo[292209]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:51 np0005541914.localdomain sudo[292227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:54:51 np0005541914.localdomain sudo[292227]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:51 np0005541914.localdomain sudo[292227]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e8 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3012433288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:54:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:51.813 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:54:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:51.820 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:54:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:51.840 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:54:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:51.843 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:54:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:51.844 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:54:51 np0005541914.localdomain sudo[292246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:54:51 np0005541914.localdomain sudo[292246]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:51 np0005541914.localdomain sudo[292246]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.108:0/955695829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:54:51 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.108:0/3012433288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:54:51 np0005541914.localdomain sudo[292265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:51 np0005541914.localdomain sudo[292265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:51 np0005541914.localdomain sudo[292265]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:51 np0005541914.localdomain sudo[292283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:54:51 np0005541914.localdomain sudo[292283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:51 np0005541914.localdomain sudo[292283]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:52 np0005541914.localdomain sudo[292317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:54:52 np0005541914.localdomain sudo[292317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:52 np0005541914.localdomain sudo[292317]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:52 np0005541914.localdomain sudo[292335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:54:52 np0005541914.localdomain sudo[292335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:52 np0005541914.localdomain sudo[292335]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:52 np0005541914.localdomain sudo[292353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:54:52 np0005541914.localdomain sudo[292353]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:52 np0005541914.localdomain sudo[292353]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:52 np0005541914.localdomain sudo[292371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:54:52 np0005541914.localdomain sudo[292371]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:52 np0005541914.localdomain sudo[292371]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:52 np0005541914.localdomain sudo[292389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:54:52 np0005541914.localdomain sudo[292389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:52 np0005541914.localdomain sudo[292389]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:52 np0005541914.localdomain sudo[292407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:54:52 np0005541914.localdomain sudo[292407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:52 np0005541914.localdomain sudo[292407]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:52 np0005541914.localdomain sudo[292425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:54:52 np0005541914.localdomain sudo[292425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:52 np0005541914.localdomain sudo[292425]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:52 np0005541914.localdomain sudo[292443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:54:52 np0005541914.localdomain sudo[292443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:52 np0005541914.localdomain sudo[292443]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:52 np0005541914.localdomain sudo[292477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:54:52 np0005541914.localdomain sudo[292477]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:52 np0005541914.localdomain sudo[292477]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:52 np0005541914.localdomain sudo[292495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:54:52 np0005541914.localdomain sudo[292495]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:52 np0005541914.localdomain sudo[292495]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:52 np0005541914.localdomain sudo[292513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:54:52 np0005541914.localdomain sudo[292513]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:52 np0005541914.localdomain sudo[292513]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:52.846 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:54:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:52.846 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:54:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:52.846 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:54:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:52.981 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:54:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:52.982 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:54:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:52.983 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:54:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:52.983 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='client.26696 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005541912.localdomain:172.18.0.103", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: Deploying daemon mon.np0005541912 on np0005541912.localdomain
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:54:53 np0005541914.localdomain sudo[292531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:54:53 np0005541914.localdomain sudo[292531]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:54:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:54:53 np0005541914.localdomain sudo[292531]: pam_unix(sudo:session): session closed for user root
Dec 02 09:54:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:54:53 np0005541914.localdomain systemd[1]: tmp-crun.NrjGrB.mount: Deactivated successfully.
Dec 02 09:54:53 np0005541914.localdomain podman[292550]: 2025-12-02 09:54:53.383786143 +0000 UTC m=+0.078522060 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 02 09:54:53 np0005541914.localdomain podman[292549]: 2025-12-02 09:54:53.409997564 +0000 UTC m=+0.104380340 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:54:53 np0005541914.localdomain podman[292549]: 2025-12-02 09:54:53.444812007 +0000 UTC m=+0.139194773 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:54:53 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:54:53 np0005541914.localdomain podman[292550]: 2025-12-02 09:54:53.469945395 +0000 UTC m=+0.164681262 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Dec 02 09:54:53 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:54:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:54:53.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:54:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:54:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:54:54 np0005541914.localdomain podman[292591]: 2025-12-02 09:54:54.082478497 +0000 UTC m=+0.083803271 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 09:54:54 np0005541914.localdomain podman[292591]: 2025-12-02 09:54:54.115844556 +0000 UTC m=+0.117169350 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 09:54:54 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:54:54 np0005541914.localdomain podman[292592]: 2025-12-02 09:54:54.125538672 +0000 UTC m=+0.123232416 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 02 09:54:54 np0005541914.localdomain podman[292592]: 2025-12-02 09:54:54.267968273 +0000 UTC m=+0.265662057 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec 02 09:54:54 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:54:54 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541910 (monmap changed)...
Dec 02 09:54:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:54:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:54 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain
Dec 02 09:54:54 np0005541914.localdomain ceph-mon[288526]: pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:54:54 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.106:0/2775911751' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:54:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:54:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 02 09:54:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.106:0/2644624687' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)...
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Dec 02 09:54:55 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x55910bb5ef20 mon_map magic: 0 from mon.3 v2:172.18.0.107:3300/0
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: paxos.2).electionLogic(38) init, last seen epoch 38
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:54:55 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541910 calling monitor election
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541910"} : dispatch
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541914,np0005541913 in quorum (ranks 0,1,2,3)
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: monmap epoch 9
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:54:55.352641+0000
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541910
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005541913
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: 4: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: osdmap e86: 6 total, 6 up, 6 in
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: mgrmap e19: np0005541911.adcgiw(active, since 75s), standbys: np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541909.kfesnk
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: Health check failed: 1/5 mons down, quorum np0005541911,np0005541910,np0005541914,np0005541913 (MON_DOWN)
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005541911,np0005541910,np0005541914,np0005541913
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005541911,np0005541910,np0005541914,np0005541913
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]:     mon.np0005541912 (rank 4) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum)
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:01 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541911 (monmap changed)...
Dec 02 09:55:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:01 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 02 09:55:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:55:02 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:55:02 np0005541914.localdomain ceph-mon[288526]: paxos.2).electionLogic(40) init, last seen epoch 40
Dec 02 09:55:02 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:55:02 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:55:02 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:55:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:55:03.165 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:55:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:55:03.166 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:55:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:55:03.166 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.2 (monmap changed)...
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: mon.np0005541910 calling monitor election
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 is new leader, mons np0005541911,np0005541910,np0005541914,np0005541913,np0005541912 in quorum (ranks 0,1,2,3,4)
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: monmap epoch 9
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:54:55.352641+0000
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541910
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005541913
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: 4: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: osdmap e86: 6 total, 6 up, 6 in
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: mgrmap e19: np0005541911.adcgiw(active, since 77s), standbys: np0005541914.lljzmk, np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541909.kfesnk
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005541911,np0005541910,np0005541914,np0005541913)
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: Cluster is now healthy
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: overall HEALTH_OK
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:03 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:55:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:55:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:55:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:55:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 09:55:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:55:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19170 "" "Go-http-client/1.1"
Dec 02 09:55:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:55:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:55:04 np0005541914.localdomain podman[292635]: 2025-12-02 09:55:04.116179098 +0000 UTC m=+0.108431543 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible)
Dec 02 09:55:04 np0005541914.localdomain podman[292634]: 2025-12-02 09:55:04.078022713 +0000 UTC m=+0.077373315 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:55:04 np0005541914.localdomain podman[292634]: 2025-12-02 09:55:04.159892354 +0000 UTC m=+0.159242956 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:55:04 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:55:04 np0005541914.localdomain podman[292635]: 2025-12-02 09:55:04.181092961 +0000 UTC m=+0.173345456 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Dec 02 09:55:04 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:55:04 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:55:04 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.5 (monmap changed)...
Dec 02 09:55:04 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:55:04 np0005541914.localdomain ceph-mon[288526]: pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:04 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:04 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:04 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:04 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:04 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.32:0/371118405' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 09:55:04 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.32:0/371118405' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 09:55:05 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 02 09:55:05 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 02 09:55:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:05 np0005541914.localdomain sshd[292675]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:55:06 np0005541914.localdomain sshd[292675]: Invalid user centos from 34.78.29.97 port 43536
Dec 02 09:55:06 np0005541914.localdomain sshd[292675]: Received disconnect from 34.78.29.97 port 43536:11: Bye Bye [preauth]
Dec 02 09:55:06 np0005541914.localdomain sshd[292675]: Disconnected from invalid user centos 34.78.29.97 port 43536 [preauth]
Dec 02 09:55:06 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 02 09:55:06 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 02 09:55:06 np0005541914.localdomain ceph-mon[288526]: pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:06 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 02 09:55:06 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 02 09:55:06 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.200:0/432160104' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 02 09:55:06 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:06 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:06 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:55:06 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:07 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.0 (monmap changed)...
Dec 02 09:55:07 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:55:07 np0005541914.localdomain ceph-mon[288526]: pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:07 np0005541914.localdomain ceph-mon[288526]: from='client.26891 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:07 np0005541914.localdomain ceph-mon[288526]: Reconfig service osd.default_drive_group
Dec 02 09:55:07 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:07 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:07 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:55:08 np0005541914.localdomain podman[292677]: 2025-12-02 09:55:08.536281769 +0000 UTC m=+0.053546866 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 09:55:08 np0005541914.localdomain podman[292677]: 2025-12-02 09:55:08.54872117 +0000 UTC m=+0.065986227 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd)
Dec 02 09:55:08 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.3 (monmap changed)...
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e87 e87: 6 total, 6 up, 6 in
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr handle_mgr_map Activating!
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr handle_mgr_map I am now activating
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541910"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541910"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541911"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541912"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).mds e16 all = 0
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).mds e16 all = 0
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).mds e16 all = 0
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541910.kzipdo", "id": "np0005541910.kzipdo"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541910.kzipdo", "id": "np0005541910.kzipdo"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541909.kfesnk", "id": "np0005541909.kfesnk"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541909.kfesnk", "id": "np0005541909.kfesnk"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).mds e16 all = 1
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: balancer
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Starting
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_09:55:09
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
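(Note: the balancer run above starts immediately after the mgr failover, while every PG is still reported as unknown, so the optimize plan is abandoned. A minimal sketch of polling the balancer from the CLI; the `ceph balancer status` subcommand is standard, the JSON field handling is illustrative.)

    # Illustrative sketch: query "ceph balancer status" and report whether it is active.
    # Assumes the ceph CLI and an admin keyring are available on the host.
    import json, subprocess

    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    status = json.loads(out)
    print("active:", status.get("active"), "mode:", status.get("mode"))
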
Dec 02 09:55:09 np0005541914.localdomain sshd[289362]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:55:09 np0005541914.localdomain systemd[1]: session-64.scope: Deactivated successfully.
Dec 02 09:55:09 np0005541914.localdomain systemd[1]: session-64.scope: Consumed 17.128s CPU time.
Dec 02 09:55:09 np0005541914.localdomain systemd-logind[760]: Session 64 logged out. Waiting for processes to exit.
Dec 02 09:55:09 np0005541914.localdomain systemd-logind[760]: Removed session 64.
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [cephadm WARNING root] removing stray HostCache host record np0005541909.localdomain.devices.0
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [WRN] : removing stray HostCache host record np0005541909.localdomain.devices.0
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch
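(Note: cephadm keeps its per-host inventory under config-key entries named mgr/cephadm/host.<hostname>...; the stray record for np0005541909 is deleted above via "config-key del". A minimal sketch of listing those entries; the key prefix is taken from the log, the filtering itself is illustrative.)

    # Illustrative: list cephadm's per-host cache entries in the config-key store.
    # "ceph config-key ls" prints a JSON array of key names.
    import json, subprocess

    keys = json.loads(subprocess.run(["ceph", "config-key", "ls"],
                                     capture_output=True, text=True, check=True).stdout)
    for key in keys:
        if key.startswith("mgr/cephadm/host."):
            print(key)
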
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: cephadm
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: crash
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: devicehealth
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: iostat
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: nfs
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: orchestrator
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: pg_autoscaler
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: progress
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Loading...
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f5c47a0a550>, <progress.module.GhostEvent object at 0x7f5c47a0a790>, <progress.module.GhostEvent object at 0x7f5c47a0a7c0>, <progress.module.GhostEvent object at 0x7f5c47a0a7f0>, <progress.module.GhostEvent object at 0x7f5c47a0a820>, <progress.module.GhostEvent object at 0x7f5c47a0a850>, <progress.module.GhostEvent object at 0x7f5c47a0a880>, <progress.module.GhostEvent object at 0x7f5c47a0a8b0>, <progress.module.GhostEvent object at 0x7f5c47a0a8e0>, <progress.module.GhostEvent object at 0x7f5c47a0a910>, <progress.module.GhostEvent object at 0x7f5c47a0a940>, <progress.module.GhostEvent object at 0x7f5c47a0a970>, <progress.module.GhostEvent object at 0x7f5c47a0a9a0>, <progress.module.GhostEvent object at 0x7f5c47a0a9d0>, <progress.module.GhostEvent object at 0x7f5c47a0aa00>, <progress.module.GhostEvent object at 0x7f5c47a0aa30>, <progress.module.GhostEvent object at 0x7f5c47a0aa60>, <progress.module.GhostEvent object at 0x7f5c47a0aa90>, <progress.module.GhostEvent object at 0x7f5c47a0aac0>, <progress.module.GhostEvent object at 0x7f5c47a0aaf0>, <progress.module.GhostEvent object at 0x7f5c47a0ab20>, <progress.module.GhostEvent object at 0x7f5c47a0ab50>, <progress.module.GhostEvent object at 0x7f5c47a0ab80>, <progress.module.GhostEvent object at 0x7f5c47a0abb0>, <progress.module.GhostEvent object at 0x7f5c47a0abe0>, <progress.module.GhostEvent object at 0x7f5c47a0ac10>, <progress.module.GhostEvent object at 0x7f5c47a0ac40>, <progress.module.GhostEvent object at 0x7f5c47a0ac70>, <progress.module.GhostEvent object at 0x7f5c47a0aca0>, <progress.module.GhostEvent object at 0x7f5c47a0acd0>, <progress.module.GhostEvent object at 0x7f5c47a0ad00>, <progress.module.GhostEvent object at 0x7f5c47a0ad30>, <progress.module.GhostEvent object at 0x7f5c47a0ad60>, <progress.module.GhostEvent object at 0x7f5c47a0ad90>, <progress.module.GhostEvent object at 0x7f5c47a0adc0>, <progress.module.GhostEvent object at 0x7f5c47a0adf0>, <progress.module.GhostEvent object at 0x7f5c47a0ae20>, <progress.module.GhostEvent object at 0x7f5c47a0ae50>, <progress.module.GhostEvent object at 0x7f5c47a0ae80>, <progress.module.GhostEvent object at 0x7f5c47a0aeb0>, <progress.module.GhostEvent object at 0x7f5c47a0aee0>, <progress.module.GhostEvent object at 0x7f5c47a0af10>, <progress.module.GhostEvent object at 0x7f5c47a0af40>, <progress.module.GhostEvent object at 0x7f5c47a0af70>, <progress.module.GhostEvent object at 0x7f5c47a0afa0>, <progress.module.GhostEvent object at 0x7f5c47a0afd0>, <progress.module.GhostEvent object at 0x7f5c47a19040>, <progress.module.GhostEvent object at 0x7f5c47a19070>, <progress.module.GhostEvent object at 0x7f5c47a190a0>, <progress.module.GhostEvent object at 0x7f5c47a190d0>] historic events
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [devicehealth INFO root] Starting
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Loaded OSDMap, ready.
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] recovery thread starting
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] starting setup
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: rbd_support
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: restful
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: status
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: telemetry
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [restful INFO root] server_addr: :: server_port: 8003
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [restful WARNING root] server not running: no certificate configured
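(Note: the restful module binds server_port 8003 but refuses to serve without a TLS certificate, hence the warning above. If the endpoint were actually wanted, the module's own self-signed-cert command can be used; a hedged sketch, not something the log shows being done.)

    # Illustrative: generate a self-signed certificate so the restful module can serve.
    import subprocess

    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)
    # The active mgr must reload the module (e.g. via "ceph mgr fail") to pick it up.
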
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: volumes
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] PerfHandler: starting
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.782+0000 7f5c3090d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.782+0000 7f5c3090d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.782+0000 7f5c3090d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.782+0000 7f5c3090d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.782+0000 7f5c3090d640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.784+0000 7f5c33112640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.784+0000 7f5c33112640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.784+0000 7f5c33112640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.784+0000 7f5c33112640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:09.784+0000 7f5c33112640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TaskHandler: starting
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} v 0)
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 02 09:55:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] setup complete
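(Note: during setup the rbd_support module loads mirror-snapshot, trash-purge and task schedules for each RBD pool named in the log (vms, volumes, images, backups) and finds none. A minimal sketch of inspecting those schedules with the standard rbd CLI; the pool list is copied from the log.)

    # Illustrative: show per-pool mirror snapshot and trash purge schedules.
    import subprocess

    for pool in ["vms", "volumes", "images", "backups"]:   # pool names from the log above
        for cmd in (["rbd", "mirror", "snapshot", "schedule", "ls", "--pool", pool],
                    ["rbd", "trash", "purge", "schedule", "ls", "--pool", pool]):
            out = subprocess.run(cmd, capture_output=True, text=True)
            print(pool, " ".join(cmd[1:5]) + ":", out.stdout.strip() or "<none>")
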
Dec 02 09:55:09 np0005541914.localdomain sshd[292837]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:55:09 np0005541914.localdomain sshd[292837]: Accepted publickey for ceph-admin from 192.168.122.108 port 50484 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' 
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.14184 172.18.0.105:0/1560580735' entity='mgr.np0005541911.adcgiw' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.200:0/2202206912' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: Activating manager daemon np0005541914.lljzmk
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: osdmap e87: 6 total, 6 up, 6 in
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: mgrmap e20: np0005541914.lljzmk(active, starting, since 0.198391s), standbys: np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541909.kfesnk
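(Note: the "mgr fail" issued by client.admin above forces a failover, and mgrmap e20 shows np0005541914.lljzmk taking over as active with the remaining mgrs as standbys. A minimal sketch of confirming the active mgr after such a failover; "ceph mgr stat" is the standard command, the printed fields are as emitted by current releases.)

    # Illustrative: report the active mgr and availability after a "ceph mgr fail".
    import json, subprocess

    stat = json.loads(subprocess.run(["ceph", "mgr", "stat", "--format", "json"],
                                     capture_output=True, text=True, check=True).stdout)
    print("active mgr:", stat.get("active_name"), "available:", stat.get("available"))
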
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541910"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541910.kzipdo", "id": "np0005541910.kzipdo"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541909.kfesnk", "id": "np0005541909.kfesnk"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: Manager daemon np0005541914.lljzmk is now available
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"}]': finished
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541909.localdomain.devices.0"}]': finished
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch
Dec 02 09:55:09 np0005541914.localdomain systemd-logind[760]: New session 65 of user ceph-admin.
Dec 02 09:55:10 np0005541914.localdomain systemd[1]: Started Session 65 of User ceph-admin.
Dec 02 09:55:10 np0005541914.localdomain sshd[292837]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 09:55:10 np0005541914.localdomain sudo[292841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:10 np0005541914.localdomain sudo[292841]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:10 np0005541914.localdomain sudo[292841]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:10 np0005541914.localdomain sudo[292859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:55:10 np0005541914.localdomain sudo[292859]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
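(Note: right after activating, cephadm's serve loop SSHes back in as ceph-admin and runs the staged cephadm binary with "ls" here, and "gather-facts" and "list-networks" further down, to refresh its host inventory. A minimal sketch of the equivalent manual check; field names are as reported by recent cephadm versions and are illustrative.)

    # Illustrative: list the ceph daemons deployed on this host, as cephadm's refresh does.
    import json, subprocess

    out = subprocess.run(["sudo", "cephadm", "ls"],
                         capture_output=True, text=True, check=True).stdout
    for daemon in json.loads(out):
        print(daemon.get("name"), daemon.get("state"))
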
Dec 02 09:55:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:55:10] ENGINE Bus STARTING
Dec 02 09:55:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:55:10] ENGINE Bus STARTING
Dec 02 09:55:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:55:10] ENGINE Serving on http://172.18.0.108:8765
Dec 02 09:55:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:55:10] ENGINE Serving on http://172.18.0.108:8765
Dec 02 09:55:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:55:10] ENGINE Serving on https://172.18.0.108:7150
Dec 02 09:55:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:55:10] ENGINE Serving on https://172.18.0.108:7150
Dec 02 09:55:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:55:10] ENGINE Client ('172.18.0.108', 34066) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 02 09:55:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:09:55:10] ENGINE Bus STARTED
Dec 02 09:55:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:55:10] ENGINE Client ('172.18.0.108', 34066) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 02 09:55:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:09:55:10] ENGINE Bus STARTED
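(Note: the cephadm module brings up a plain-HTTP endpoint on 172.18.0.108:8765 and an HTTPS service on 7150; the "Client ... lost" line only records a peer closing the TLS handshake early. A minimal sketch of probing the HTTP endpoint from the log; the address is copied from the lines above and the request path is illustrative.)

    # Illustrative: check that the cephadm HTTP endpoint logged above is answering.
    import urllib.request

    try:
        with urllib.request.urlopen("http://172.18.0.108:8765/", timeout=5) as resp:
            print("HTTP status:", resp.status)
    except Exception as exc:
        print("endpoint not reachable:", exc)
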
Dec 02 09:55:10 np0005541914.localdomain ceph-mon[288526]: removing stray HostCache host record np0005541909.localdomain.devices.0
Dec 02 09:55:10 np0005541914.localdomain ceph-mon[288526]: mgrmap e21: np0005541914.lljzmk(active, since 1.22617s), standbys: np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541909.kfesnk
Dec 02 09:55:10 np0005541914.localdomain podman[292972]: 2025-12-02 09:55:10.99150984 +0000 UTC m=+0.095196049 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, name=rhceph, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218)
Dec 02 09:55:11 np0005541914.localdomain podman[292972]: 2025-12-02 09:55:11.096003142 +0000 UTC m=+0.199689351 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.41.4, RELEASE=main, vendor=Red Hat, Inc.)
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0)
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0)
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:55:11 np0005541914.localdomain sudo[292859]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:11 np0005541914.localdomain sudo[293094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:11 np0005541914.localdomain sudo[293094]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:11 np0005541914.localdomain sudo[293094]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:11 np0005541914.localdomain ceph-mgr[287188]: [devicehealth INFO root] Check health
Dec 02 09:55:11 np0005541914.localdomain sudo[293122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:55:11 np0005541914.localdomain sudo[293122]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:55:10] ENGINE Bus STARTING
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:55:10] ENGINE Serving on http://172.18.0.108:8765
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:55:10] ENGINE Serving on https://172.18.0.108:7150
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:55:10] ENGINE Client ('172.18.0.108', 34066) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:55:10] ENGINE Bus STARTED
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:11 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:55:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:55:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:55:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:55:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:55:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:55:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:55:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:55:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:55:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:55:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:55:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:55:12 np0005541914.localdomain sudo[293122]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:12 np0005541914.localdomain sudo[293173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:12 np0005541914.localdomain sudo[293173]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:12 np0005541914.localdomain sudo[293173]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:12 np0005541914.localdomain sudo[293191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 02 09:55:12 np0005541914.localdomain sudo[293191]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0)
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} v 0)
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0)
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} v 0)
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:12 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 09:55:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 09:55:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 02 09:55:12 np0005541914.localdomain ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:55:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
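(Note: cephadm's autotuner splits the host memory budget across the local OSDs and tries to set osd_memory_target to 877246668 bytes, the "836.6M" it logs (877246668 / 2^20 ≈ 836.6 MiB), but the option has a hard minimum of 939524096 bytes (896 MiB), so the set is rejected and only the existing per-OSD/per-host overrides are removed. A minimal sketch of the arithmetic plus, as a hypothetical follow-up not shown in the log, disabling autotuning for the affected host.)

    # Illustrative: reproduce the rejection check, and optionally silence the autotuner.
    import subprocess

    computed = 877246668                 # value cephadm tried to set (from the log)
    minimum = 939524096                  # osd_memory_target hard minimum (896 MiB)
    print(f"{computed / 2**20:.1f} MiB requested, {minimum / 2**20:.0f} MiB minimum "
          f"-> rejected: {computed < minimum}")

    # Hypothetical remediation (not performed in the log): stop cephadm from autotuning
    # the OSDs on this host so the warning no longer recurs.
    subprocess.run(["ceph", "config", "set", "osd/host:np0005541912.localdomain",
                    "osd_memory_target_autotune", "false"], check=True)
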
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd/host:np0005541910", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain sudo[293191]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 09:55:13 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541910.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541910.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain sudo[293228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:55:13 np0005541914.localdomain sudo[293228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:13 np0005541914.localdomain sudo[293228]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:13 np0005541914.localdomain sudo[293246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:55:13 np0005541914.localdomain sudo[293246]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:13 np0005541914.localdomain sudo[293246]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:13 np0005541914.localdomain sudo[293264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:55:13 np0005541914.localdomain sudo[293264]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:13 np0005541914.localdomain sudo[293264]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:13 np0005541914.localdomain sudo[293282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:13 np0005541914.localdomain sudo[293282]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:13 np0005541914.localdomain sudo[293282]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:13 np0005541914.localdomain sudo[293300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:55:13 np0005541914.localdomain sudo[293300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:13 np0005541914.localdomain sudo[293300]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:13 np0005541914.localdomain sudo[293334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:55:13 np0005541914.localdomain sudo[293334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:13 np0005541914.localdomain sudo[293334]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:13 np0005541914.localdomain sudo[293352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:55:13 np0005541914.localdomain sudo[293352]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:13 np0005541914.localdomain sudo[293352]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:13 np0005541914.localdomain sudo[293370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain sudo[293370]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:13 np0005541914.localdomain sudo[293370]: pam_unix(sudo:session): session closed for user root
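(Annotation) The sudo lines above trace cephadm's staged file update for /etc/ceph/ceph.conf: create the target directory, build ceph.conf.new under /tmp/cephadm-<fsid>/..., fix ownership and mode (chown 0:0, chmod 644), then mv it over the destination so readers never see a half-written file. The Python sketch below shows the same write-then-rename pattern under simplified assumptions (it stages next to the destination so the final rename is atomic, and uses a demo path; it is not cephadm's own code):

import os
import tempfile

def install_file(content: bytes, dest: str, mode: int = 0o644) -> None:
    """Stage content in a temp file beside dest, set mode, then rename into place."""
    d = os.path.dirname(dest)
    os.makedirs(d, exist_ok=True)                      # mkdir -p <dir>
    fd, tmp = tempfile.mkstemp(dir=d, suffix=".new")   # touch <dest>.new (staged copy)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(content)
        os.chmod(tmp, mode)                            # chmod 644 <staged copy>
        os.replace(tmp, dest)                          # mv <staged copy> <dest>
    except BaseException:
        os.unlink(tmp)
        raise

install_file(b"[global]\n# demo content\n", "/tmp/demo/etc/ceph/ceph.conf")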
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain sudo[293388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:55:13 np0005541914.localdomain sudo[293388]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:13 np0005541914.localdomain sudo[293388]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:13 np0005541914.localdomain sudo[293406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:55:13 np0005541914.localdomain sudo[293406]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:13 np0005541914.localdomain sudo[293406]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:13 np0005541914.localdomain sudo[293424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:55:13 np0005541914.localdomain sudo[293424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:13 np0005541914.localdomain sudo[293424]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:14 np0005541914.localdomain sudo[293442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:14 np0005541914.localdomain sudo[293442]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:14 np0005541914.localdomain sudo[293442]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: mgrmap e22: np0005541914.lljzmk(active, since 3s), standbys: np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541909.kfesnk
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:14 np0005541914.localdomain sudo[293460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:55:14 np0005541914.localdomain sudo[293460]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:14 np0005541914.localdomain sudo[293460]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:14 np0005541914.localdomain sudo[293494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:55:14 np0005541914.localdomain sudo[293494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:14 np0005541914.localdomain sudo[293494]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain sudo[293512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:55:14 np0005541914.localdomain sudo[293512]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain sudo[293512]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541910.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541910.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain sudo[293530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:14 np0005541914.localdomain sudo[293530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:14 np0005541914.localdomain sudo[293530]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain sudo[293548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:55:14 np0005541914.localdomain sudo[293548]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:14 np0005541914.localdomain sudo[293548]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:14 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:55:14 np0005541914.localdomain sudo[293566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:55:14 np0005541914.localdomain sudo[293566]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:14 np0005541914.localdomain sudo[293566]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:14 np0005541914.localdomain sudo[293584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:55:14 np0005541914.localdomain sudo[293584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:14 np0005541914.localdomain sudo[293584]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:14 np0005541914.localdomain sudo[293602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:14 np0005541914.localdomain sudo[293602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:14 np0005541914.localdomain sudo[293602]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:14 np0005541914.localdomain sudo[293620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:55:14 np0005541914.localdomain sudo[293620]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:14 np0005541914.localdomain sudo[293620]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mgr.np0005541911.adcgiw 172.18.0.105:0/3351624532; not ready for session (expect reconnect)
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:14 np0005541914.localdomain sudo[293654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:55:14 np0005541914.localdomain sudo[293654]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:14 np0005541914.localdomain sudo[293654]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:14 np0005541914.localdomain sudo[293672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:55:14 np0005541914.localdomain sudo[293672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:14 np0005541914.localdomain sudo[293672]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:15 np0005541914.localdomain sudo[293690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain sudo[293690]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:15 np0005541914.localdomain sudo[293690]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:15 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain sudo[293708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:55:15 np0005541914.localdomain sudo[293708]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:15 np0005541914.localdomain sudo[293708]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:15 np0005541914.localdomain sudo[293726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:55:15 np0005541914.localdomain sudo[293726]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:15 np0005541914.localdomain sudo[293726]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:15 np0005541914.localdomain sudo[293744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:55:15 np0005541914.localdomain sudo[293744]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:15 np0005541914.localdomain sudo[293744]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:15 np0005541914.localdomain sudo[293762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:15 np0005541914.localdomain sudo[293762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:15 np0005541914.localdomain sudo[293762]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:15 np0005541914.localdomain sudo[293780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:55:15 np0005541914.localdomain sudo[293780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:15 np0005541914.localdomain sudo[293780]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: Standby manager daemon np0005541911.adcgiw started
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: Updating np0005541910.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain sudo[293814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:55:15 np0005541914.localdomain sudo[293814]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:15 np0005541914.localdomain sudo[293814]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0)
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541911.adcgiw", "id": "np0005541911.adcgiw"} v 0)
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541911.adcgiw", "id": "np0005541911.adcgiw"} : dispatch
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0)
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:55:15 np0005541914.localdomain sudo[293832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:55:15 np0005541914.localdomain sudo[293832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:15 np0005541914.localdomain sudo[293832]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:15 np0005541914.localdomain sudo[293850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:15 np0005541914.localdomain sudo[293850]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:15 np0005541914.localdomain sudo[293850]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:55:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 0 B/s wr, 22 op/s
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 09:55:15 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 96530081-b35f-4689-9f69-1623f6fb18d4 (Updating node-proxy deployment (+5 -> 5))
Dec 02 09:55:15 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 96530081-b35f-4689-9f69-1623f6fb18d4 (Updating node-proxy deployment (+5 -> 5))
Dec 02 09:55:15 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 96530081-b35f-4689-9f69-1623f6fb18d4 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 09:55:15 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
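(Annotation) The handle_command / audit "dispatch" pairs in this stretch are the active mgr driving the cluster over the mon command interface; the JSON payloads map onto ordinary ceph CLI invocations. A hedged sketch of a few equivalents for commands visible above (assumes a host with the client.admin keyring; the config/auth calls read or mutate live cluster state, so treat them as illustrations):

import json
import subprocess

def ceph(*args: str) -> str:
    """Run a ceph CLI command and return stdout."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout

ceph("config", "rm", "osd.4", "osd_memory_target")   # {"prefix": "config rm", "who": "osd.4", ...}
ceph("config", "generate-minimal-conf")              # {"prefix": "config generate-minimal-conf"}
ceph("auth", "get", "client.admin")                  # {"prefix": "auth get", "entity": "client.admin"}
destroyed = json.loads(ceph("osd", "tree", "destroyed", "--format", "json"))
print(len(destroyed.get("nodes", [])), "entries in the destroyed-OSD tree")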
Dec 02 09:55:16 np0005541914.localdomain sudo[293868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:55:16 np0005541914.localdomain sudo[293868]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:16 np0005541914.localdomain sudo[293868]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:16 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541910 (monmap changed)...
Dec 02 09:55:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541910 (monmap changed)...
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:16 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain
Dec 02 09:55:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541911.adcgiw", "id": "np0005541911.adcgiw"} : dispatch
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: mgrmap e23: np0005541914.lljzmk(active, since 6s), standbys: np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541909.kfesnk, np0005541911.adcgiw
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 0 B/s wr, 22 op/s
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541910.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0)
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0)
Dec 02 09:55:17 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541910.kzipdo (monmap changed)...
Dec 02 09:55:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541910.kzipdo (monmap changed)...
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:17 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain
Dec 02 09:55:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541910 (monmap changed)...
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541910 on np0005541910.localdomain
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 0 B/s wr, 16 op/s
Dec 02 09:55:18 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0)
Dec 02 09:55:18 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0)
Dec 02 09:55:18 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 02 09:55:18 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:55:18 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:18 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:18 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:55:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:55:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:55:19 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)...
Dec 02 09:55:19 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain
Dec 02 09:55:19 np0005541914.localdomain ceph-mon[288526]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 0 B/s wr, 16 op/s
Dec 02 09:55:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:55:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:19 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:55:19 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 09:55:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 09:55:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Dec 02 09:55:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:20 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:20 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:20 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:20 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 02 09:55:20 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:55:20 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:20 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:20 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:55:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:55:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34293 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:55:21 np0005541914.localdomain ceph-mon[288526]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Dec 02 09:55:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:55:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:21 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:55:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 02 09:55:22 np0005541914.localdomain ceph-mon[288526]: from='client.34293 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:55:22 np0005541914.localdomain ceph-mon[288526]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 02 09:55:23 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 02 09:55:23 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:23 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.26775 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:23 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Saving service mon spec with placement label:mon
Dec 02 09:55:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Dec 02 09:55:23 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 02 09:55:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:55:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:55:24 np0005541914.localdomain podman[293887]: 2025-12-02 09:55:24.073421172 +0000 UTC m=+0.076420302 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 09:55:24 np0005541914.localdomain podman[293887]: 2025-12-02 09:55:24.081485449 +0000 UTC m=+0.084484559 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm)
Dec 02 09:55:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:24 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:55:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:55:24 np0005541914.localdomain podman[293886]: 2025-12-02 09:55:24.20117778 +0000 UTC m=+0.203321695 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:55:24 np0005541914.localdomain podman[293886]: 2025-12-02 09:55:24.24281147 +0000 UTC m=+0.244955345 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:55:24 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
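[editor's note] The podman_exporter cycle above follows the same pattern; its config_data also shows the exporter published on host port 9882 ('ports': ['9882:9882']) and reading the podman socket via CONTAINER_HOST. A small sketch of scraping it locally; the /metrics path and the 'podman_' metric prefix are the usual Prometheus conventions and are assumptions here, not taken from the log:

#!/usr/bin/env python3
# Sketch only: fetch the exporter's metrics page on the published host port
# and print just the container-level series to keep the output short.
from urllib.request import urlopen

with urlopen("http://127.0.0.1:9882/metrics", timeout=5) as resp:
    body = resp.read().decode()

for line in body.splitlines():
    if line.startswith("podman_container"):
        print(line)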
Dec 02 09:55:24 np0005541914.localdomain systemd[1]: tmp-crun.wsgmaz.mount: Deactivated successfully.
Dec 02 09:55:24 np0005541914.localdomain podman[293918]: 2025-12-02 09:55:24.349471984 +0000 UTC m=+0.139609860 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 09:55:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:55:24 np0005541914.localdomain podman[293918]: 2025-12-02 09:55:24.368956659 +0000 UTC m=+0.159094555 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:55:24 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:55:24 np0005541914.localdomain podman[293945]: 2025-12-02 09:55:24.437873561 +0000 UTC m=+0.064054495 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec 02 09:55:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:55:24 np0005541914.localdomain podman[293945]: 2025-12-02 09:55:24.470825956 +0000 UTC m=+0.097006960 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 09:55:24 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541912 (monmap changed)...
Dec 02 09:55:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541912 (monmap changed)...
Dec 02 09:55:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 02 09:55:24 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:24 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:55:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 02 09:55:24 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:24 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:24 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 02 09:55:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
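[editor's note] The cephadm serve loop's mon-reconfigure sequence above is always the same three mon_commands: fetch the mon keyring, look up public_network, and regenerate a minimal ceph.conf before rewriting the daemon's files. A sketch of replaying them from an admin shell, assuming the ceph CLI and an admin keyring are available on this host:

#!/usr/bin/env python3
# Sketch only: each call below maps 1:1 to a mon_command prefix seen in the
# audit lines above.
import subprocess

def ceph(*args):
    return subprocess.run(["ceph", *args], capture_output=True,
                          text=True, check=True).stdout

print(ceph("auth", "get", "mon."))                     # prefix: auth get
print(ceph("config", "get", "mon", "public_network"))  # prefix: config get
print(ceph("config", "generate-minimal-conf"))         # prefix: config generate-minimal-conf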
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: from='client.26775 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: Saving service mon spec with placement label:mon
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:25 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:55:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:25 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:25 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:55:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
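[editor's note] For the mds reconfigure above, the caps field in the audit line is a flat array of alternating daemon-type/capability pairs. A sketch of the equivalent CLI call, with the entity and caps copied from the log and an admin keyring assumed:

#!/usr/bin/env python3
# Sketch only: passing the alternating (daemon-type, capability) pairs
# straight through to 'auth get-or-create' reproduces the mon_command above.
import subprocess

entity = "mds.mds.np0005541913.maexpe"
caps = ["mon", "profile mds",
        "osd", "allow rw tag cephfs *=*",
        "mds", "allow"]

out = subprocess.run(["ceph", "auth", "get-or-create", entity, *caps],
                     capture_output=True, text=True, check=True)
print(out.stdout)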
Dec 02 09:55:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 02 09:55:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34303 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541912", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
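[editor's note] The audit entry above is an orch ps query scoped to one mon daemon with JSON output, the form external tooling (here client.admin from 172.18.0.200) typically uses to poll daemon state. A sketch of issuing and parsing the same query, assuming the cephadm orchestrator module is enabled:

#!/usr/bin/env python3
# Sketch only: the flags mirror the JSON arguments in the audit line; the
# daemon_type/daemon_id fields are standard in 'orch ps --format json' output.
import json
import subprocess

out = subprocess.run(
    ["ceph", "orch", "ps", "--daemon-type", "mon",
     "--daemon-id", "np0005541912", "--format", "json"],
    capture_output=True, text=True, check=True)

for d in json.loads(out.stdout):
    print(f"{d['daemon_type']}.{d['daemon_id']}", d.get("status_desc", ""))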
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541912 (monmap changed)...
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: from='client.34303 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541912", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:55:26 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 02 09:55:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:26 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:26 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:55:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:55:27 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541913 (monmap changed)...
Dec 02 09:55:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541913 (monmap changed)...
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:27 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 02 09:55:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.200:0/555242505' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:55:28 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541914 (monmap changed)...
Dec 02 09:55:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541914 (monmap changed)...
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:28 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 02 09:55:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 02 09:55:28 np0005541914.localdomain sudo[293970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:28 np0005541914.localdomain sudo[293970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:28 np0005541914.localdomain sudo[293970]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:28 np0005541914.localdomain sudo[293988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:28 np0005541914.localdomain sudo[293988]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541913 (monmap changed)...
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:28 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:28 np0005541914.localdomain podman[294023]: 
Dec 02 09:55:28 np0005541914.localdomain podman[294023]: 2025-12-02 09:55:28.997439428 +0000 UTC m=+0.067652875 container create 29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_poitras, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, ceph=True)
Dec 02 09:55:29 np0005541914.localdomain systemd[1]: Started libpod-conmon-29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8.scope.
Dec 02 09:55:29 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:55:29 np0005541914.localdomain podman[294023]: 2025-12-02 09:55:28.96569189 +0000 UTC m=+0.035905367 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:55:29 np0005541914.localdomain podman[294023]: 2025-12-02 09:55:29.065149084 +0000 UTC m=+0.135362501 container init 29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_poitras, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, build-date=2025-11-26T19:44:28Z, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-type=git, release=1763362218, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:55:29 np0005541914.localdomain systemd[1]: tmp-crun.j6BuT7.mount: Deactivated successfully.
Dec 02 09:55:29 np0005541914.localdomain podman[294023]: 2025-12-02 09:55:29.076218781 +0000 UTC m=+0.146432208 container start 29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_poitras, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, ceph=True, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:55:29 np0005541914.localdomain podman[294023]: 2025-12-02 09:55:29.076399427 +0000 UTC m=+0.146612844 container attach 29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_poitras, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, com.redhat.component=rhceph-container, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4)
Dec 02 09:55:29 np0005541914.localdomain sweet_poitras[294038]: 167 167
Dec 02 09:55:29 np0005541914.localdomain systemd[1]: libpod-29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8.scope: Deactivated successfully.
Dec 02 09:55:29 np0005541914.localdomain podman[294023]: 2025-12-02 09:55:29.080124181 +0000 UTC m=+0.150337618 container died 29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_poitras, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main, distribution-scope=public, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, release=1763362218, version=7, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:55:29 np0005541914.localdomain podman[294043]: 2025-12-02 09:55:29.176896033 +0000 UTC m=+0.083711625 container remove 29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_poitras, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, io.openshift.tags=rhceph ceph, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, ceph=True)
Dec 02 09:55:29 np0005541914.localdomain systemd[1]: libpod-conmon-29286eb36dfbcc30c3dd5bf4f1921348815331865bb3663aa55211b4c509f1c8.scope: Deactivated successfully.
Dec 02 09:55:29 np0005541914.localdomain sudo[293988]: pam_unix(sudo:session): session closed for user root
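[editor's note] Each "_orch deploy" run above is preceded by a throwaway rhceph container (sweet_poitras here, gifted_raman and friendly_pike below) whose only output is "167 167" before it is removed. That is consistent with a probe of the ceph uid/gid inside the image before daemon files are written; the interpretation, the stat entrypoint, and the /var/lib/ceph path are assumptions, not taken verbatim from the log. A sketch under those assumptions:

#!/usr/bin/env python3
# Sketch only: run a throwaway container from the image used above and stat
# the ceph data directory to learn the in-image uid/gid.
import subprocess

IMAGE = "registry.redhat.io/rhceph/rhceph-7-rhel9:latest"

probe = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True)
uid, gid = probe.stdout.split()
print(f"ceph uid={uid} gid={gid}")  # 167/167 is the ceph user in these images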
Dec 02 09:55:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:29 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 02 09:55:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 02 09:55:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 02 09:55:29 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 02 09:55:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:29 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:29 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:55:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:55:29 np0005541914.localdomain sudo[294060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:29 np0005541914.localdomain sudo[294060]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:29 np0005541914.localdomain sudo[294060]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:55:29 np0005541914.localdomain sudo[294078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:29 np0005541914.localdomain sudo[294078]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:29 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541914 (monmap changed)...
Dec 02 09:55:29 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 02 09:55:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 02 09:55:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-75827950c3aa79aeb0b5fc736566075ba36b4baafa3f6e488c62ea62fd257b68-merged.mount: Deactivated successfully.
Dec 02 09:55:30 np0005541914.localdomain podman[294112]: 
Dec 02 09:55:30 np0005541914.localdomain podman[294112]: 2025-12-02 09:55:30.212386332 +0000 UTC m=+0.340387736 container create 89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_raman, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, CEPH_POINT_RELEASE=, release=1763362218, RELEASE=main, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:55:30 np0005541914.localdomain systemd[1]: Started libpod-conmon-89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2.scope.
Dec 02 09:55:30 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:55:30 np0005541914.localdomain podman[294112]: 2025-12-02 09:55:30.265379398 +0000 UTC m=+0.393380792 container init 89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_raman, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1763362218, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, version=7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 09:55:30 np0005541914.localdomain podman[294112]: 2025-12-02 09:55:30.275018162 +0000 UTC m=+0.403019586 container start 89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_raman, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 02 09:55:30 np0005541914.localdomain podman[294112]: 2025-12-02 09:55:30.275263139 +0000 UTC m=+0.403264533 container attach 89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_raman, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, version=7, name=rhceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.tags=rhceph ceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:55:30 np0005541914.localdomain gifted_raman[294127]: 167 167
Dec 02 09:55:30 np0005541914.localdomain systemd[1]: libpod-89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2.scope: Deactivated successfully.
Dec 02 09:55:30 np0005541914.localdomain podman[294112]: 2025-12-02 09:55:30.278876709 +0000 UTC m=+0.406878183 container died 89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_raman, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, architecture=x86_64, io.buildah.version=1.41.4, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_CLEAN=True)
Dec 02 09:55:30 np0005541914.localdomain podman[294112]: 2025-12-02 09:55:30.193176785 +0000 UTC m=+0.321178189 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:55:30 np0005541914.localdomain podman[294132]: 2025-12-02 09:55:30.349571397 +0000 UTC m=+0.062443477 container remove 89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_raman, release=1763362218, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, name=rhceph, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main)
Dec 02 09:55:30 np0005541914.localdomain systemd[1]: libpod-conmon-89eebb9f4b10e244d526af2c39e052cf719db648732d16630abbc105588e55d2.scope: Deactivated successfully.
Dec 02 09:55:30 np0005541914.localdomain sudo[294078]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:30 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Dec 02 09:55:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:30 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:55:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:55:30 np0005541914.localdomain sudo[294155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:30 np0005541914.localdomain sudo[294155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:30 np0005541914.localdomain sudo[294155]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:30 np0005541914.localdomain sudo[294173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:30 np0005541914.localdomain sudo[294173]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.1 (monmap changed)...
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 02 09:55:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:31 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-4fa6dd665b415bf675226148c9c04b8d656c1849e9bfb689c08e86b332797a55-merged.mount: Deactivated successfully.
Dec 02 09:55:31 np0005541914.localdomain podman[294209]: 
Dec 02 09:55:31 np0005541914.localdomain podman[294209]: 2025-12-02 09:55:31.118346469 +0000 UTC m=+0.073462772 container create 391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pike, io.openshift.expose-services=, version=7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph)
Dec 02 09:55:31 np0005541914.localdomain systemd[1]: Started libpod-conmon-391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a.scope.
Dec 02 09:55:31 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:55:31 np0005541914.localdomain podman[294209]: 2025-12-02 09:55:31.17902026 +0000 UTC m=+0.134136603 container init 391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pike, build-date=2025-11-26T19:44:28Z, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, ceph=True, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main)
Dec 02 09:55:31 np0005541914.localdomain podman[294209]: 2025-12-02 09:55:31.087484017 +0000 UTC m=+0.042600340 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:55:31 np0005541914.localdomain friendly_pike[294224]: 167 167
Dec 02 09:55:31 np0005541914.localdomain systemd[1]: libpod-391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a.scope: Deactivated successfully.
Dec 02 09:55:31 np0005541914.localdomain podman[294209]: 2025-12-02 09:55:31.198801094 +0000 UTC m=+0.153917387 container start 391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pike, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, architecture=x86_64, distribution-scope=public, vcs-type=git, version=7, CEPH_POINT_RELEASE=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:55:31 np0005541914.localdomain podman[294209]: 2025-12-02 09:55:31.199060422 +0000 UTC m=+0.154176755 container attach 391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pike, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, version=7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:55:31 np0005541914.localdomain podman[294209]: 2025-12-02 09:55:31.205346553 +0000 UTC m=+0.160462906 container died 391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pike, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, ceph=True, io.openshift.tags=rhceph ceph, release=1763362218, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.buildah.version=1.41.4)
Dec 02 09:55:31 np0005541914.localdomain podman[294229]: 2025-12-02 09:55:31.267831009 +0000 UTC m=+0.064211319 container remove 391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=friendly_pike, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_BRANCH=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, distribution-scope=public)
Dec 02 09:55:31 np0005541914.localdomain systemd[1]: libpod-conmon-391e8fffeaefd7bc56bab90b333503312eff82020aeaf6a1b8528252a91dff5a.scope: Deactivated successfully.
Dec 02 09:55:31 np0005541914.localdomain sudo[294173]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:31 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 02 09:55:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:31 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 02 09:55:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 02 09:55:31 np0005541914.localdomain sudo[294251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:31 np0005541914.localdomain sudo[294251]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:31 np0005541914.localdomain sudo[294251]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:31 np0005541914.localdomain sudo[294269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:31 np0005541914.localdomain sudo[294269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.4 (monmap changed)...
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:32 np0005541914.localdomain systemd[1]: tmp-crun.bGhaFz.mount: Deactivated successfully.
Dec 02 09:55:32 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-25fdf91b5a4a561e562d191cbbe12b31407dec2f76e140f8af8253e7eb2cd35f-merged.mount: Deactivated successfully.
Dec 02 09:55:32 np0005541914.localdomain podman[294303]: 2025-12-02 09:55:32.03882039 +0000 UTC m=+0.065737657 container create 2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_faraday, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.buildah.version=1.41.4, version=7, description=Red Hat Ceph Storage 7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, distribution-scope=public)
Dec 02 09:55:32 np0005541914.localdomain systemd[1]: Started libpod-conmon-2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3.scope.
Dec 02 09:55:32 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:55:32 np0005541914.localdomain podman[294303]: 2025-12-02 09:55:32.098757658 +0000 UTC m=+0.125674915 container init 2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_faraday, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, name=rhceph, CEPH_POINT_RELEASE=)
Dec 02 09:55:32 np0005541914.localdomain podman[294303]: 2025-12-02 09:55:32.107162284 +0000 UTC m=+0.134079561 container start 2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_faraday, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, RELEASE=main, io.buildah.version=1.41.4, release=1763362218, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7)
Dec 02 09:55:32 np0005541914.localdomain crazy_faraday[294318]: 167 167
Dec 02 09:55:32 np0005541914.localdomain systemd[1]: libpod-2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3.scope: Deactivated successfully.
Dec 02 09:55:32 np0005541914.localdomain podman[294303]: 2025-12-02 09:55:32.108798185 +0000 UTC m=+0.135715452 container attach 2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_faraday, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_CLEAN=True, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, name=rhceph, RELEASE=main, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, distribution-scope=public)
Dec 02 09:55:32 np0005541914.localdomain podman[294303]: 2025-12-02 09:55:32.112773416 +0000 UTC m=+0.139690703 container died 2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_faraday, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, release=1763362218, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:55:32 np0005541914.localdomain podman[294303]: 2025-12-02 09:55:32.014344773 +0000 UTC m=+0.041262030 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr stat", "format": "json"} v 0)
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/2343995021' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Dec 02 09:55:32 np0005541914.localdomain podman[294323]: 2025-12-02 09:55:32.208811085 +0000 UTC m=+0.087273423 container remove 2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_faraday, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vendor=Red Hat, Inc., GIT_CLEAN=True, release=1763362218, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, ceph=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:55:32 np0005541914.localdomain systemd[1]: libpod-conmon-2be99c1a0ce4d30d101f0fa7b69cf9ed91641771b062f425848685224d63d2d3.scope: Deactivated successfully.
Dec 02 09:55:32 np0005541914.localdomain sudo[294269]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:32 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 02 09:55:32 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:32 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:55:32 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:55:32 np0005541914.localdomain sudo[294339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:32 np0005541914.localdomain sudo[294339]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:32 np0005541914.localdomain sudo[294339]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:32 np0005541914.localdomain sudo[294357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:32 np0005541914.localdomain sudo[294357]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.200:0/2343995021' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:32 np0005541914.localdomain podman[294393]: 2025-12-02 09:55:32.913880365 +0000 UTC m=+0.069513022 container create 2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_williamson, io.openshift.tags=rhceph ceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, description=Red Hat Ceph Storage 7)
Dec 02 09:55:32 np0005541914.localdomain systemd[1]: Started libpod-conmon-2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927.scope.
Dec 02 09:55:32 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:55:32 np0005541914.localdomain podman[294393]: 2025-12-02 09:55:32.964317363 +0000 UTC m=+0.119950060 container init 2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_williamson, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1763362218, io.openshift.expose-services=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, vcs-type=git, distribution-scope=public)
Dec 02 09:55:32 np0005541914.localdomain podman[294393]: 2025-12-02 09:55:32.970294575 +0000 UTC m=+0.125927272 container start 2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_williamson, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:55:32 np0005541914.localdomain podman[294393]: 2025-12-02 09:55:32.970546833 +0000 UTC m=+0.126179580 container attach 2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_williamson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, release=1763362218, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, ceph=True, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:55:32 np0005541914.localdomain optimistic_williamson[294408]: 167 167
Dec 02 09:55:32 np0005541914.localdomain systemd[1]: libpod-2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927.scope: Deactivated successfully.
Dec 02 09:55:32 np0005541914.localdomain podman[294393]: 2025-12-02 09:55:32.972710689 +0000 UTC m=+0.128343416 container died 2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_williamson, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, release=1763362218, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, ceph=True)
Dec 02 09:55:32 np0005541914.localdomain podman[294393]: 2025-12-02 09:55:32.889132979 +0000 UTC m=+0.044765706 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:55:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-559d91ad8e4feeefa8aa55cac4ca58e7f2368af0e47ea3de773610061157afda-merged.mount: Deactivated successfully.
Dec 02 09:55:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-10a1701e9964ae77602873677580eee5101932eab6b2ceef44f256ffaea7d209-merged.mount: Deactivated successfully.
Dec 02 09:55:33 np0005541914.localdomain podman[294413]: 2025-12-02 09:55:33.066706247 +0000 UTC m=+0.089290265 container remove 2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=optimistic_williamson, io.openshift.expose-services=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., name=rhceph, version=7, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git)
Dec 02 09:55:33 np0005541914.localdomain systemd[1]: libpod-conmon-2c76b28d6ef1d547964c2b4e7b5d507d66455dd84b9c41e7e56c4001fff6a927.scope: Deactivated successfully.
Dec 02 09:55:33 np0005541914.localdomain sudo[294357]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:33 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541914 (monmap changed)...
Dec 02 09:55:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541914 (monmap changed)...
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:33 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
Dec 02 09:55:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
Dec 02 09:55:33 np0005541914.localdomain sudo[294427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:33 np0005541914.localdomain sudo[294427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:33 np0005541914.localdomain sudo[294427]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:33 np0005541914.localdomain sudo[294445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:33 np0005541914.localdomain sudo[294445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:55:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:55:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:55:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 09:55:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.26961 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541910", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:55:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:55:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19185 "" "Go-http-client/1.1"
Dec 02 09:55:33 np0005541914.localdomain podman[294481]: 2025-12-02 09:55:33.737630674 +0000 UTC m=+0.090554783 container create 2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_hopper, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, distribution-scope=public, RELEASE=main, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, version=7)
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:33 np0005541914.localdomain systemd[1]: Started libpod-conmon-2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff.scope.
Dec 02 09:55:33 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:55:33 np0005541914.localdomain podman[294481]: 2025-12-02 09:55:33.791567489 +0000 UTC m=+0.144491628 container init 2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_hopper, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., RELEASE=main, version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, architecture=x86_64)
Dec 02 09:55:33 np0005541914.localdomain podman[294481]: 2025-12-02 09:55:33.696614083 +0000 UTC m=+0.049538202 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:55:33 np0005541914.localdomain podman[294481]: 2025-12-02 09:55:33.801857474 +0000 UTC m=+0.154781583 container start 2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_hopper, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, release=1763362218, RELEASE=main, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, version=7, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:55:33 np0005541914.localdomain podman[294481]: 2025-12-02 09:55:33.802091661 +0000 UTC m=+0.155015810 container attach 2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_hopper, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1763362218, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 02 09:55:33 np0005541914.localdomain recursing_hopper[294495]: 167 167
Dec 02 09:55:33 np0005541914.localdomain systemd[1]: libpod-2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff.scope: Deactivated successfully.
Dec 02 09:55:33 np0005541914.localdomain podman[294481]: 2025-12-02 09:55:33.804875866 +0000 UTC m=+0.157800005 container died 2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_hopper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, name=rhceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:55:33 np0005541914.localdomain podman[294502]: 2025-12-02 09:55:33.885710332 +0000 UTC m=+0.069789900 container remove 2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=recursing_hopper, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, release=1763362218, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, version=7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git)
Dec 02 09:55:33 np0005541914.localdomain systemd[1]: libpod-conmon-2431e75cda296d488735c4859b6fb007f4bf429058401e8c6ead0dd793538fff.scope: Deactivated successfully.
Dec 02 09:55:33 np0005541914.localdomain sudo[294445]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:55:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 09:55:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-f2e82613354c02fe07e8bc59a452e134c0a28d0502c85b5ed9c570070636b1d6-merged.mount: Deactivated successfully.
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 02 09:55:34 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 4b82e39f-ec49-4bed-914f-4acb15750e73 (Updating node-proxy deployment (+5 -> 5))
Dec 02 09:55:34 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 4b82e39f-ec49-4bed-914f-4acb15750e73 (Updating node-proxy deployment (+5 -> 5))
Dec 02 09:55:34 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 4b82e39f-ec49-4bed-914f-4acb15750e73 (Updating node-proxy deployment (+5 -> 5)) in 0 seconds
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.395995) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334396082, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2356, "num_deletes": 265, "total_data_size": 8432363, "memory_usage": 9082480, "flush_reason": "Manual Compaction"}
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334421782, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 4802871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13727, "largest_seqno": 16078, "table_properties": {"data_size": 4793637, "index_size": 5483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 23848, "raw_average_key_size": 22, "raw_value_size": 4773312, "raw_average_value_size": 4456, "num_data_blocks": 229, "num_entries": 1071, "num_filter_entries": 1071, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669289, "oldest_key_time": 1764669289, "file_creation_time": 1764669334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 25844 microseconds, and 9888 cpu microseconds.
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.421844) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 4802871 bytes OK
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.421871) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.423688) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.423713) EVENT_LOG_v1 {"time_micros": 1764669334423705, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.423736) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 8420670, prev total WAL file size 8420670, number of live WAL files 2.
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.425294) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303238' seq:72057594037927935, type:22 .. '6B760031323930' seq:0, type:0; will stop at (end)
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(4690KB)], [21(12MB)]
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334425371, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 17571689, "oldest_snapshot_seqno": -1}
Dec 02 09:55:34 np0005541914.localdomain sudo[294517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:55:34 np0005541914.localdomain sudo[294517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:55:34 np0005541914.localdomain sudo[294517]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:55:34 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541910 (monmap changed)...
Dec 02 09:55:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541910 (monmap changed)...
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:34 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541910 on np0005541910.localdomain
Dec 02 09:55:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541910 on np0005541910.localdomain
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10250 keys, 16745726 bytes, temperature: kUnknown
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334544634, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 16745726, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16683944, "index_size": 35057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 275363, "raw_average_key_size": 26, "raw_value_size": 16505076, "raw_average_value_size": 1610, "num_data_blocks": 1339, "num_entries": 10250, "num_filter_entries": 10250, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.544945) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 16745726 bytes
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.547361) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.2 rd, 140.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.6, 12.2 +0.0 blob) out(16.0 +0.0 blob), read-write-amplify(7.1) write-amplify(3.5) OK, records in: 10740, records dropped: 490 output_compression: NoCompression
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.547409) EVENT_LOG_v1 {"time_micros": 1764669334547395, "job": 10, "event": "compaction_finished", "compaction_time_micros": 119365, "compaction_time_cpu_micros": 48232, "output_level": 6, "num_output_files": 1, "total_output_size": 16745726, "num_input_records": 10740, "num_output_records": 10250, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334548264, "job": 10, "event": "table_file_deletion", "file_number": 23}
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669334550155, "job": 10, "event": "table_file_deletion", "file_number": 21}
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.425199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.550215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.550222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.550226) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.550229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:34.550233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:55:34 np0005541914.localdomain podman[294535]: 2025-12-02 09:55:34.573597717 +0000 UTC m=+0.104587402 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:55:34 np0005541914.localdomain podman[294535]: 2025-12-02 09:55:34.615887227 +0000 UTC m=+0.146876922 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:55:34 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:55:34 np0005541914.localdomain podman[294536]: 2025-12-02 09:55:34.616495786 +0000 UTC m=+0.143872431 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, vendor=Red Hat, Inc., vcs-type=git, name=ubi9-minimal, config_id=edpm)
Dec 02 09:55:34 np0005541914.localdomain podman[294536]: 2025-12-02 09:55:34.70088251 +0000 UTC m=+0.228259165 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, distribution-scope=public, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm)
Dec 02 09:55:34 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541914 (monmap changed)...
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: from='client.26961 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541910", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: mgrmap e24: np0005541914.lljzmk(active, since 24s), standbys: np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541911.adcgiw
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34287 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005541910"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:35 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Remove daemons mon.np0005541910
Dec 02 09:55:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005541910
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "quorum_status"} : dispatch
Dec 02 09:55:35 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005541910: new quorum should be ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912'] (from ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912'])
Dec 02 09:55:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005541910: new quorum should be ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912'] (from ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912'])
Dec 02 09:55:35 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005541910 from monmap...
Dec 02 09:55:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removing monitor np0005541910 from monmap...
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e9 handle_command mon_command({"prefix": "mon rm", "name": "np0005541910"} v 0)
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon rm", "name": "np0005541910"} : dispatch
Dec 02 09:55:35 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005541910 from np0005541910.localdomain -- ports []
Dec 02 09:55:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005541910 from np0005541910.localdomain -- ports []
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@2(peon) e10  my rank is now 1 (was 2)
Dec 02 09:55:35 np0005541914.localdomain ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Dec 02 09:55:35 np0005541914.localdomain ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: paxos.1).electionLogic(42) init, last seen epoch 42
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(electing) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(electing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541911"} v 0)
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(electing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541912"} v 0)
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(electing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(electing) e10 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(electing) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0)
Dec 02 09:55:35 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 09:55:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(electing) e10 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 09:55:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(electing) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0)
Dec 02 09:55:37 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541910.kzipdo (monmap changed)...
Dec 02 09:55:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541910.kzipdo (monmap changed)...
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541910 on np0005541910.localdomain
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: from='client.34287 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005541910"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: Remove daemons mon.np0005541910
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "quorum_status"} : dispatch
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: Safe to remove mon.np0005541910: new quorum should be ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912'] (from ['np0005541911', 'np0005541914', 'np0005541913', 'np0005541912'])
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: Removing monitor np0005541910 from monmap...
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon rm", "name": "np0005541910"} : dispatch
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: Removing daemon mon.np0005541910 from np0005541910.localdomain -- ports []
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 is new leader, mons np0005541911,np0005541914,np0005541913,np0005541912 in quorum (ranks 0,1,2,3)
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: monmap epoch 10
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:55:35.077328+0000
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005541913
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: 3: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: osdmap e87: 6 total, 6 up, 6 in
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: mgrmap e24: np0005541914.lljzmk(active, since 27s), standbys: np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541911.adcgiw
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]:     stray daemon mgr.np0005541909.kfesnk on host np0005541909.localdomain not managed by cephadm
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]:     stray host np0005541909.localdomain has 1 stray daemons: ['mgr.np0005541909.kfesnk']
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:37 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:37 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain
Dec 02 09:55:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain
Dec 02 09:55:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain.devices.0}] v 0)
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541910.localdomain}] v 0)
Dec 02 09:55:38 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541911 (monmap changed)...
Dec 02 09:55:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541911 (monmap changed)...
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:38 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain
Dec 02 09:55:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541910.kzipdo (monmap changed)...
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541910.kzipdo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541910.kzipdo on np0005541910.localdomain
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34330 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541910.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 02 09:55:38 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Removed label mon from host np0005541910.localdomain
Dec 02 09:55:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removed label mon from host np0005541910.localdomain
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:55:38 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 02 09:55:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:55:38 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 02 09:55:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 02 09:55:39 np0005541914.localdomain podman[294575]: 2025-12-02 09:55:39.060312491 +0000 UTC m=+0.063455878 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec 02 09:55:39 np0005541914.localdomain podman[294575]: 2025-12-02 09:55:39.098228668 +0000 UTC m=+0.101372025 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Dec 02 09:55:39 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541911 (monmap changed)...
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: from='client.34330 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541910.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: Removed label mon from host np0005541910.localdomain
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 02 09:55:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 09:55:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 09:55:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 09:55:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 09:55:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 09:55:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:55:39 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541911 (monmap changed)...
Dec 02 09:55:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541911 (monmap changed)...
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:39 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:39 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 02 09:55:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 02 09:55:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34297 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541910.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 02 09:55:40 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Removed label mgr from host np0005541910.localdomain
Dec 02 09:55:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005541910.localdomain
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.854626) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340854669, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 528, "num_deletes": 251, "total_data_size": 502557, "memory_usage": 512808, "flush_reason": "Manual Compaction"}
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340859371, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 315167, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16084, "largest_seqno": 16606, "table_properties": {"data_size": 312185, "index_size": 901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8645, "raw_average_key_size": 21, "raw_value_size": 305665, "raw_average_value_size": 764, "num_data_blocks": 37, "num_entries": 400, "num_filter_entries": 400, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669334, "oldest_key_time": 1764669334, "file_creation_time": 1764669340, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 4790 microseconds, and 1820 cpu microseconds.
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.859416) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 315167 bytes OK
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.859436) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861180) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861204) EVENT_LOG_v1 {"time_micros": 1764669340861198, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861225) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 499246, prev total WAL file size 499246, number of live WAL files 2.
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861869) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end)
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(307KB)], [24(15MB)]
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340861917, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 17060893, "oldest_snapshot_seqno": -1}
Dec 02 09:55:40 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541912 (monmap changed)...
Dec 02 09:55:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541912 (monmap changed)...
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:40 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 02 09:55:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 10123 keys, 14985547 bytes, temperature: kUnknown
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340968613, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 14985547, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14926324, "index_size": 32818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 273610, "raw_average_key_size": 27, "raw_value_size": 14751302, "raw_average_value_size": 1457, "num_data_blocks": 1241, "num_entries": 10123, "num_filter_entries": 10123, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669340, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.968949) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 14985547 bytes
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.971679) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.7 rd, 140.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 16.0 +0.0 blob) out(14.3 +0.0 blob), read-write-amplify(101.7) write-amplify(47.5) OK, records in: 10650, records dropped: 527 output_compression: NoCompression
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.971709) EVENT_LOG_v1 {"time_micros": 1764669340971696, "job": 12, "event": "compaction_finished", "compaction_time_micros": 106838, "compaction_time_cpu_micros": 44360, "output_level": 6, "num_output_files": 1, "total_output_size": 14985547, "num_input_records": 10650, "num_output_records": 10123, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340972012, "job": 12, "event": "table_file_deletion", "file_number": 26}
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669340974488, "job": 12, "event": "table_file_deletion", "file_number": 24}
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.861779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.974709) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.974719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.974903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.974910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:55:40.975131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541911 (monmap changed)...
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: from='client.34297 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541910.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.26796 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541910.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 02 09:55:41 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Removed label _admin from host np0005541910.localdomain
Dec 02 09:55:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removed label _admin from host np0005541910.localdomain
Dec 02 09:55:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:41 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Dec 02 09:55:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Dec 02 09:55:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 02 09:55:41 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:55:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:41 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:41 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:55:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:55:42 np0005541914.localdomain ceph-mon[288526]: Removed label mgr from host np0005541910.localdomain
Dec 02 09:55:42 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 02 09:55:42 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 02 09:55:42 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:42 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:42 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:42 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:55:42 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:55:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:55:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:55:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:55:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:55:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:55:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:55:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:55:42 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:42 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:43 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Dec 02 09:55:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Dec 02 09:55:43 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 02 09:55:43 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:55:43 np0005541914.localdomain ceph-mon[288526]: from='client.26796 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541910.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:43 np0005541914.localdomain ceph-mon[288526]: Removed label _admin from host np0005541910.localdomain
Dec 02 09:55:43 np0005541914.localdomain ceph-mon[288526]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:43 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.2 (monmap changed)...
Dec 02 09:55:43 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:55:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:43 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:43 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:43 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:55:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:55:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.5 (monmap changed)...
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:44 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 02 09:55:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:44 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 02 09:55:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:45 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 02 09:55:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 02 09:55:45 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:55:45 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:45 np0005541914.localdomain ceph-mon[288526]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:45 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 09:55:45 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:45 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:45 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:45 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 02 09:55:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 02 09:55:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:46 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541912 (monmap changed)...
Dec 02 09:55:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541912 (monmap changed)...
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:46 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 02 09:55:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:55:46 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541913 (monmap changed)...
Dec 02 09:55:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541913 (monmap changed)...
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:46 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:46 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 02 09:55:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:55:47 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 02 09:55:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:47 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:55:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:55:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541912 (monmap changed)...
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:55:47 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:48 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:55:48 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:55:48 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)...
Dec 02 09:55:48 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)...
Dec 02 09:55:48 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Dec 02 09:55:48 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 02 09:55:48 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:48 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:48 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:55:48 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:55:48 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.0 (monmap changed)...
Dec 02 09:55:48 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:55:48 np0005541914.localdomain ceph-mon[288526]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 02 09:55:48 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:55:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:55:49 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:55:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:49 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:55:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.3 (monmap changed)...
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:49 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:55:50 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 02 09:55:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:50 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:55:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:50 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:55:51 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541913 (monmap changed)...
Dec 02 09:55:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541913 (monmap changed)...
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:51 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 02 09:55:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 02 09:55:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:51.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:55:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:51.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:55:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:51.635 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:55:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:51.636 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:55:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:51.636 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:55:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:51.666 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:55:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:51.666 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:55:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:51.667 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:55:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:51.667 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:55:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:51.667 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:55:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.107:0/3750190611' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:51 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:52.101 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:55:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:52.276 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:55:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:52.277 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11960MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:55:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:52.278 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:55:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:52.278 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:55:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:52.363 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:55:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:52.363 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:55:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:52.383 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:55:52 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541914 (monmap changed)...
Dec 02 09:55:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541914 (monmap changed)...
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:52 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 02 09:55:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 02 09:55:52 np0005541914.localdomain sudo[294619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:52 np0005541914.localdomain sudo[294619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:52 np0005541914.localdomain sudo[294619]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:52 np0005541914.localdomain sudo[294653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:52 np0005541914.localdomain sudo[294653]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:52.796 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:55:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:52.802 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:55:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:52.835 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:55:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:52.837 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:55:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:52.837 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541913 (monmap changed)...
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.108:0/9569723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.107:0/1371164466' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:55:52 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.108:0/533854766' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:55:52 np0005541914.localdomain podman[294692]: 
Dec 02 09:55:52 np0005541914.localdomain podman[294692]: 2025-12-02 09:55:52.99758517 +0000 UTC m=+0.073236735 container create aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_nash, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, release=1763362218, name=rhceph, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:55:53 np0005541914.localdomain systemd[1]: Started libpod-conmon-aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17.scope.
Dec 02 09:55:53 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:55:53 np0005541914.localdomain podman[294692]: 2025-12-02 09:55:52.968382749 +0000 UTC m=+0.044034324 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:55:53 np0005541914.localdomain podman[294692]: 2025-12-02 09:55:53.081575072 +0000 UTC m=+0.157226637 container init aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_nash, description=Red Hat Ceph Storage 7, version=7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, architecture=x86_64, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public)
Dec 02 09:55:53 np0005541914.localdomain podman[294692]: 2025-12-02 09:55:53.092287709 +0000 UTC m=+0.167939264 container start aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_nash, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, GIT_BRANCH=main, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, release=1763362218, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:55:53 np0005541914.localdomain podman[294692]: 2025-12-02 09:55:53.092526476 +0000 UTC m=+0.168178071 container attach aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_nash, version=7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., release=1763362218, ceph=True, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 02 09:55:53 np0005541914.localdomain keen_nash[294707]: 167 167
Dec 02 09:55:53 np0005541914.localdomain systemd[1]: libpod-aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17.scope: Deactivated successfully.
Dec 02 09:55:53 np0005541914.localdomain podman[294692]: 2025-12-02 09:55:53.095851027 +0000 UTC m=+0.171502612 container died aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_nash, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, architecture=x86_64, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., GIT_CLEAN=True, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 09:55:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.26836 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005541910.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 02 09:55:53 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Added label _no_schedule to host np0005541910.localdomain
Dec 02 09:55:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005541910.localdomain
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 02 09:55:53 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541910.localdomain
Dec 02 09:55:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541910.localdomain
Dec 02 09:55:53 np0005541914.localdomain podman[294712]: 2025-12-02 09:55:53.189026181 +0000 UTC m=+0.082460098 container remove aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=keen_nash, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:55:53 np0005541914.localdomain systemd[1]: libpod-conmon-aac4a578639b811c36f26ff1a33c261dfbf2b58514082ab033dc305b0cbe1d17.scope: Deactivated successfully.
Dec 02 09:55:53 np0005541914.localdomain sudo[294653]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:53 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 02 09:55:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:53 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:55:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:55:53 np0005541914.localdomain sudo[294728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:53 np0005541914.localdomain sudo[294728]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:53 np0005541914.localdomain sudo[294728]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:53 np0005541914.localdomain sudo[294746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:53 np0005541914.localdomain sudo[294746]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:53.730 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:55:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:53.730 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:55:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:53.731 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:55:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:53.832 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:55:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:53.833 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:55:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:53.833 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:55:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:53.834 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541914 (monmap changed)...
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 02 09:55:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-65d7f68c5bbf7ababb6d19dc908c3fde10b2dbb3434069f7c13bf83e3d3d2751-merged.mount: Deactivated successfully.
Dec 02 09:55:54 np0005541914.localdomain podman[294780]: 
Dec 02 09:55:54 np0005541914.localdomain podman[294780]: 2025-12-02 09:55:54.120231748 +0000 UTC m=+0.079331741 container create 0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_shtern, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_BRANCH=main, name=rhceph, description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, distribution-scope=public, version=7, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z)
Dec 02 09:55:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:55:54 np0005541914.localdomain systemd[1]: Started libpod-conmon-0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf.scope.
Dec 02 09:55:54 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:55:54 np0005541914.localdomain podman[294780]: 2025-12-02 09:55:54.179538627 +0000 UTC m=+0.138638610 container init 0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_shtern, vendor=Red Hat, Inc., GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, RELEASE=main, build-date=2025-11-26T19:44:28Z, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, version=7, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:55:54 np0005541914.localdomain podman[294780]: 2025-12-02 09:55:54.088346116 +0000 UTC m=+0.047446149 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:55:54 np0005541914.localdomain podman[294780]: 2025-12-02 09:55:54.192229845 +0000 UTC m=+0.151329828 container start 0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_shtern, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, release=1763362218, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main)
Dec 02 09:55:54 np0005541914.localdomain podman[294780]: 2025-12-02 09:55:54.192578075 +0000 UTC m=+0.151678098 container attach 0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_shtern, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc.)
Dec 02 09:55:54 np0005541914.localdomain systemd[1]: libpod-0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf.scope: Deactivated successfully.
Dec 02 09:55:54 np0005541914.localdomain naughty_shtern[294797]: 167 167
Dec 02 09:55:54 np0005541914.localdomain podman[294780]: 2025-12-02 09:55:54.195275178 +0000 UTC m=+0.154375231 container died 0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_shtern, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.41.4, architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, version=7, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.expose-services=, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:55:54 np0005541914.localdomain podman[294796]: 2025-12-02 09:55:54.259790326 +0000 UTC m=+0.101584780 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 09:55:54 np0005541914.localdomain podman[294796]: 2025-12-02 09:55:54.269201442 +0000 UTC m=+0.110995916 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 02 09:55:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:55:54 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:55:54 np0005541914.localdomain podman[294811]: 2025-12-02 09:55:54.351812263 +0000 UTC m=+0.147659606 container remove 0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=naughty_shtern, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, name=rhceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:55:54 np0005541914.localdomain systemd[1]: libpod-conmon-0b0753121881db7fced91e4b09ad97acb86de27cad2668acc17b46699abe00bf.scope: Deactivated successfully.
Dec 02 09:55:54 np0005541914.localdomain podman[294831]: 2025-12-02 09:55:54.416784875 +0000 UTC m=+0.116120914 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:55:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:55:54 np0005541914.localdomain podman[294831]: 2025-12-02 09:55:54.456847587 +0000 UTC m=+0.156183596 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:55:54 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:55:54 np0005541914.localdomain sudo[294746]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:54.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:55:54 np0005541914.localdomain podman[294859]: 2025-12-02 09:55:54.530213845 +0000 UTC m=+0.065489199 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:55:54 np0005541914.localdomain podman[294859]: 2025-12-02 09:55:54.563633594 +0000 UTC m=+0.098908928 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 09:55:54 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Dec 02 09:55:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:54 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:55:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:55:54 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:55:54 np0005541914.localdomain sudo[294887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:54 np0005541914.localdomain podman[294880]: 2025-12-02 09:55:54.610923957 +0000 UTC m=+0.049898823 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 02 09:55:54 np0005541914.localdomain sudo[294887]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:54 np0005541914.localdomain sudo[294887]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:54 np0005541914.localdomain podman[294880]: 2025-12-02 09:55:54.638810768 +0000 UTC m=+0.077785644 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller)
Dec 02 09:55:54 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:55:54 np0005541914.localdomain sudo[294917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:54 np0005541914.localdomain sudo[294917]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005541910.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: from='client.26836 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005541910.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: Added label _no_schedule to host np0005541910.localdomain
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541910.localdomain
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.1 (monmap changed)...
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 02 09:55:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5cde19fa133f0ceac604e06622698c40e92773af801f892f7cca39423688d386-merged.mount: Deactivated successfully.
Dec 02 09:55:55 np0005541914.localdomain podman[294958]: 
Dec 02 09:55:55 np0005541914.localdomain podman[294958]: 2025-12-02 09:55:55.076521111 +0000 UTC m=+0.066421227 container create f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_solomon, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=7, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=1763362218, architecture=x86_64, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:55:55 np0005541914.localdomain systemd[1]: Started libpod-conmon-f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4.scope.
Dec 02 09:55:55 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:55:55 np0005541914.localdomain podman[294958]: 2025-12-02 09:55:55.140573915 +0000 UTC m=+0.130473991 container init f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_solomon, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, ceph=True, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, name=rhceph, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=)
Dec 02 09:55:55 np0005541914.localdomain podman[294958]: 2025-12-02 09:55:55.050043954 +0000 UTC m=+0.039944030 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:55:55 np0005541914.localdomain podman[294958]: 2025-12-02 09:55:55.150848539 +0000 UTC m=+0.140748615 container start f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_solomon, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, name=rhceph)
Dec 02 09:55:55 np0005541914.localdomain podman[294958]: 2025-12-02 09:55:55.151018804 +0000 UTC m=+0.140918900 container attach f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_solomon, distribution-scope=public, ceph=True, RELEASE=main, architecture=x86_64, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, release=1763362218, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:55:55 np0005541914.localdomain sleepy_solomon[294973]: 167 167
Dec 02 09:55:55 np0005541914.localdomain systemd[1]: libpod-f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4.scope: Deactivated successfully.
Dec 02 09:55:55 np0005541914.localdomain podman[294958]: 2025-12-02 09:55:55.15350491 +0000 UTC m=+0.143405026 container died f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_solomon, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, release=1763362218, architecture=x86_64, version=7, ceph=True, GIT_CLEAN=True, io.buildah.version=1.41.4, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=)
Dec 02 09:55:55 np0005541914.localdomain podman[294978]: 2025-12-02 09:55:55.235843521 +0000 UTC m=+0.076259187 container remove f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_solomon, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=)
Dec 02 09:55:55 np0005541914.localdomain systemd[1]: libpod-conmon-f44278155fb5bbb91e694272e28d868a2e19b291066819b32f680c6202d062b4.scope: Deactivated successfully.
Dec 02 09:55:55 np0005541914.localdomain sudo[294917]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:55 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 02 09:55:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:55 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 02 09:55:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 02 09:55:55 np0005541914.localdomain sudo[295001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:55 np0005541914.localdomain sudo[295001]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:55 np0005541914.localdomain sudo[295001]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:55:55.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:55:55 np0005541914.localdomain sudo[295019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:55 np0005541914.localdomain sudo[295019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.4 (monmap changed)...
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: from='client.34313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005541910.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.106:0/2376208400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:55:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ca1387c74c649bf9b9b03b1f055508dc33fb7e9a42a0fd726056b8b6b73d8c2d-merged.mount: Deactivated successfully.
Dec 02 09:55:56 np0005541914.localdomain podman[295055]: 
Dec 02 09:55:56 np0005541914.localdomain podman[295055]: 2025-12-02 09:55:56.027804222 +0000 UTC m=+0.078789595 container create c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ramanujan, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.buildah.version=1.41.4, release=1763362218, version=7, RELEASE=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, architecture=x86_64, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:55:56 np0005541914.localdomain systemd[1]: Started libpod-conmon-c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac.scope.
Dec 02 09:55:56 np0005541914.localdomain podman[295055]: 2025-12-02 09:55:55.99595052 +0000 UTC m=+0.046935933 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:55:56 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:55:56 np0005541914.localdomain podman[295055]: 2025-12-02 09:55:56.126341017 +0000 UTC m=+0.177326400 container init c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ramanujan, io.openshift.expose-services=, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, architecture=x86_64, CEPH_POINT_RELEASE=, release=1763362218, ceph=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public)
Dec 02 09:55:56 np0005541914.localdomain podman[295055]: 2025-12-02 09:55:56.135945221 +0000 UTC m=+0.186930614 container start c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ramanujan, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, name=rhceph, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=7, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:55:56 np0005541914.localdomain podman[295055]: 2025-12-02 09:55:56.136295622 +0000 UTC m=+0.187281015 container attach c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ramanujan, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, version=7)
Dec 02 09:55:56 np0005541914.localdomain nervous_ramanujan[295070]: 167 167
Dec 02 09:55:56 np0005541914.localdomain systemd[1]: libpod-c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac.scope: Deactivated successfully.
Dec 02 09:55:56 np0005541914.localdomain podman[295055]: 2025-12-02 09:55:56.140244692 +0000 UTC m=+0.191230085 container died c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ramanujan, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-type=git, release=1763362218, build-date=2025-11-26T19:44:28Z, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, version=7)
Dec 02 09:55:56 np0005541914.localdomain podman[295075]: 2025-12-02 09:55:56.233496056 +0000 UTC m=+0.084277212 container remove c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_ramanujan, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, release=1763362218, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container)
Dec 02 09:55:56 np0005541914.localdomain systemd[1]: libpod-conmon-c4d7f6e2d694dd194fc59d8bc675cd0643992ef62360c6659cd1ac0e866abeac.scope: Deactivated successfully.
Dec 02 09:55:56 np0005541914.localdomain sudo[295019]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:56 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 02 09:55:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:56 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:55:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:55:56 np0005541914.localdomain sudo[295092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:56 np0005541914.localdomain sudo[295092]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:56 np0005541914.localdomain sudo[295092]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:56 np0005541914.localdomain sudo[295110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:56 np0005541914.localdomain sudo[295110]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:56 np0005541914.localdomain podman[295145]: 
Dec 02 09:55:56 np0005541914.localdomain podman[295145]: 2025-12-02 09:55:56.893858492 +0000 UTC m=+0.063951902 container create d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_satoshi, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, architecture=x86_64, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:55:56 np0005541914.localdomain systemd[1]: Started libpod-conmon-d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c.scope.
Dec 02 09:55:56 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:55:56 np0005541914.localdomain podman[295145]: 2025-12-02 09:55:56.863710302 +0000 UTC m=+0.033803732 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:55:56 np0005541914.localdomain podman[295145]: 2025-12-02 09:55:56.968127558 +0000 UTC m=+0.138220958 container init d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_satoshi, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, release=1763362218, version=7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph)
Dec 02 09:55:56 np0005541914.localdomain podman[295145]: 2025-12-02 09:55:56.976569856 +0000 UTC m=+0.146663256 container start d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_satoshi, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, RELEASE=main, build-date=2025-11-26T19:44:28Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218)
Dec 02 09:55:56 np0005541914.localdomain podman[295145]: 2025-12-02 09:55:56.976815483 +0000 UTC m=+0.146908933 container attach d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_satoshi, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main)
Dec 02 09:55:56 np0005541914.localdomain brave_satoshi[295160]: 167 167
Dec 02 09:55:56 np0005541914.localdomain systemd[1]: libpod-d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c.scope: Deactivated successfully.
Dec 02 09:55:56 np0005541914.localdomain podman[295145]: 2025-12-02 09:55:56.979066751 +0000 UTC m=+0.149160231 container died d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_satoshi, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, release=1763362218, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z)
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.106:0/3001664006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:55:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-0ddfc7e13fa5653ad296dc381eafec5c4a56023f55bc89db714b7a311a7d944c-merged.mount: Deactivated successfully.
Dec 02 09:55:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-2a3430851471d9366ece34bf59a3862aebf49438bc98f54a924f6db145391b4a-merged.mount: Deactivated successfully.
Dec 02 09:55:57 np0005541914.localdomain podman[295165]: 2025-12-02 09:55:57.053955066 +0000 UTC m=+0.066378966 container remove d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_satoshi, RELEASE=main, GIT_BRANCH=main, io.buildah.version=1.41.4, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, version=7, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, vendor=Red Hat, Inc.)
Dec 02 09:55:57 np0005541914.localdomain systemd[1]: libpod-conmon-d2e46c486a6efef070c0ea6719799543390ff26f57488c5090f0f62cfd9ac28c.scope: Deactivated successfully.
Dec 02 09:55:57 np0005541914.localdomain sudo[295110]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:57 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005541914 (monmap changed)...
Dec 02 09:55:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005541914 (monmap changed)...
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:57 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
Dec 02 09:55:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
Dec 02 09:55:57 np0005541914.localdomain sudo[295182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:55:57 np0005541914.localdomain sudo[295182]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:57 np0005541914.localdomain sudo[295182]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:57 np0005541914.localdomain sudo[295200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:57 np0005541914.localdomain sudo[295200]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:57 np0005541914.localdomain podman[295236]: 
Dec 02 09:55:57 np0005541914.localdomain podman[295236]: 2025-12-02 09:55:57.674498397 +0000 UTC m=+0.064692845 container create 9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_noyce, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vcs-type=git, ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_CLEAN=True)
Dec 02 09:55:57 np0005541914.localdomain systemd[1]: Started libpod-conmon-9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86.scope.
Dec 02 09:55:57 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:55:57 np0005541914.localdomain podman[295236]: 2025-12-02 09:55:57.728336069 +0000 UTC m=+0.118530507 container init 9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_noyce, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.41.4, ceph=True, version=7)
Dec 02 09:55:57 np0005541914.localdomain podman[295236]: 2025-12-02 09:55:57.735161927 +0000 UTC m=+0.125356395 container start 9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_noyce, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, version=7, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., release=1763362218, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhceph ceph)
Dec 02 09:55:57 np0005541914.localdomain podman[295236]: 2025-12-02 09:55:57.735464956 +0000 UTC m=+0.125659404 container attach 9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_noyce, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, release=1763362218, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph)
Dec 02 09:55:57 np0005541914.localdomain heuristic_noyce[295251]: 167 167
Dec 02 09:55:57 np0005541914.localdomain systemd[1]: libpod-9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86.scope: Deactivated successfully.
Dec 02 09:55:57 np0005541914.localdomain podman[295236]: 2025-12-02 09:55:57.73823297 +0000 UTC m=+0.128427468 container died 9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_noyce, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, release=1763362218, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_BRANCH=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container)
Dec 02 09:55:57 np0005541914.localdomain podman[295236]: 2025-12-02 09:55:57.652512916 +0000 UTC m=+0.042707364 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:55:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:57 np0005541914.localdomain podman[295256]: 2025-12-02 09:55:57.801001786 +0000 UTC m=+0.056886786 container remove 9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_noyce, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, name=rhceph, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, version=7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:55:57 np0005541914.localdomain systemd[1]: libpod-conmon-9f12d93af1d53128b6bd1029350a7ed33b50d29fe5f60bd0cc65b6ae9c2fec86.scope: Deactivated successfully.
Dec 02 09:55:57 np0005541914.localdomain sudo[295200]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:55:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-fe233c1e727aa8b5c930368b8c0c5ae15db668281837d2eb4c2f7497817c7200-merged.mount: Deactivated successfully.
Dec 02 09:55:58 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541914 (monmap changed)...
Dec 02 09:55:58 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
Dec 02 09:55:58 np0005541914.localdomain ceph-mon[288526]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.44243 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005541910.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:55:59 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 02 09:55:59 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain"} v 0)
Dec 02 09:55:59 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain"} : dispatch
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Removed host np0005541910.localdomain
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removed host np0005541910.localdomain
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: [cephadm ERROR cephadm.utils] executing refresh((['np0005541910.localdomain', 'np0005541911.localdomain', 'np0005541912.localdomain', 'np0005541913.localdomain', 'np0005541914.localdomain'],)) failed.
                                                           Traceback (most recent call last):
                                                             File "/usr/share/ceph/mgr/cephadm/utils.py", line 94, in do_work
                                                               return f(*arg)
                                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 317, in refresh
                                                               and not self.mgr.inventory.has_label(host, SpecialHostLabels.NO_MEMORY_AUTOTUNE)
                                                             File "/usr/share/ceph/mgr/cephadm/inventory.py", line 253, in has_label
                                                               host = self._get_stored_name(host)
                                                             File "/usr/share/ceph/mgr/cephadm/inventory.py", line 181, in _get_stored_name
                                                               self.assert_host(host)
                                                             File "/usr/share/ceph/mgr/cephadm/inventory.py", line 209, in assert_host
                                                               raise OrchestratorError('host %s does not exist' % host)
                                                           orchestrator._interface.OrchestratorError: host np0005541910.localdomain does not exist
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [ERR] : executing refresh((['np0005541910.localdomain', 'np0005541911.localdomain', 'np0005541912.localdomain', 'np0005541913.localdomain', 'np0005541914.localdomain'],)) failed.
                                                           Traceback (most recent call last):
                                                             File "/usr/share/ceph/mgr/cephadm/utils.py", line 94, in do_work
                                                               return f(*arg)
                                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 317, in refresh
                                                               and not self.mgr.inventory.has_label(host, SpecialHostLabels.NO_MEMORY_AUTOTUNE)
                                                             File "/usr/share/ceph/mgr/cephadm/inventory.py", line 253, in has_label
                                                               host = self._get_stored_name(host)
                                                             File "/usr/share/ceph/mgr/cephadm/inventory.py", line 181, in _get_stored_name
                                                               self.assert_host(host)
                                                             File "/usr/share/ceph/mgr/cephadm/inventory.py", line 209, in assert_host
                                                               raise OrchestratorError('host %s does not exist' % host)
                                                           orchestrator._interface.OrchestratorError: host np0005541910.localdomain does not exist
Dec 02 09:55:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:55:59.142+0000 7f5c56ad9640 -1 log_channel(cephadm) log [ERR] : executing refresh((['np0005541910.localdomain', 'np0005541911.localdomain', 'np0005541912.localdomain', 'np0005541913.localdomain', 'np0005541914.localdomain'],)) failed.
Dec 02 09:55:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Traceback (most recent call last):
Dec 02 09:55:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]:   File "/usr/share/ceph/mgr/cephadm/utils.py", line 94, in do_work
Dec 02 09:55:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]:     return f(*arg)
Dec 02 09:55:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]:   File "/usr/share/ceph/mgr/cephadm/serve.py", line 317, in refresh
Dec 02 09:55:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]:     and not self.mgr.inventory.has_label(host, SpecialHostLabels.NO_MEMORY_AUTOTUNE)
Dec 02 09:55:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]:   File "/usr/share/ceph/mgr/cephadm/inventory.py", line 253, in has_label
Dec 02 09:55:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]:     host = self._get_stored_name(host)
Dec 02 09:55:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]:   File "/usr/share/ceph/mgr/cephadm/inventory.py", line 181, in _get_stored_name
Dec 02 09:55:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]:     self.assert_host(host)
Dec 02 09:55:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]:   File "/usr/share/ceph/mgr/cephadm/inventory.py", line 209, in assert_host
Dec 02 09:55:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]:     raise OrchestratorError('host %s does not exist' % host)
Dec 02 09:55:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: orchestrator._interface.OrchestratorError: host np0005541910.localdomain does not exist
Dec 02 09:55:59 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:55:59 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:55:59 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 09:55:59 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain sudo[295272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:55:59 np0005541914.localdomain sudo[295272]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:59 np0005541914.localdomain sudo[295272]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:59 np0005541914.localdomain sudo[295290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:55:59 np0005541914.localdomain sudo[295290]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:59 np0005541914.localdomain sudo[295290]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:59 np0005541914.localdomain sudo[295308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:55:59 np0005541914.localdomain sudo[295308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:59 np0005541914.localdomain sudo[295308]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:59 np0005541914.localdomain sudo[295326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:59 np0005541914.localdomain sudo[295326]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:59 np0005541914.localdomain sudo[295326]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:59 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:55:59 np0005541914.localdomain sudo[295344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:55:59 np0005541914.localdomain sudo[295344]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:59 np0005541914.localdomain sudo[295344]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:59 np0005541914.localdomain sudo[295378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:55:59 np0005541914.localdomain sudo[295378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:59 np0005541914.localdomain sudo[295378]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:59 np0005541914.localdomain sudo[295396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:55:59 np0005541914.localdomain sudo[295396]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:59 np0005541914.localdomain sudo[295396]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:59 np0005541914.localdomain sudo[295414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain sudo[295414]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:59 np0005541914.localdomain sudo[295414]: pam_unix(sudo:session): session closed for user root
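The sudo sequence above is cephadm pushing /etc/ceph/ceph.conf to this host: stage the file under a per-cluster temp directory, fix ownership and mode, then move it into place so the destination path never holds a half-written config. A minimal sketch of the same write-then-rename idea (the helper name and inline content are illustrative, not cephadm's actual code):

```python
import os
import tempfile

def write_conf_atomically(content: str, dest: str, mode: int = 0o644,
                          uid: int = 0, gid: int = 0) -> None:
    """Stage a file next to its destination, then rename it into place.

    Mirrors the mkdir/touch/chown/chmod/mv sequence in the journal above:
    readers of `dest` only ever see a complete file.
    """
    dest_dir = os.path.dirname(dest)
    os.makedirs(dest_dir, exist_ok=True)              # /bin/mkdir -p /etc/ceph
    fd, tmp_path = tempfile.mkstemp(dir=dest_dir, suffix=".new")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)                           # write ceph.conf.new
        os.chown(tmp_path, uid, gid)                   # /bin/chown -R 0:0 ...
        os.chmod(tmp_path, mode)                       # /bin/chmod 644 ...
        os.replace(tmp_path, dest)                     # /bin/mv ceph.conf.new /etc/ceph/ceph.conf
    except BaseException:
        os.unlink(tmp_path)
        raise

# Example (run as root on a test host):
# write_conf_atomically("[global]\nfsid = c7c8e171-a193-56fb-95fa-8879fcfa7074\n",
#                       "/etc/ceph/ceph.conf")
```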
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain sudo[295432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain sudo[295432]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:59 np0005541914.localdomain sudo[295432]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain sudo[295450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:55:59 np0005541914.localdomain sudo[295450]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:59 np0005541914.localdomain sudo[295450]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:55:59 np0005541914.localdomain sudo[295468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:55:59 np0005541914.localdomain sudo[295468]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:59 np0005541914.localdomain sudo[295468]: pam_unix(sudo:session): session closed for user root
Dec 02 09:55:59 np0005541914.localdomain sudo[295486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:55:59 np0005541914.localdomain sudo[295486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:55:59 np0005541914.localdomain sudo[295486]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:00 np0005541914.localdomain sudo[295504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:56:00 np0005541914.localdomain sudo[295504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:00 np0005541914.localdomain sudo[295504]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: from='client.44243 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005541910.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain"} : dispatch
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain"} : dispatch
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain"}]': finished
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: Removed host np0005541910.localdomain
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: executing refresh((['np0005541910.localdomain', 'np0005541911.localdomain', 'np0005541912.localdomain', 'np0005541913.localdomain', 'np0005541914.localdomain'],)) failed.
                                                           Traceback (most recent call last):
                                                             File "/usr/share/ceph/mgr/cephadm/utils.py", line 94, in do_work
                                                               return f(*arg)
                                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 317, in refresh
                                                               and not self.mgr.inventory.has_label(host, SpecialHostLabels.NO_MEMORY_AUTOTUNE)
                                                             File "/usr/share/ceph/mgr/cephadm/inventory.py", line 253, in has_label
                                                               host = self._get_stored_name(host)
                                                             File "/usr/share/ceph/mgr/cephadm/inventory.py", line 181, in _get_stored_name
                                                               self.assert_host(host)
                                                             File "/usr/share/ceph/mgr/cephadm/inventory.py", line 209, in assert_host
                                                               raise OrchestratorError('host %s does not exist' % host)
                                                           orchestrator._interface.OrchestratorError: host np0005541910.localdomain does not exist
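This traceback is the serve loop's refresh() still iterating a host list that includes np0005541910.localdomain even though `orch host rm --force` has just deleted it from the inventory, so has_label() raises OrchestratorError inside the mgr; the next refresh uses the updated list. A read-only diagnostic sketch (not mgr code) for confirming the host is actually gone from cephadm's inventory, using the real `ceph orch host ls` command:

```python
import json
import subprocess

def host_in_orch_inventory(hostname: str) -> bool:
    """Return True if cephadm still tracks `hostname` in `ceph orch host ls`."""
    out = subprocess.run(
        ["ceph", "orch", "host", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return any(h.get("hostname") == hostname for h in json.loads(out))

# e.g. host_in_orch_inventory("np0005541910.localdomain") -> False once the removal has settled
```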
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:00 np0005541914.localdomain sudo[295538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:56:00 np0005541914.localdomain sudo[295538]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:00 np0005541914.localdomain sudo[295538]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:00 np0005541914.localdomain sudo[295556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:56:00 np0005541914.localdomain sudo[295556]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:00 np0005541914.localdomain sudo[295556]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:00 np0005541914.localdomain sudo[295574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:00 np0005541914.localdomain sudo[295574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:00 np0005541914.localdomain sudo[295574]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
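The config-key set commands above are cephadm persisting its per-host and device inventory in the mon config-key store under keys like mgr/cephadm/host.<fqdn> and mgr/cephadm/host.<fqdn>.devices.0; the earlier removal correspondingly deleted mgr/cephadm/host.np0005541910.localdomain. One of these records can be inspected with the real `ceph config-key get` command (read-only sketch; the stored value is JSON maintained by the mgr):

```python
import json
import subprocess

def dump_cephadm_host_key(fqdn: str) -> dict:
    """Fetch cephadm's stored per-host record from the mon config-key store."""
    out = subprocess.run(
        ["ceph", "config-key", "get", f"mgr/cephadm/host.{fqdn}"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

# e.g. dump_cephadm_host_key("np0005541913.localdomain")
```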
Dec 02 09:56:01 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 9f05625e-c4a5-4cd1-a185-eda0536a7c71 (Updating node-proxy deployment (+4 -> 4))
Dec 02 09:56:01 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 9f05625e-c4a5-4cd1-a185-eda0536a7c71 (Updating node-proxy deployment (+4 -> 4))
Dec 02 09:56:01 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 9f05625e-c4a5-4cd1-a185-eda0536a7c71 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 02 09:56:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:01 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:01 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 09:56:01 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:56:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:01 np0005541914.localdomain sudo[295592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:56:02 np0005541914.localdomain sudo[295592]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:02 np0005541914.localdomain sudo[295592]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:02 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 09:56:02 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 09:56:02 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:02 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:56:02 np0005541914.localdomain ceph-mon[288526]: pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:02 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:56:03.166 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:56:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:56:03.167 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:56:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:56:03.167 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:56:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:56:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:56:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:56:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 09:56:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:56:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19174 "" "Go-http-client/1.1"
Dec 02 09:56:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:04 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 09:56:04 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/727099599' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 09:56:04 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 09:56:04 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/727099599' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 09:56:04 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:56:04 np0005541914.localdomain ceph-mon[288526]: pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:04 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.32:0/727099599' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 09:56:04 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.32:0/727099599' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 09:56:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:56:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:56:05 np0005541914.localdomain systemd[1]: tmp-crun.YGSfSG.mount: Deactivated successfully.
Dec 02 09:56:05 np0005541914.localdomain podman[295611]: 2025-12-02 09:56:05.076967121 +0000 UTC m=+0.079569639 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, maintainer=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git)
Dec 02 09:56:05 np0005541914.localdomain systemd[1]: tmp-crun.5Nif2E.mount: Deactivated successfully.
Dec 02 09:56:05 np0005541914.localdomain podman[295610]: 2025-12-02 09:56:05.127625756 +0000 UTC m=+0.129717438 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:56:05 np0005541914.localdomain podman[295610]: 2025-12-02 09:56:05.13561194 +0000 UTC m=+0.137703622 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:56:05 np0005541914.localdomain podman[295611]: 2025-12-02 09:56:05.144787009 +0000 UTC m=+0.147389567 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, version=9.6, name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:56:05 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:56:05 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:56:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:06 np0005541914.localdomain ceph-mon[288526]: pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.44259 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:56:07 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Saving service mon spec with placement label:mon
Dec 02 09:56:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Dec 02 09:56:07 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 02 09:56:07 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:07 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:07 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 09:56:07 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:56:07 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 09:56:07 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev f93cfb42-6db3-45f2-9d32-013716e483b0 (Updating node-proxy deployment (+4 -> 4))
Dec 02 09:56:07 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev f93cfb42-6db3-45f2-9d32-013716e483b0 (Updating node-proxy deployment (+4 -> 4))
Dec 02 09:56:07 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event f93cfb42-6db3-45f2-9d32-013716e483b0 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 02 09:56:07 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 09:56:07 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:56:07 np0005541914.localdomain sudo[295654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:56:07 np0005541914.localdomain sudo[295654]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:07 np0005541914.localdomain sudo[295654]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:08 np0005541914.localdomain ceph-mon[288526]: from='client.44259 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:56:08 np0005541914.localdomain ceph-mon[288526]: Saving service mon spec with placement label:mon
Dec 02 09:56:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:56:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:08 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:56:08 np0005541914.localdomain ceph-mon[288526]: pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34329 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541913", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:56:09 np0005541914.localdomain sshd[295672]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: from='client.34329 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541913", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_09:56:09
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['images', 'backups', 'manila_data', 'volumes', 'vms', '.mgr', 'manila_metadata']
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32)
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.1810441094360693e-06 of space, bias 4.0, pg target 0.001741927228736274 quantized to 16 (current 16)
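The pg_autoscaler lines above encode its sizing rule: each pool's raw PG target is roughly its share of used space times its bias times the root's PG budget, which under the defaults assumed here (mon_target_pg_per_osd=100, 6 OSDs, replicated size 3) is 100 × 6 / 3 = 200, and the result is then rounded to a power of two and only applied when it drifts far from the current pg_num. A rough check of the 'vms' line under that assumption:

```python
# Rough reconstruction of one pg_autoscaler line (assumed formula and defaults:
# mon_target_pg_per_osd=100, 6 OSDs, replicated size 3 -> per-root budget of 200 PGs).
capacity_ratio = 0.0033244564838079286   # "using ... of space" for pool 'vms'
bias = 1.0
pg_budget = 100 * 6 / 3                  # = 200.0

raw_target = capacity_ratio * bias * pg_budget
print(raw_target)                        # ~0.66489, matching "pg target 0.6648912967615858"
# The autoscaler then quantizes to a power of two and leaves pg_num alone unless the
# target is far from the current value, hence "quantized to 32 (current 32)".
```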
Dec 02 09:56:09 np0005541914.localdomain sshd[295672]: Invalid user janice from 34.78.29.97 port 39724
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.26864 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005541913"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Remove daemons mon.np0005541913
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005541913
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "quorum_status"} : dispatch
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005541913: new quorum should be ['np0005541911', 'np0005541914', 'np0005541912'] (from ['np0005541911', 'np0005541914', 'np0005541912'])
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005541913: new quorum should be ['np0005541911', 'np0005541914', 'np0005541912'] (from ['np0005541911', 'np0005541914', 'np0005541912'])
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005541913 from monmap...
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removing monitor np0005541913 from monmap...
Dec 02 09:56:09 np0005541914.localdomain sshd[295672]: Received disconnect from 34.78.29.97 port 39724:11: Bye Bye [preauth]
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e10 handle_command mon_command({"prefix": "mon rm", "name": "np0005541913"} v 0)
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon rm", "name": "np0005541913"} : dispatch
Dec 02 09:56:09 np0005541914.localdomain sshd[295672]: Disconnected from invalid user janice 34.78.29.97 port 39724 [preauth]
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005541913 from np0005541913.localdomain -- ports []
Dec 02 09:56:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005541913 from np0005541913.localdomain -- ports []
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(probing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541911"} v 0)
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(probing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541912"} v 0)
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:56:09 np0005541914.localdomain podman[295674]: 2025-12-02 09:56:09.868283097 +0000 UTC m=+0.078861686 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(probing) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: paxos.1).electionLogic(44) init, last seen epoch 44
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(electing) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(electing) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:56:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:56:09 np0005541914.localdomain podman[295674]: 2025-12-02 09:56:09.906924996 +0000 UTC m=+0.117503595 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec 02 09:56:09 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: from='client.26864 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005541913"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: Remove daemons mon.np0005541913
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: Safe to remove mon.np0005541913: new quorum should be ['np0005541911', 'np0005541914', 'np0005541912'] (from ['np0005541911', 'np0005541914', 'np0005541912'])
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: Removing monitor np0005541913 from monmap...
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: Removing daemon mon.np0005541913 from np0005541913.localdomain -- ports []
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 is new leader, mons np0005541911,np0005541914,np0005541912 in quorum (ranks 0,1,2)
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: monmap epoch 11
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:56:09.850960+0000
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: osdmap e87: 6 total, 6 up, 6 in
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: mgrmap e24: np0005541914.lljzmk(active, since 60s), standbys: np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541911.adcgiw
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]:     stray daemon mgr.np0005541909.kfesnk on host np0005541909.localdomain not managed by cephadm
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 02 09:56:10 np0005541914.localdomain ceph-mon[288526]:     stray host np0005541909.localdomain has 1 stray daemons: ['mgr.np0005541909.kfesnk']
Dec 02 09:56:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:56:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:56:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:56:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:56:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:56:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:56:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:56:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:56:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:56:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:56:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:56:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:56:12 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 09:56:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 09:56:12 np0005541914.localdomain ceph-mon[288526]: pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:12 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:13 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:13 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 09:56:13 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:56:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:13 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:13 np0005541914.localdomain sudo[295691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:56:13 np0005541914.localdomain sudo[295691]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:13 np0005541914.localdomain sudo[295691]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:13 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:56:13 np0005541914.localdomain sudo[295709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:56:13 np0005541914.localdomain sudo[295709]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:13 np0005541914.localdomain sudo[295709]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain sudo[295727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:56:14 np0005541914.localdomain sudo[295727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295727]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain sudo[295745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:56:14 np0005541914.localdomain sudo[295745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295745]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain sudo[295763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:56:14 np0005541914.localdomain sudo[295763]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295763]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain sudo[295797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:56:14 np0005541914.localdomain sudo[295797]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295797]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain sudo[295815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:56:14 np0005541914.localdomain sudo[295815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295815]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain sudo[295833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:56:14 np0005541914.localdomain sudo[295833]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295833]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:14 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:14 np0005541914.localdomain sudo[295851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:56:14 np0005541914.localdomain sudo[295851]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295851]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:14 np0005541914.localdomain sudo[295869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:56:14 np0005541914.localdomain sudo[295869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295869]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:56:14 np0005541914.localdomain sudo[295887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:56:14 np0005541914.localdomain sudo[295887]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295887]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:14 np0005541914.localdomain sudo[295905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:56:14 np0005541914.localdomain sudo[295905]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295905]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain sudo[295923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:56:14 np0005541914.localdomain sudo[295923]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295923]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain sudo[295957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:56:14 np0005541914.localdomain sudo[295957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295957]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain sudo[295975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:56:14 np0005541914.localdomain sudo[295975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295975]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain sudo[295993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:14 np0005541914.localdomain sudo[295993]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:14 np0005541914.localdomain sudo[295993]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:14 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:56:14 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:56:14 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 09:56:15 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev e3273f04-a1c0-4cc6-9dda-2febd6ff084f (Updating node-proxy deployment (+4 -> 4))
Dec 02 09:56:15 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev e3273f04-a1c0-4cc6-9dda-2febd6ff084f (Updating node-proxy deployment (+4 -> 4))
Dec 02 09:56:15 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event e3273f04-a1c0-4cc6-9dda-2febd6ff084f (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.438 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:56:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:56:15 np0005541914.localdomain sudo[296011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:56:15 np0005541914.localdomain sudo[296011]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:15 np0005541914.localdomain sudo[296011]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:15 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 02 09:56:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:15 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:15 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 02 09:56:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 02 09:56:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541911.adcgiw", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:56:16 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541911 (monmap changed)...
Dec 02 09:56:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541911 (monmap changed)...
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:16 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:16 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 02 09:56:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541911.adcgiw (monmap changed)...
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541911.adcgiw on np0005541911.localdomain
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541911.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:17 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:56:17 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541912 (monmap changed)...
Dec 02 09:56:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541912 (monmap changed)...
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:17 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:17 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 02 09:56:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 02 09:56:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541911 (monmap changed)...
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541911 on np0005541911.localdomain
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:56:18 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Dec 02 09:56:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:18 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:18 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:56:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.2 (monmap changed)...
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:56:19 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Dec 02 09:56:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:19 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:19 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:56:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:56:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.5 (monmap changed)...
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:20 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 02 09:56:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:20 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:20 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 02 09:56:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:56:21 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 02 09:56:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:21 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:21 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 02 09:56:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 02 09:56:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:56:22 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541913 (monmap changed)...
Dec 02 09:56:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541913 (monmap changed)...
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:22 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 02 09:56:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.26871 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005541913.localdomain:172.18.0.104", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:22 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:22 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005541913 on np0005541913.localdomain
Dec 02 09:56:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005541913 on np0005541913.localdomain
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:23 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec 02 09:56:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:23 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:56:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:56:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: from='client.26871 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005541913.localdomain:172.18.0.104", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: Deploying daemon mon.np0005541913 on np0005541913.localdomain
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:56:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:56:24 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.0 (monmap changed)...
Dec 02 09:56:24 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:56:24 np0005541914.localdomain ceph-mon[288526]: pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:56:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:56:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:56:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:56:25 np0005541914.localdomain podman[296029]: 2025-12-02 09:56:25.081721397 +0000 UTC m=+0.080652761 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:56:25 np0005541914.localdomain podman[296029]: 2025-12-02 09:56:25.168887136 +0000 UTC m=+0.167818540 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Dec 02 09:56:25 np0005541914.localdomain systemd[1]: tmp-crun.hmKeQO.mount: Deactivated successfully.
Dec 02 09:56:25 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:56:25 np0005541914.localdomain podman[296031]: 2025-12-02 09:56:25.193279021 +0000 UTC m=+0.184003725 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 09:56:25 np0005541914.localdomain podman[296037]: 2025-12-02 09:56:25.121933095 +0000 UTC m=+0.113043571 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Dec 02 09:56:25 np0005541914.localdomain podman[296031]: 2025-12-02 09:56:25.232792316 +0000 UTC m=+0.223517050 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 09:56:25 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:56:25 np0005541914.localdomain podman[296030]: 2025-12-02 09:56:25.270046602 +0000 UTC m=+0.263915682 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:56:25 np0005541914.localdomain podman[296030]: 2025-12-02 09:56:25.279851462 +0000 UTC m=+0.273720572 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:56:25 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:56:25 np0005541914.localdomain podman[296037]: 2025-12-02 09:56:25.308947639 +0000 UTC m=+0.300058125 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:56:25 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:56:25 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:25 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:25 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:25 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:26 np0005541914.localdomain systemd[1]: tmp-crun.liInR3.mount: Deactivated successfully.
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:26 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)...
Dec 02 09:56:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)...
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:26 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:56:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:26 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:26 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:26 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:27 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:56:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:27 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:56:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:56:27 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:27 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.3 (monmap changed)...
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:28 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:56:29 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:29 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:29 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:29 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:29 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 02 09:56:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:29 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:29 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:56:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:56:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:30 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005541914 (monmap changed)...
Dec 02 09:56:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005541914 (monmap changed)...
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:30 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 02 09:56:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:30 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:30 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:30 np0005541914.localdomain sudo[296113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:56:30 np0005541914.localdomain sudo[296113]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:30 np0005541914.localdomain sudo[296113]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:30 np0005541914.localdomain sudo[296131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:56:30 np0005541914.localdomain sudo[296131]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:31 np0005541914.localdomain podman[296167]: 
Dec 02 09:56:31 np0005541914.localdomain podman[296167]: 2025-12-02 09:56:31.222302285 +0000 UTC m=+0.089318206 container create 45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_shockley, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, distribution-scope=public, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, io.buildah.version=1.41.4)
Dec 02 09:56:31 np0005541914.localdomain podman[296167]: 2025-12-02 09:56:31.162000676 +0000 UTC m=+0.029016617 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:56:31 np0005541914.localdomain systemd[1]: Started libpod-conmon-45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3.scope.
Dec 02 09:56:31 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:56:31 np0005541914.localdomain podman[296167]: 2025-12-02 09:56:31.45292866 +0000 UTC m=+0.319944541 container init 45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_shockley, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, architecture=x86_64, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., release=1763362218, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, version=7)
Dec 02 09:56:31 np0005541914.localdomain podman[296167]: 2025-12-02 09:56:31.464911327 +0000 UTC m=+0.331927258 container start 45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_shockley, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, ceph=True, GIT_BRANCH=main, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, vcs-type=git, distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:56:31 np0005541914.localdomain podman[296167]: 2025-12-02 09:56:31.465294198 +0000 UTC m=+0.332310169 container attach 45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_shockley, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1763362218, GIT_CLEAN=True, RELEASE=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_BRANCH=main, version=7, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:56:31 np0005541914.localdomain clever_shockley[296182]: 167 167
Dec 02 09:56:31 np0005541914.localdomain systemd[1]: libpod-45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3.scope: Deactivated successfully.
Dec 02 09:56:31 np0005541914.localdomain podman[296167]: 2025-12-02 09:56:31.471565099 +0000 UTC m=+0.338581020 container died 45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_shockley, ceph=True, io.openshift.expose-services=, version=7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, name=rhceph, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=rhceph-container, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main)
Dec 02 09:56:31 np0005541914.localdomain podman[296187]: 2025-12-02 09:56:31.558328626 +0000 UTC m=+0.076977229 container remove 45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=clever_shockley, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, GIT_CLEAN=True, release=1763362218, build-date=2025-11-26T19:44:28Z)
Dec 02 09:56:31 np0005541914.localdomain systemd[1]: libpod-conmon-45faec8b8d6cd827755e5232c2bf7c3cf1651589640bc7e6900bd54a6d8393b3.scope: Deactivated successfully.
Dec 02 09:56:31 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:31 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:31 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:31 np0005541914.localdomain sudo[296131]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:56:31 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541914 (monmap changed)...
Dec 02 09:56:31 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 02 09:56:31 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:56:31 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec 02 09:56:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec 02 09:56:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 02 09:56:31 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 02 09:56:31 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:31 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:31 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:56:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:56:31 np0005541914.localdomain sudo[296203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:56:31 np0005541914.localdomain sudo[296203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:31 np0005541914.localdomain sudo[296203]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:31 np0005541914.localdomain sudo[296221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:56:31 np0005541914.localdomain sudo[296221]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:32 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-6c5524597fd87619ceaca8f45ad57a3108c2855219448df246d81f615ed5ab3a-merged.mount: Deactivated successfully.
Dec 02 09:56:32 np0005541914.localdomain podman[296257]: 
Dec 02 09:56:32 np0005541914.localdomain podman[296257]: 2025-12-02 09:56:32.275975269 +0000 UTC m=+0.058113133 container create 06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_brahmagupta, io.buildah.version=1.41.4, distribution-scope=public, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-type=git, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, release=1763362218, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph)
Dec 02 09:56:32 np0005541914.localdomain systemd[1]: Started libpod-conmon-06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee.scope.
Dec 02 09:56:32 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:56:32 np0005541914.localdomain podman[296257]: 2025-12-02 09:56:32.348358667 +0000 UTC m=+0.130496531 container init 06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_brahmagupta, io.buildah.version=1.41.4, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vendor=Red Hat, Inc., ceph=True, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, release=1763362218, GIT_BRANCH=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public)
Dec 02 09:56:32 np0005541914.localdomain podman[296257]: 2025-12-02 09:56:32.253611797 +0000 UTC m=+0.035749681 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:56:32 np0005541914.localdomain podman[296257]: 2025-12-02 09:56:32.359507587 +0000 UTC m=+0.141645451 container start 06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_brahmagupta, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, release=1763362218, architecture=x86_64, vcs-type=git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, com.redhat.component=rhceph-container, name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:56:32 np0005541914.localdomain podman[296257]: 2025-12-02 09:56:32.360613411 +0000 UTC m=+0.142751305 container attach 06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_brahmagupta, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, io.buildah.version=1.41.4, RELEASE=main, build-date=2025-11-26T19:44:28Z, name=rhceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git)
Dec 02 09:56:32 np0005541914.localdomain vigilant_brahmagupta[296272]: 167 167
Dec 02 09:56:32 np0005541914.localdomain systemd[1]: libpod-06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee.scope: Deactivated successfully.
Dec 02 09:56:32 np0005541914.localdomain podman[296257]: 2025-12-02 09:56:32.364044386 +0000 UTC m=+0.146182240 container died 06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_brahmagupta, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 09:56:32 np0005541914.localdomain podman[296277]: 2025-12-02 09:56:32.437304771 +0000 UTC m=+0.068914174 container remove 06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_brahmagupta, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.buildah.version=1.41.4, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, release=1763362218, RELEASE=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 09:56:32 np0005541914.localdomain systemd[1]: libpod-conmon-06abe1f5837b6f381ed3a4f9d3969d5c109e7f799b0573ec2ed170f1ff8efdee.scope: Deactivated successfully.
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:32 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:32 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:32 np0005541914.localdomain sudo[296221]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:56:32 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 02 09:56:32 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:32 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:56:32 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.1 (monmap changed)...
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 02 09:56:32 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:32 np0005541914.localdomain sudo[296300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:56:32 np0005541914.localdomain sudo[296300]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:32 np0005541914.localdomain sudo[296300]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:32 np0005541914.localdomain sudo[296318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:56:32 np0005541914.localdomain sudo[296318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:33 np0005541914.localdomain podman[296353]: 2025-12-02 09:56:33.203038091 +0000 UTC m=+0.059343172 container create 77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_jemison, io.openshift.expose-services=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, version=7, build-date=2025-11-26T19:44:28Z)
Dec 02 09:56:33 np0005541914.localdomain systemd[1]: tmp-crun.AbV84k.mount: Deactivated successfully.
Dec 02 09:56:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-9295b727f7d480e619a54cfb8191e82d04472c157df76542e8738afd771dafa7-merged.mount: Deactivated successfully.
Dec 02 09:56:33 np0005541914.localdomain systemd[1]: Started libpod-conmon-77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f.scope.
Dec 02 09:56:33 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:56:33 np0005541914.localdomain podman[296353]: 2025-12-02 09:56:33.267371273 +0000 UTC m=+0.123676364 container init 77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_jemison, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, version=7, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:56:33 np0005541914.localdomain podman[296353]: 2025-12-02 09:56:33.170990983 +0000 UTC m=+0.027296094 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:56:33 np0005541914.localdomain brave_jemison[296368]: 167 167
Dec 02 09:56:33 np0005541914.localdomain systemd[1]: libpod-77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f.scope: Deactivated successfully.
Dec 02 09:56:33 np0005541914.localdomain podman[296353]: 2025-12-02 09:56:33.284746443 +0000 UTC m=+0.141051514 container start 77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_jemison, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, release=1763362218, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, architecture=x86_64, ceph=True, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public)
Dec 02 09:56:33 np0005541914.localdomain podman[296353]: 2025-12-02 09:56:33.287143016 +0000 UTC m=+0.143448127 container attach 77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_jemison, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, GIT_CLEAN=True, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:56:33 np0005541914.localdomain podman[296353]: 2025-12-02 09:56:33.290104637 +0000 UTC m=+0.146409738 container died 77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_jemison, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, RELEASE=main, ceph=True)
Dec 02 09:56:33 np0005541914.localdomain podman[296373]: 2025-12-02 09:56:33.368246301 +0000 UTC m=+0.081046564 container remove 77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=brave_jemison, vendor=Red Hat, Inc., version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.buildah.version=1.41.4)
Dec 02 09:56:33 np0005541914.localdomain systemd[1]: libpod-conmon-77a39728ea91623728840c867e914f93922f464ce065c38a00e16d5a14e9ea3f.scope: Deactivated successfully.
Dec 02 09:56:33 np0005541914.localdomain sudo[296318]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:56:33 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 02 09:56:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:33 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 02 09:56:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 02 09:56:33 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:33 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:33 np0005541914.localdomain sudo[296397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:56:33 np0005541914.localdomain sudo[296397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:33 np0005541914.localdomain sudo[296397]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:56:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:56:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:56:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 09:56:33 np0005541914.localdomain sudo[296415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:56:33 np0005541914.localdomain sudo[296415]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:56:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19187 "" "Go-http-client/1.1"
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.4 (monmap changed)...
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:33 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:34 np0005541914.localdomain podman[296450]: 2025-12-02 09:56:34.095246199 +0000 UTC m=+0.073991609 container create 4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_rosalind, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, vcs-type=git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:56:34 np0005541914.localdomain systemd[1]: Started libpod-conmon-4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd.scope.
Dec 02 09:56:34 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:56:34 np0005541914.localdomain podman[296450]: 2025-12-02 09:56:34.160103178 +0000 UTC m=+0.138848578 container init 4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_rosalind, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-11-26T19:44:28Z, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph)
Dec 02 09:56:34 np0005541914.localdomain podman[296450]: 2025-12-02 09:56:34.065077129 +0000 UTC m=+0.043822549 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:56:34 np0005541914.localdomain hardcore_rosalind[296465]: 167 167
Dec 02 09:56:34 np0005541914.localdomain systemd[1]: libpod-4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd.scope: Deactivated successfully.
Dec 02 09:56:34 np0005541914.localdomain podman[296450]: 2025-12-02 09:56:34.17264258 +0000 UTC m=+0.151387990 container start 4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_rosalind, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, build-date=2025-11-26T19:44:28Z, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vendor=Red Hat, Inc.)
Dec 02 09:56:34 np0005541914.localdomain podman[296450]: 2025-12-02 09:56:34.173243609 +0000 UTC m=+0.151989009 container attach 4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_rosalind, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:56:34 np0005541914.localdomain podman[296450]: 2025-12-02 09:56:34.175294311 +0000 UTC m=+0.154039721 container died 4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_rosalind, io.buildah.version=1.41.4, name=rhceph, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.component=rhceph-container, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:56:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-7734eeb04191c15cc7274dfb084d8566c6ce911f27e13b565b97259b4fff01b8-merged.mount: Deactivated successfully.
Dec 02 09:56:34 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c067362ddabcb46a05f9fd8754c83acb91112f35b7a0bce279ac39112d6e1fb3-merged.mount: Deactivated successfully.
Dec 02 09:56:34 np0005541914.localdomain podman[296470]: 2025-12-02 09:56:34.270539876 +0000 UTC m=+0.087787678 container remove 4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hardcore_rosalind, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, RELEASE=main, release=1763362218, com.redhat.component=rhceph-container, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vendor=Red Hat, Inc., name=rhceph, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7)
Dec 02 09:56:34 np0005541914.localdomain systemd[1]: libpod-conmon-4e7a61ec3a928d71222d7e423194e4ce9d1418345a52770c83c1b8a763f7a0bd.scope: Deactivated successfully.
Dec 02 09:56:34 np0005541914.localdomain sudo[296415]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:56:34 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 02 09:56:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:34 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:56:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:56:34 np0005541914.localdomain sudo[296486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:56:34 np0005541914.localdomain sudo[296486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:34 np0005541914.localdomain sudo[296486]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:34 np0005541914.localdomain sudo[296504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:56:34 np0005541914.localdomain sudo[296504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:56:34 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:34 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:34 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:34 np0005541914.localdomain podman[296540]: 2025-12-02 09:56:34.898631407 +0000 UTC m=+0.059854667 container create 857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_wright, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, architecture=x86_64, name=rhceph, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, RELEASE=main, ceph=True, com.redhat.component=rhceph-container)
Dec 02 09:56:34 np0005541914.localdomain systemd[1]: Started libpod-conmon-857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1.scope.
Dec 02 09:56:34 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:56:34 np0005541914.localdomain podman[296540]: 2025-12-02 09:56:34.947498018 +0000 UTC m=+0.108721308 container init 857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_wright, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, version=7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:56:34 np0005541914.localdomain podman[296540]: 2025-12-02 09:56:34.957134852 +0000 UTC m=+0.118358142 container start 857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_wright, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, name=rhceph, ceph=True, com.redhat.component=rhceph-container, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.buildah.version=1.41.4, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 02 09:56:34 np0005541914.localdomain podman[296540]: 2025-12-02 09:56:34.957425631 +0000 UTC m=+0.118648921 container attach 857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_wright, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, vcs-type=git, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, release=1763362218, GIT_CLEAN=True, io.openshift.expose-services=)
Dec 02 09:56:34 np0005541914.localdomain sad_wright[296555]: 167 167
Dec 02 09:56:34 np0005541914.localdomain systemd[1]: libpod-857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1.scope: Deactivated successfully.
Dec 02 09:56:34 np0005541914.localdomain podman[296540]: 2025-12-02 09:56:34.959550116 +0000 UTC m=+0.120773436 container died 857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_wright, name=rhceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, release=1763362218, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:56:34 np0005541914.localdomain podman[296540]: 2025-12-02 09:56:34.869961463 +0000 UTC m=+0.031184743 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:56:35 np0005541914.localdomain podman[296560]: 2025-12-02 09:56:35.044483507 +0000 UTC m=+0.073860364 container remove 857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_wright, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-type=git, RELEASE=main, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container)
Dec 02 09:56:35 np0005541914.localdomain systemd[1]: libpod-conmon-857cdcebf8307e98935fa138402323144bfc21cfb84ffd135fd92d754fedf6e1.scope: Deactivated successfully.
Dec 02 09:56:35 np0005541914.localdomain sudo[296504]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:56:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:56:35 np0005541914.localdomain sudo[296577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:56:35 np0005541914.localdomain sudo[296577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:56:35 np0005541914.localdomain sudo[296577]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:56:35 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-6112c5ca23687b0135b492a4dbf9ae9cd1a1510812778a3381ff8b6380fa9154-merged.mount: Deactivated successfully.
Dec 02 09:56:35 np0005541914.localdomain podman[296596]: 2025-12-02 09:56:35.301943521 +0000 UTC m=+0.066090627 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.tags=minimal rhel9, version=9.6, io.buildah.version=1.33.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 02 09:56:35 np0005541914.localdomain sudo[296607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:56:35 np0005541914.localdomain sudo[296607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:35 np0005541914.localdomain podman[296596]: 2025-12-02 09:56:35.312128532 +0000 UTC m=+0.076275668 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, version=9.6, architecture=x86_64, io.openshift.expose-services=)
Dec 02 09:56:35 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:56:35 np0005541914.localdomain podman[296594]: 2025-12-02 09:56:35.364722796 +0000 UTC m=+0.130590955 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:56:35 np0005541914.localdomain podman[296594]: 2025-12-02 09:56:35.37271664 +0000 UTC m=+0.138584809 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:56:35 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:56:35 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:35 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:35 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:35 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 02 09:56:35 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:56:35 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:35 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:35 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:35 np0005541914.localdomain sudo[296607]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:36 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:36 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 09:56:36 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev fe6eb9e2-f462-491d-b6e2-31d810d4c7bf (Updating node-proxy deployment (+4 -> 4))
Dec 02 09:56:36 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev fe6eb9e2-f462-491d-b6e2-31d810d4c7bf (Updating node-proxy deployment (+4 -> 4))
Dec 02 09:56:36 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event fe6eb9e2-f462-491d-b6e2-31d810d4c7bf (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
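Editor's note: the pgmap summaries above follow a fixed layout ("pgmap v<N>: <N> pgs: <N> active+clean; <data> data, <used> used, <avail> / <total> avail"). A minimal sketch of extracting those fields from one of these journal lines; the regular expression only covers the format visible here:

    # Sketch: parse the pgmap summary fields out of a journal line like the one above.
    import re

    line = ("pgmap v47: 177 pgs: 177 active+clean; "
            "104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail")

    m = re.search(
        r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: .*; "
        r"(?P<data>[\d.]+ \w+) data, (?P<used>[\d.]+ \w+) used, "
        r"(?P<avail>[\d.]+ \w+) / (?P<total>[\d.]+ \w+) avail",
        line,
    )
    if m:
        print(m.groupdict())
    # {'version': '47', 'pgs': '177', 'data': '104 MiB', 'used': '562 MiB',
    #  'avail': '41 GiB', 'total': '42 GiB'}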
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:56:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:37 np0005541914.localdomain sudo[296687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:56:37 np0005541914.localdomain sudo[296687]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:37 np0005541914.localdomain sudo[296687]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:37 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 09:56:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 02 09:56:37 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/1875286268' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 02 09:56:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 09:56:37 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:37 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:37 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:56:37 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.200:0/1875286268' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 02 09:56:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:38 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:38 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.34342 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:56:38 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Reconfig service osd.default_drive_group
Dec 02 09:56:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 60afd78b-f32e-4ca5-8a7b-9f2c5b6edcb9 (Updating node-proxy deployment (+4 -> 4))
Dec 02 09:56:38 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 60afd78b-f32e-4ca5-8a7b-9f2c5b6edcb9 (Updating node-proxy deployment (+4 -> 4))
Dec 02 09:56:38 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 60afd78b-f32e-4ca5-8a7b-9f2c5b6edcb9 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: pgmap v48: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:56:38 np0005541914.localdomain sudo[296705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:56:38 np0005541914.localdomain sudo[296705]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:38 np0005541914.localdomain sudo[296705]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:38 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:56:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:56:39 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:39 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 09:56:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 09:56:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 09:56:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f5c35926df0>)]
Dec 02 09:56:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 02 09:56:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 09:56:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f5c35926dc0>)]
Dec 02 09:56:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 02 09:56:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: from='client.34342 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: Reconfig service osd.default_drive_group
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:39 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:39 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:56:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:56:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:56:40 np0005541914.localdomain systemd[1]: tmp-crun.UZ9B64.mount: Deactivated successfully.
Dec 02 09:56:40 np0005541914.localdomain podman[296723]: 2025-12-02 09:56:40.075647631 +0000 UTC m=+0.074593506 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 02 09:56:40 np0005541914.localdomain podman[296723]: 2025-12-02 09:56:40.112181886 +0000 UTC m=+0.111127761 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:56:40 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
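Editor's note: the health_status=healthy event above comes from the transient "/usr/bin/podman healthcheck run <container-id>" unit systemd started a few lines earlier, using the healthcheck command defined in the multipathd config_data. A minimal sketch, assuming podman is installed and using the container_name from that config, of running the same check by hand and reading the result from the exit code (0 means healthy):

    # Sketch: run the same healthcheck systemd triggers for the multipathd container.
    # Assumes podman is available; "multipathd" is the container_name logged above.
    import subprocess

    name = "multipathd"
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")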
Dec 02 09:56:40 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:40 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: pgmap v49: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:56:40 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:41 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:56:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:56:41 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:41 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v50: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: mgrmap e25: np0005541914.lljzmk(active, since 91s), standbys: np0005541910.kzipdo, np0005541913.mfesdm, np0005541912.qwddia, np0005541911.adcgiw
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:56:41 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:42 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:42 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:42 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:56:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:56:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:56:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:56:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:56:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:56:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:56:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:56:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:56:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:56:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:56:42 np0005541914.localdomain openstack_network_exporter[241816]: 
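Editor's note: the openstack_network_exporter errors above mean it cannot find the unix control sockets it uses to query ovn-northd and ovsdb-server on this node, which is expected when those daemons are not running locally. A minimal sketch of the kind of pre-check that looks for such control sockets; the run directories and file patterns below are assumptions and would need to match the local OVS/OVN layout:

    # Sketch: look for the control sockets the exporter complains about.
    # The directories and patterns are assumptions; adjust to the local OVS/OVN layout.
    import glob

    candidates = {
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
    }
    for daemon, pattern in candidates.items():
        matches = glob.glob(pattern)
        if matches:
            print(f"{daemon}: control socket at {matches[0]}")
        else:
            print(f"{daemon}: no control socket found ({pattern})")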
Dec 02 09:56:42 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:42 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Dec 02 09:56:42 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 02 09:56:42 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:42 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:42 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:56:42 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:56:42 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 09:56:42 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 09:56:42 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mon.np0005541913 172.18.0.107:0/3224144201; not ready for session (expect reconnect)
Dec 02 09:56:42 np0005541914.localdomain ceph-mgr[287188]: mgr finish mon failed to return metadata for mon.np0005541913: (2) No such file or directory
Dec 02 09:56:42 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 09:56:42 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:42 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:42 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: pgmap v50: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "mgr fail"} v 0)
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='client.? 172.18.0.200:0/219576174' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 02 09:56:43 np0005541914.localdomain sudo[296742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:56:43 np0005541914.localdomain sudo[296742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:43 np0005541914.localdomain sudo[296742]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:43 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 e88: 6 total, 6 up, 6 in
Dec 02 09:56:43 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:43.307+0000 7f5cbd03f640 -1 mgr handle_mgr_map I was active but no longer am
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr handle_mgr_map I was active but no longer am
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn  1: '-n'
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn  2: 'mgr.np0005541914.lljzmk'
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn  3: '-f'
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn  4: '--setuser'
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn  5: 'ceph'
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn  6: '--setgroup'
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn  7: 'ceph'
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn  8: '--default-log-to-file=false'
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn  9: '--default-log-to-journald=true'
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr respawn  exe_path /proc/self/exe
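Editor's note: after the "mgr fail" issued at 09:56:43, this mgr loses the active role ("I was active but no longer am") and respawns by re-exec'ing /proc/self/exe with its original argument vector, which is why python module loading starts over below. A minimal generic sketch of that re-exec pattern; it illustrates the mechanism only and is not ceph-mgr's implementation:

    # Sketch: the generic "respawn via /proc/self/exe" pattern the mgr log shows.
    # Illustrative only, not ceph-mgr's code: replace the running process image
    # with the same executable and the same arguments.
    import os
    import sys

    def respawn():
        # For a Python program, /proc/self/exe resolves to the interpreter;
        # passing sys.argv after argv[0] re-runs the same script with its args.
        os.execv("/proc/self/exe", ["/proc/self/exe"] + sys.argv)  # never returns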
Dec 02 09:56:43 np0005541914.localdomain sshd[292837]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:56:43 np0005541914.localdomain systemd-logind[760]: Session 65 logged out. Waiting for processes to exit.
Dec 02 09:56:43 np0005541914.localdomain sudo[296760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:56:43 np0005541914.localdomain sudo[296760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:56:43 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: ignoring --setuser ceph since I am not root
Dec 02 09:56:43 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: ignoring --setgroup ceph since I am not root
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mgr, pid 2
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: pidfile_write: ignore empty --pid-file
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'alerts'
Dec 02 09:56:43 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:43.503+0000 7fd411515140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'balancer'
Dec 02 09:56:43 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:43.576+0000 7fd411515140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec 02 09:56:43 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'cephadm'
Dec 02 09:56:43 np0005541914.localdomain podman[296818]: 
Dec 02 09:56:43 np0005541914.localdomain podman[296818]: 2025-12-02 09:56:43.872129069 +0000 UTC m=+0.085593542 container create f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_zhukovsky, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, architecture=x86_64, release=1763362218, name=rhceph, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:56:43 np0005541914.localdomain systemd[1]: Started libpod-conmon-f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b.scope.
Dec 02 09:56:43 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:56:43 np0005541914.localdomain podman[296818]: 2025-12-02 09:56:43.832317704 +0000 UTC m=+0.045782207 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:56:43 np0005541914.localdomain podman[296818]: 2025-12-02 09:56:43.935538413 +0000 UTC m=+0.149002886 container init f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_zhukovsky, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, vcs-type=git, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, description=Red Hat Ceph Storage 7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph)
Dec 02 09:56:43 np0005541914.localdomain systemd[1]: tmp-crun.HilKA0.mount: Deactivated successfully.
Dec 02 09:56:43 np0005541914.localdomain podman[296818]: 2025-12-02 09:56:43.949798048 +0000 UTC m=+0.163262521 container start f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_zhukovsky, distribution-scope=public, vendor=Red Hat, Inc., version=7, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, RELEASE=main, architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True, release=1763362218, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, build-date=2025-11-26T19:44:28Z)
Dec 02 09:56:43 np0005541914.localdomain podman[296818]: 2025-12-02 09:56:43.951696616 +0000 UTC m=+0.165161129 container attach f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_zhukovsky, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, io.openshift.expose-services=, ceph=True, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:56:43 np0005541914.localdomain quirky_zhukovsky[296833]: 167 167
Dec 02 09:56:43 np0005541914.localdomain systemd[1]: libpod-f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b.scope: Deactivated successfully.
Dec 02 09:56:43 np0005541914.localdomain podman[296818]: 2025-12-02 09:56:43.956171673 +0000 UTC m=+0.169636156 container died f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_zhukovsky, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc.)
Dec 02 09:56:44 np0005541914.localdomain podman[296838]: 2025-12-02 09:56:44.036566345 +0000 UTC m=+0.076109513 container remove f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_zhukovsky, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, ceph=True, distribution-scope=public, GIT_BRANCH=main, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:56:44 np0005541914.localdomain systemd[1]: libpod-conmon-f873477ea276867d1da29dce8d1b1ce216ef95c8346d75e4728910bd8c83cc5b.scope: Deactivated successfully.
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.17121 172.18.0.108:0/2364182550' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.200:0/219576174' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: Activating manager daemon np0005541910.kzipdo
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: osdmap e88: 6 total, 6 up, 6 in
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: mgrmap e26: np0005541910.kzipdo(active, starting, since 0.0525072s), standbys: np0005541913.mfesdm, np0005541912.qwddia, np0005541911.adcgiw
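Editor's note: the mgrmap epoch bump above (e25 to e26) records the failover triggered by the "mgr fail" command: np0005541914.lljzmk drops out and np0005541910.kzipdo becomes the active mgr. A minimal sketch, assuming the ceph CLI and admin credentials on this host and the field names exposed by "ceph mgr dump", of confirming which mgr is active after such a failover:

    # Sketch: confirm the active mgr after the failover recorded in mgrmap e26.
    # Assumes the ceph CLI and client.admin credentials are available locally.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mgr", "dump", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    mgrmap = json.loads(out.stdout)
    print("epoch:", mgrmap["epoch"])
    print("active:", mgrmap["active_name"])
    print("standbys:", [s["name"] for s in mgrmap.get("standbys", [])])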
Dec 02 09:56:44 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'crash'
Dec 02 09:56:44 np0005541914.localdomain sudo[296760]: pam_unix(sudo:session): session closed for user root
Dec 02 09:56:44 np0005541914.localdomain systemd[1]: session-65.scope: Deactivated successfully.
Dec 02 09:56:44 np0005541914.localdomain systemd[1]: session-65.scope: Consumed 21.680s CPU time.
Dec 02 09:56:44 np0005541914.localdomain systemd-logind[760]: Removed session 65.
Dec 02 09:56:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:44.242+0000 7fd411515140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 02 09:56:44 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec 02 09:56:44 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'dashboard'
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:56:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:44 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'devicehealth'
Dec 02 09:56:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:44.841+0000 7fd411515140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 02 09:56:44 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec 02 09:56:44 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'diskprediction_local'
Dec 02 09:56:44 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d968559f1d6c92a0c06c3c251309c8df174e75f0a8f8455d2bb8a59e80f71988-merged.mount: Deactivated successfully.
Dec 02 09:56:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 02 09:56:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 02 09:56:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]:   from numpy import show_config as show_numpy_config
Dec 02 09:56:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:44.996+0000 7fd411515140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 02 09:56:44 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec 02 09:56:44 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'influx'
Dec 02 09:56:45 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:45.059+0000 7fd411515140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 02 09:56:45 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec 02 09:56:45 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'insights'
Dec 02 09:56:45 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'iostat'
Dec 02 09:56:45 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:45.185+0000 7fd411515140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 02 09:56:45 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec 02 09:56:45 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'k8sevents'
Dec 02 09:56:45 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'localpool'
Dec 02 09:56:45 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'mds_autoscaler'
Dec 02 09:56:45 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'mirroring'
Dec 02 09:56:45 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'nfs'
Dec 02 09:56:45 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:45.977+0000 7fd411515140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 02 09:56:45 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec 02 09:56:45 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'orchestrator'
Dec 02 09:56:46 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.133+0000 7fd411515140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'osd_perf_query'
Dec 02 09:56:46 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.203+0000 7fd411515140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'osd_support'
Dec 02 09:56:46 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.265+0000 7fd411515140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'pg_autoscaler'
Dec 02 09:56:46 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.337+0000 7fd411515140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'progress'
Dec 02 09:56:46 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.399+0000 7fd411515140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'prometheus'
Dec 02 09:56:46 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:46 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.724+0000 7fd411515140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'rbd_support'
Dec 02 09:56:46 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:46.811+0000 7fd411515140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'restful'
Dec 02 09:56:46 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'rgw'
Dec 02 09:56:47 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:47.154+0000 7fd411515140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 02 09:56:47 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec 02 09:56:47 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'rook'
Dec 02 09:56:47 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:47.605+0000 7fd411515140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 02 09:56:47 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec 02 09:56:47 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'selftest'
Dec 02 09:56:47 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:47.668+0000 7fd411515140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 02 09:56:47 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec 02 09:56:47 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'snap_schedule'
Dec 02 09:56:47 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'stats'
Dec 02 09:56:47 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'status'
Dec 02 09:56:47 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:47.872+0000 7fd411515140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec 02 09:56:47 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec 02 09:56:47 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'telegraf'
Dec 02 09:56:47 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:47.930+0000 7fd411515140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 02 09:56:47 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec 02 09:56:47 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'telemetry'
Dec 02 09:56:48 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:48.058+0000 7fd411515140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 02 09:56:48 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec 02 09:56:48 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'test_orchestrator'
Dec 02 09:56:48 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:48.205+0000 7fd411515140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 02 09:56:48 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec 02 09:56:48 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'volumes'
Dec 02 09:56:48 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:48.397+0000 7fd411515140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 02 09:56:48 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec 02 09:56:48 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Loading python module 'zabbix'
Dec 02 09:56:48 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T09:56:48.455+0000 7fd411515140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 02 09:56:48 np0005541914.localdomain ceph-mgr[287188]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec 02 09:56:48 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x561987797600 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0
Dec 02 09:56:48 np0005541914.localdomain ceph-mon[288526]: Standby manager daemon np0005541914.lljzmk started
Dec 02 09:56:48 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:49 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:56:49 np0005541914.localdomain ceph-mon[288526]: mgrmap e27: np0005541910.kzipdo(active, starting, since 5s), standbys: np0005541913.mfesdm, np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:56:50 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.107:0/3031756029' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:56:50 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:50 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:51 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.107:0/128556610' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:56:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:52.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:56:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:52.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:56:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:52.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:56:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:52.639 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:56:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:52.640 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:56:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:52.640 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:56:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:52.641 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:56:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:52.643 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:56:52 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 09:56:53 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2995218045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:56:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:53.078 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:56:53 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.108:0/2995218045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:56:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:53.240 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:56:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:53.241 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=12021MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:56:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:53.242 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:56:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:53.242 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:56:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:53.767 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:56:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:53.768 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:56:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:53.782 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:56:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 09:56:54 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/916855140' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:56:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:54.200 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:56:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:54.204 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:56:54 np0005541914.localdomain systemd[1]: Stopping User Manager for UID 1002...
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Activating special unit Exit the Session...
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Removed slice User Background Tasks Slice.
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Stopped target Main User Target.
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Stopped target Basic System.
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Stopped target Paths.
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Stopped target Sockets.
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Stopped target Timers.
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Closed D-Bus User Message Bus Socket.
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Stopped Create User's Volatile Files and Directories.
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Removed slice User Application Slice.
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Reached target Shutdown.
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Finished Exit the Session.
Dec 02 09:56:54 np0005541914.localdomain systemd[26272]: Reached target Exit the Session.
Dec 02 09:56:54 np0005541914.localdomain systemd[1]: user@1002.service: Deactivated successfully.
Dec 02 09:56:54 np0005541914.localdomain systemd[1]: Stopped User Manager for UID 1002.
Dec 02 09:56:54 np0005541914.localdomain systemd[1]: user@1002.service: Consumed 13.598s CPU time, read 0B from disk, written 7.0K to disk.
Dec 02 09:56:54 np0005541914.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1002...
Dec 02 09:56:54 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.108:0/916855140' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:56:54 np0005541914.localdomain systemd[1]: run-user-1002.mount: Deactivated successfully.
Dec 02 09:56:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:54.262 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:56:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:54.266 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:56:54 np0005541914.localdomain systemd[1]: user-runtime-dir@1002.service: Deactivated successfully.
Dec 02 09:56:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:54.266 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:56:54 np0005541914.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1002.
Dec 02 09:56:54 np0005541914.localdomain systemd[1]: Removed slice User Slice of UID 1002.
Dec 02 09:56:54 np0005541914.localdomain systemd[1]: user-1002.slice: Consumed 4min 22.823s CPU time.
Dec 02 09:56:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:56:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:55.267 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:56:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:55.268 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:56:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:55.268 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:56:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:55.282 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:56:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:55.283 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:56:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:55.284 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:56:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:55.284 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:56:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:55.285 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:56:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:55.285 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:56:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:56:55.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:56:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:56:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:56:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:56:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:56:56 np0005541914.localdomain systemd[1]: tmp-crun.SbiBXd.mount: Deactivated successfully.
Dec 02 09:56:56 np0005541914.localdomain podman[296914]: 2025-12-02 09:56:56.125731984 +0000 UTC m=+0.119624531 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:56:56 np0005541914.localdomain podman[296915]: 2025-12-02 09:56:56.180518575 +0000 UTC m=+0.172327038 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec 02 09:56:56 np0005541914.localdomain podman[296913]: 2025-12-02 09:56:56.09250547 +0000 UTC m=+0.091749091 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:56:56 np0005541914.localdomain podman[296914]: 2025-12-02 09:56:56.210280883 +0000 UTC m=+0.204173470 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:56:56 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:56:56 np0005541914.localdomain podman[296913]: 2025-12-02 09:56:56.224776365 +0000 UTC m=+0.224019926 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 09:56:56 np0005541914.localdomain podman[296922]: 2025-12-02 09:56:56.223997712 +0000 UTC m=+0.214890858 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:56:56 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:56:56 np0005541914.localdomain podman[296915]: 2025-12-02 09:56:56.241586098 +0000 UTC m=+0.233394571 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3)
Dec 02 09:56:56 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:56:56 np0005541914.localdomain podman[296922]: 2025-12-02 09:56:56.308065887 +0000 UTC m=+0.298959083 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:56:56 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:56:56 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.106:0/3993520404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:56:56 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:57 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.106:0/243203575' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:56:58 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:56:59 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:57:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:02 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:57:03.167 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:57:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:57:03.168 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:57:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:57:03.168 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:57:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:57:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:57:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:57:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 09:57:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:57:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19182 "" "Go-http-client/1.1"
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1607212135' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1607212135' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.32:0/1607212135' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.32:0/1607212135' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.505817) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669424505855, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2767, "num_deletes": 253, "total_data_size": 7211164, "memory_usage": 7864776, "flush_reason": "Manual Compaction"}
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669424525858, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 4223005, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16611, "largest_seqno": 19373, "table_properties": {"data_size": 4211723, "index_size": 7019, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 28461, "raw_average_key_size": 22, "raw_value_size": 4187242, "raw_average_value_size": 3297, "num_data_blocks": 307, "num_entries": 1270, "num_filter_entries": 1270, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669340, "oldest_key_time": 1764669340, "file_creation_time": 1764669424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 20094 microseconds, and 6217 cpu microseconds.
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.525906) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 4223005 bytes OK
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.525930) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.527753) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.527769) EVENT_LOG_v1 {"time_micros": 1764669424527765, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.527789) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 7197842, prev total WAL file size 7197842, number of live WAL files 2.
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.529053) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(4124KB)], [27(14MB)]
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669424529118, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 19208552, "oldest_snapshot_seqno": -1}
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 10849 keys, 16102954 bytes, temperature: kUnknown
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669424632040, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 16102954, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16038686, "index_size": 36071, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27141, "raw_key_size": 290928, "raw_average_key_size": 26, "raw_value_size": 15850808, "raw_average_value_size": 1461, "num_data_blocks": 1382, "num_entries": 10849, "num_filter_entries": 10849, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.634166) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 16102954 bytes
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.636412) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.5 rd, 156.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 14.3 +0.0 blob) out(15.4 +0.0 blob), read-write-amplify(8.4) write-amplify(3.8) OK, records in: 11393, records dropped: 544 output_compression: NoCompression
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.636487) EVENT_LOG_v1 {"time_micros": 1764669424636443, "job": 14, "event": "compaction_finished", "compaction_time_micros": 103008, "compaction_time_cpu_micros": 49161, "output_level": 6, "num_output_files": 1, "total_output_size": 16102954, "num_input_records": 11393, "num_output_records": 10849, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669424637475, "job": 14, "event": "table_file_deletion", "file_number": 29}
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669424639984, "job": 14, "event": "table_file_deletion", "file_number": 27}
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.528943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.640111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.640118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.640122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.640125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:57:04.640127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:57:04 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:57:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:57:06 np0005541914.localdomain podman[296999]: 2025-12-02 09:57:06.084302927 +0000 UTC m=+0.081029313 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:57:06 np0005541914.localdomain podman[296999]: 2025-12-02 09:57:06.091271609 +0000 UTC m=+0.087997875 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 09:57:06 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:57:06 np0005541914.localdomain podman[297000]: 2025-12-02 09:57:06.139721857 +0000 UTC m=+0.133089041 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, version=9.6, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 02 09:57:06 np0005541914.localdomain podman[297000]: 2025-12-02 09:57:06.149795355 +0000 UTC m=+0.143162609 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, version=9.6, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, container_name=openstack_network_exporter)
Dec 02 09:57:06 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:57:06 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:08 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:09 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:57:10 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:10 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:57:11 np0005541914.localdomain podman[297042]: 2025-12-02 09:57:11.116492041 +0000 UTC m=+0.122654693 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 02 09:57:11 np0005541914.localdomain podman[297042]: 2025-12-02 09:57:11.129003572 +0000 UTC m=+0.135166214 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:57:11 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:57:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:57:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7800.1 total, 600.0 interval
                                                          Cumulative writes: 5039 writes, 22K keys, 5039 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5039 writes, 750 syncs, 6.72 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 193 writes, 569 keys, 193 commit groups, 1.0 writes per commit group, ingest: 0.60 MB, 0.00 MB/s
                                                          Interval WAL: 193 writes, 73 syncs, 2.64 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 09:57:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:57:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:57:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:57:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:57:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:57:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:57:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:57:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:57:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:57:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:57:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:57:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:57:12 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:14 np0005541914.localdomain sshd[297060]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:57:14 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e89 e89: 6 total, 6 up, 6 in
Dec 02 09:57:14 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:57:14 np0005541914.localdomain ceph-mon[288526]: Activating manager daemon np0005541913.mfesdm
Dec 02 09:57:14 np0005541914.localdomain ceph-mon[288526]: Manager daemon np0005541910.kzipdo is unresponsive, replacing it with standby daemon np0005541913.mfesdm
Dec 02 09:57:14 np0005541914.localdomain ceph-mon[288526]: osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:57:14 np0005541914.localdomain ceph-mon[288526]: mgrmap e28: np0005541913.mfesdm(active, starting, since 0.0539087s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:14 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:14 np0005541914.localdomain sshd[297060]: Invalid user weblogic from 34.78.29.97 port 38478
Dec 02 09:57:14 np0005541914.localdomain sshd[297060]: Received disconnect from 34.78.29.97 port 38478:11: Bye Bye [preauth]
Dec 02 09:57:14 np0005541914.localdomain sshd[297060]: Disconnected from invalid user weblogic 34.78.29.97 port 38478 [preauth]
Dec 02 09:57:15 np0005541914.localdomain sshd[297063]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:57:15 np0005541914.localdomain sshd[297063]: Accepted publickey for ceph-admin from 192.168.122.107 port 44754 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 09:57:15 np0005541914.localdomain systemd-logind[760]: New session 66 of user ceph-admin.
Dec 02 09:57:15 np0005541914.localdomain systemd[1]: Created slice User Slice of UID 1002.
Dec 02 09:57:15 np0005541914.localdomain systemd[1]: Starting User Runtime Directory /run/user/1002...
Dec 02 09:57:15 np0005541914.localdomain systemd[1]: Finished User Runtime Directory /run/user/1002.
Dec 02 09:57:15 np0005541914.localdomain systemd[1]: Starting User Manager for UID 1002...
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Queued start job for default target Main User Target.
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Created slice User Application Slice.
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Started Daily Cleanup of User's Temporary Directories.
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Reached target Paths.
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Reached target Timers.
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Starting D-Bus User Message Bus Socket...
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Starting Create User's Volatile Files and Directories...
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Listening on D-Bus User Message Bus Socket.
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Reached target Sockets.
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Finished Create User's Volatile Files and Directories.
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Reached target Basic System.
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Reached target Main User Target.
Dec 02 09:57:15 np0005541914.localdomain systemd[297067]: Startup finished in 145ms.
Dec 02 09:57:15 np0005541914.localdomain systemd[1]: Started User Manager for UID 1002.
Dec 02 09:57:15 np0005541914.localdomain systemd[1]: Started Session 66 of User ceph-admin.
Dec 02 09:57:15 np0005541914.localdomain sshd[297063]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 09:57:15 np0005541914.localdomain sudo[297084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:57:15 np0005541914.localdomain sudo[297084]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:15 np0005541914.localdomain sudo[297084]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr metadata", "who": "np0005541911.adcgiw", "id": "np0005541911.adcgiw"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mds metadata"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd metadata"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: Manager daemon np0005541913.mfesdm is now available
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: removing stray HostCache host record np0005541910.localdomain.devices.0
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain.devices.0"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain.devices.0"}]': finished
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain.devices.0"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541910.localdomain.devices.0"}]': finished
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541913.mfesdm/mirror_snapshot_schedule"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541913.mfesdm/trash_purge_schedule"} : dispatch
Dec 02 09:57:15 np0005541914.localdomain ceph-mon[288526]: mgrmap e29: np0005541913.mfesdm(active, since 1.15788s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:15 np0005541914.localdomain sudo[297102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:57:15 np0005541914.localdomain sudo[297102]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:57:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7800.2 total, 600.0 interval
                                                          Cumulative writes: 5878 writes, 25K keys, 5878 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5878 writes, 789 syncs, 7.45 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 111 writes, 312 keys, 111 commit groups, 1.0 writes per commit group, ingest: 0.42 MB, 0.00 MB/s
                                                          Interval WAL: 111 writes, 43 syncs, 2.58 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 09:57:16 np0005541914.localdomain podman[297191]: 2025-12-02 09:57:16.650168834 +0000 UTC m=+0.103122327 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, version=7, name=rhceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph)
Dec 02 09:57:16 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:16 np0005541914.localdomain podman[297191]: 2025-12-02 09:57:16.736675093 +0000 UTC m=+0.189628566 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, vcs-type=git, com.redhat.component=rhceph-container, ceph=True, io.openshift.tags=rhceph ceph, version=7, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph)
Dec 02 09:57:16 np0005541914.localdomain ceph-mon[288526]: from='client.34369 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:57:16 np0005541914.localdomain ceph-mon[288526]: pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:16 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:57:16] ENGINE Bus STARTING
Dec 02 09:57:16 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:57:16] ENGINE Serving on http://172.18.0.107:8765
Dec 02 09:57:16 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:57:16] ENGINE Serving on https://172.18.0.107:7150
Dec 02 09:57:16 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:57:16] ENGINE Bus STARTED
Dec 02 09:57:16 np0005541914.localdomain ceph-mon[288526]: [02/Dec/2025:09:57:16] ENGINE Client ('172.18.0.107', 38106) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 02 09:57:16 np0005541914.localdomain ceph-mon[288526]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Dec 02 09:57:16 np0005541914.localdomain ceph-mon[288526]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Dec 02 09:57:16 np0005541914.localdomain ceph-mon[288526]: Cluster is now healthy
Dec 02 09:57:17 np0005541914.localdomain sudo[297102]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:17 np0005541914.localdomain sudo[297311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:57:17 np0005541914.localdomain sudo[297311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:17 np0005541914.localdomain sudo[297311]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:17 np0005541914.localdomain sudo[297329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:57:17 np0005541914.localdomain sudo[297329]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:17 np0005541914.localdomain ceph-mon[288526]: pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:17 np0005541914.localdomain ceph-mon[288526]: mgrmap e30: np0005541913.mfesdm(active, since 2s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:17 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:18 np0005541914.localdomain sudo[297329]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:18 np0005541914.localdomain sudo[297379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:57:18 np0005541914.localdomain sudo[297379]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:18 np0005541914.localdomain sudo[297379]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:18 np0005541914.localdomain sudo[297397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 02 09:57:18 np0005541914.localdomain sudo[297397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:18 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:18 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:18 np0005541914.localdomain sudo[297397]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:19 np0005541914.localdomain sudo[297433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:57:19 np0005541914.localdomain sudo[297433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:19 np0005541914.localdomain sudo[297433]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:19 np0005541914.localdomain sudo[297451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:57:19 np0005541914.localdomain sudo[297451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:19 np0005541914.localdomain sudo[297451]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:19 np0005541914.localdomain sudo[297469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:19 np0005541914.localdomain sudo[297469]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:19 np0005541914.localdomain sudo[297469]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:19 np0005541914.localdomain sudo[297487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:19 np0005541914.localdomain sudo[297487]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:19 np0005541914.localdomain sudo[297487]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:19 np0005541914.localdomain sudo[297505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:19 np0005541914.localdomain sudo[297505]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:19 np0005541914.localdomain sudo[297505]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:19 np0005541914.localdomain sudo[297539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:19 np0005541914.localdomain sudo[297539]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:19 np0005541914.localdomain sudo[297539]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:19 np0005541914.localdomain sudo[297557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:19 np0005541914.localdomain sudo[297557]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:19 np0005541914.localdomain sudo[297557]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:57:19 np0005541914.localdomain sudo[297575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:57:19 np0005541914.localdomain sudo[297575]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:19 np0005541914.localdomain sudo[297575]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:19 np0005541914.localdomain sudo[297593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:57:19 np0005541914.localdomain sudo[297593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:19 np0005541914.localdomain sudo[297593]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='client.34408 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: Saving service mon spec with placement label:mon
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd/host:np0005541911", "name": "osd_memory_target"} : dispatch
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.conf
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:57:19 np0005541914.localdomain ceph-mon[288526]: mgrmap e31: np0005541913.mfesdm(active, since 4s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:19 np0005541914.localdomain sudo[297611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:57:19 np0005541914.localdomain sudo[297611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:19 np0005541914.localdomain sudo[297611]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:19 np0005541914.localdomain sudo[297629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:19 np0005541914.localdomain sudo[297629]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:19 np0005541914.localdomain sudo[297629]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:19 np0005541914.localdomain sudo[297647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:19 np0005541914.localdomain sudo[297647]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:19 np0005541914.localdomain sudo[297647]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:20 np0005541914.localdomain sudo[297665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:20 np0005541914.localdomain sudo[297665]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:20 np0005541914.localdomain sudo[297665]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:20 np0005541914.localdomain sudo[297699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:20 np0005541914.localdomain sudo[297699]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:20 np0005541914.localdomain sudo[297699]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:20 np0005541914.localdomain sudo[297717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:20 np0005541914.localdomain sudo[297717]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:20 np0005541914.localdomain sudo[297717]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:20 np0005541914.localdomain sudo[297735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:20 np0005541914.localdomain sudo[297735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:20 np0005541914.localdomain sudo[297735]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:20 np0005541914.localdomain sudo[297753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:57:20 np0005541914.localdomain sudo[297753]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:20 np0005541914.localdomain sudo[297753]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:20 np0005541914.localdomain sudo[297771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:57:20 np0005541914.localdomain sudo[297771]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:20 np0005541914.localdomain sudo[297771]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:20 np0005541914.localdomain sudo[297789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:57:20 np0005541914.localdomain sudo[297789]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:20 np0005541914.localdomain sudo[297789]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:20 np0005541914.localdomain sudo[297807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:20 np0005541914.localdomain sudo[297807]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:20 np0005541914.localdomain sudo[297807]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:20 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:20 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:20 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:20 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:20 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:20 np0005541914.localdomain ceph-mon[288526]: from='client.34414 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541913", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:57:20 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:57:20 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:57:20 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:57:20 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:57:20 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:20 np0005541914.localdomain sudo[297825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:57:20 np0005541914.localdomain sudo[297825]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:20 np0005541914.localdomain sudo[297825]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:20 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:20 np0005541914.localdomain sudo[297859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:57:20 np0005541914.localdomain sudo[297859]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:20 np0005541914.localdomain sudo[297859]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:20 np0005541914.localdomain sudo[297877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:57:20 np0005541914.localdomain sudo[297877]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:20 np0005541914.localdomain sudo[297877]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:21 np0005541914.localdomain sudo[297895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 02 09:57:21 np0005541914.localdomain sudo[297895]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:21 np0005541914.localdomain sudo[297895]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:21 np0005541914.localdomain sudo[297913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:57:21 np0005541914.localdomain sudo[297913]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:21 np0005541914.localdomain sudo[297913]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:21 np0005541914.localdomain sudo[297931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:57:21 np0005541914.localdomain sudo[297931]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:21 np0005541914.localdomain sudo[297931]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:21 np0005541914.localdomain sudo[297949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:57:21 np0005541914.localdomain sudo[297949]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:21 np0005541914.localdomain sudo[297949]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:21 np0005541914.localdomain sudo[297967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:21 np0005541914.localdomain sudo[297967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:21 np0005541914.localdomain sudo[297967]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:21 np0005541914.localdomain sudo[297985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:57:21 np0005541914.localdomain sudo[297985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:21 np0005541914.localdomain sudo[297985]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:21 np0005541914.localdomain sudo[298019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:57:21 np0005541914.localdomain sudo[298019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:21 np0005541914.localdomain sudo[298019]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:21 np0005541914.localdomain sudo[298037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:57:21 np0005541914.localdomain sudo[298037]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:21 np0005541914.localdomain sudo[298037]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:21 np0005541914.localdomain sudo[298055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:57:21 np0005541914.localdomain sudo[298055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:21 np0005541914.localdomain sudo[298055]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/2604409311' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.200:0/2604409311' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 02 09:57:21 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:22 np0005541914.localdomain sudo[298073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:57:22 np0005541914.localdomain sudo[298073]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:22 np0005541914.localdomain sudo[298073]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:57:22 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541911 (monmap changed)...
Dec 02 09:57:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:57:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:57:22 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:22 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541911 on np0005541911.localdomain
Dec 02 09:57:22 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:22 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:23 np0005541914.localdomain ceph-mon[288526]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Dec 02 09:57:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:23 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541912 (monmap changed)...
Dec 02 09:57:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:57:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:57:23 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:23 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 02 09:57:24 np0005541914.localdomain sudo[298091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:57:24 np0005541914.localdomain sudo[298091]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:24 np0005541914.localdomain sudo[298091]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:24 np0005541914.localdomain sudo[298109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:24 np0005541914.localdomain sudo[298109]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:24 np0005541914.localdomain podman[298143]: 2025-12-02 09:57:24.549546087 +0000 UTC m=+0.072138343 container create a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_lichterman, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, version=7, RELEASE=main, architecture=x86_64, io.openshift.expose-services=, release=1763362218, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7)
Dec 02 09:57:24 np0005541914.localdomain systemd[1]: Started libpod-conmon-a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e.scope.
Dec 02 09:57:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:57:24 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:57:24 np0005541914.localdomain podman[298143]: 2025-12-02 09:57:24.526022809 +0000 UTC m=+0.048615065 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:57:24 np0005541914.localdomain podman[298143]: 2025-12-02 09:57:24.6302799 +0000 UTC m=+0.152872156 container init a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_lichterman, architecture=x86_64, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.41.4, release=1763362218)
Dec 02 09:57:24 np0005541914.localdomain podman[298143]: 2025-12-02 09:57:24.642477981 +0000 UTC m=+0.165070247 container start a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_lichterman, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, ceph=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, distribution-scope=public, GIT_CLEAN=True)
Dec 02 09:57:24 np0005541914.localdomain podman[298143]: 2025-12-02 09:57:24.642821792 +0000 UTC m=+0.165414098 container attach a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_lichterman, name=rhceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=7, ceph=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.)
Dec 02 09:57:24 np0005541914.localdomain affectionate_lichterman[298158]: 167 167
Dec 02 09:57:24 np0005541914.localdomain systemd[1]: libpod-a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e.scope: Deactivated successfully.
Dec 02 09:57:24 np0005541914.localdomain podman[298143]: 2025-12-02 09:57:24.647281928 +0000 UTC m=+0.169874194 container died a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_lichterman, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, RELEASE=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, release=1763362218, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, ceph=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7)
Dec 02 09:57:24 np0005541914.localdomain podman[298163]: 2025-12-02 09:57:24.748947559 +0000 UTC m=+0.090125880 container remove a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_lichterman, com.redhat.component=rhceph-container, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, distribution-scope=public, io.buildah.version=1.41.4, RELEASE=main, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 02 09:57:24 np0005541914.localdomain systemd[1]: libpod-conmon-a85d399db636764a25191d0b829b62fe42c06e650af9cbadea2eafbe8e55e39e.scope: Deactivated successfully.
Dec 02 09:57:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 02 09:57:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:24 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:57:24 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:24 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:24 np0005541914.localdomain sudo[298109]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:25 np0005541914.localdomain sudo[298187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:57:25 np0005541914.localdomain sudo[298187]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:25 np0005541914.localdomain sudo[298187]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:25 np0005541914.localdomain sudo[298205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:25 np0005541914.localdomain sudo[298205]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:25 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-50568e2eff231372a4e20c095bbecd6298cd2aa0c01816189354fb3673909ed1-merged.mount: Deactivated successfully.
Dec 02 09:57:25 np0005541914.localdomain podman[298239]: 2025-12-02 09:57:25.688661487 +0000 UTC m=+0.078081333 container create 96508b77995af9523ed429833af313013f302295f32869205c287cf452684020 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_wilson, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, ceph=True, version=7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.41.4, GIT_BRANCH=main)
Dec 02 09:57:25 np0005541914.localdomain systemd[1]: Started libpod-conmon-96508b77995af9523ed429833af313013f302295f32869205c287cf452684020.scope.
Dec 02 09:57:25 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:57:25 np0005541914.localdomain podman[298239]: 2025-12-02 09:57:25.657498497 +0000 UTC m=+0.046918373 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:57:25 np0005541914.localdomain podman[298239]: 2025-12-02 09:57:25.760492688 +0000 UTC m=+0.149912534 container init 96508b77995af9523ed429833af313013f302295f32869205c287cf452684020 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_wilson, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, release=1763362218, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:57:25 np0005541914.localdomain podman[298239]: 2025-12-02 09:57:25.771637998 +0000 UTC m=+0.161057844 container start 96508b77995af9523ed429833af313013f302295f32869205c287cf452684020 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_wilson, build-date=2025-11-26T19:44:28Z, architecture=x86_64, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.buildah.version=1.41.4, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True)
Dec 02 09:57:25 np0005541914.localdomain podman[298239]: 2025-12-02 09:57:25.771948437 +0000 UTC m=+0.161368313 container attach 96508b77995af9523ed429833af313013f302295f32869205c287cf452684020 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_wilson, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, version=7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, name=rhceph, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4)
Dec 02 09:57:25 np0005541914.localdomain loving_wilson[298254]: 167 167
Dec 02 09:57:25 np0005541914.localdomain systemd[1]: libpod-96508b77995af9523ed429833af313013f302295f32869205c287cf452684020.scope: Deactivated successfully.
Dec 02 09:57:25 np0005541914.localdomain podman[298239]: 2025-12-02 09:57:25.776799836 +0000 UTC m=+0.166219712 container died 96508b77995af9523ed429833af313013f302295f32869205c287cf452684020 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_wilson, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, ceph=True, name=rhceph, GIT_BRANCH=main, vcs-type=git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=1763362218, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:57:25 np0005541914.localdomain podman[298260]: 2025-12-02 09:57:25.893394733 +0000 UTC m=+0.103698914 container remove 96508b77995af9523ed429833af313013f302295f32869205c287cf452684020 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=loving_wilson, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, version=7, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, CEPH_POINT_RELEASE=, name=rhceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 02 09:57:25 np0005541914.localdomain systemd[1]: libpod-conmon-96508b77995af9523ed429833af313013f302295f32869205c287cf452684020.scope: Deactivated successfully.
Dec 02 09:57:25 np0005541914.localdomain ceph-mon[288526]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 13 op/s
Dec 02 09:57:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 02 09:57:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:25 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:57:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:25 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:26 np0005541914.localdomain sudo[298205]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:26 np0005541914.localdomain sudo[298284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:57:26 np0005541914.localdomain sudo[298284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:57:26 np0005541914.localdomain sudo[298284]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:57:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:57:26 np0005541914.localdomain sudo[298305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:26 np0005541914.localdomain sudo[298305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:57:26 np0005541914.localdomain podman[298303]: 2025-12-02 09:57:26.428639411 +0000 UTC m=+0.156153245 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 09:57:26 np0005541914.localdomain podman[298301]: 2025-12-02 09:57:26.442311079 +0000 UTC m=+0.174063851 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:57:26 np0005541914.localdomain podman[298301]: 2025-12-02 09:57:26.482994359 +0000 UTC m=+0.214747121 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:57:26 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:57:26 np0005541914.localdomain podman[298303]: 2025-12-02 09:57:26.515180812 +0000 UTC m=+0.242694866 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:57:26 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:57:26 np0005541914.localdomain podman[298304]: 2025-12-02 09:57:26.538656977 +0000 UTC m=+0.262400666 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:57:26 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-491395cbb31bfe2335f113b7668e4179b94ab9be4c90a5d241bc3ab8198f285a-merged.mount: Deactivated successfully.
Dec 02 09:57:26 np0005541914.localdomain podman[298354]: 2025-12-02 09:57:26.505570958 +0000 UTC m=+0.091349597 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:57:26 np0005541914.localdomain podman[298304]: 2025-12-02 09:57:26.579949047 +0000 UTC m=+0.303692706 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 02 09:57:26 np0005541914.localdomain podman[298354]: 2025-12-02 09:57:26.591878211 +0000 UTC m=+0.177656800 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 02 09:57:26 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:57:26 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:57:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:26 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:26 np0005541914.localdomain podman[298418]: 2025-12-02 09:57:26.821323941 +0000 UTC m=+0.084838329 container create 5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_easley, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, CEPH_POINT_RELEASE=, ceph=True, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1763362218)
Dec 02 09:57:26 np0005541914.localdomain systemd[1]: Started libpod-conmon-5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0.scope.
Dec 02 09:57:26 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:57:26 np0005541914.localdomain podman[298418]: 2025-12-02 09:57:26.784857628 +0000 UTC m=+0.048372006 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:57:26 np0005541914.localdomain podman[298418]: 2025-12-02 09:57:26.895848074 +0000 UTC m=+0.159362462 container init 5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_easley, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, distribution-scope=public, CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z)
Dec 02 09:57:26 np0005541914.localdomain podman[298418]: 2025-12-02 09:57:26.912558604 +0000 UTC m=+0.176072942 container start 5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_easley, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, version=7, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=)
Dec 02 09:57:26 np0005541914.localdomain podman[298418]: 2025-12-02 09:57:26.912782721 +0000 UTC m=+0.176297099 container attach 5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_easley, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, release=1763362218, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, distribution-scope=public, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main)
Dec 02 09:57:26 np0005541914.localdomain dazzling_easley[298433]: 167 167
Dec 02 09:57:26 np0005541914.localdomain systemd[1]: libpod-5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0.scope: Deactivated successfully.
Dec 02 09:57:26 np0005541914.localdomain podman[298418]: 2025-12-02 09:57:26.918098373 +0000 UTC m=+0.181612721 container died 5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_easley, GIT_CLEAN=True, vcs-type=git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, vendor=Red Hat, Inc., RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:57:27 np0005541914.localdomain podman[298438]: 2025-12-02 09:57:27.004351824 +0000 UTC m=+0.078834276 container remove 5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_easley, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:57:27 np0005541914.localdomain systemd[1]: libpod-conmon-5aba5b6f1e2a9e6a4db4fcca0dd734ebec0725750dfd035abcd5c5b47e96f0f0.scope: Deactivated successfully.
Dec 02 09:57:27 np0005541914.localdomain sudo[298305]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:27 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541914 (monmap changed)...
Dec 02 09:57:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:57:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:57:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:27 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
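The three dispatches logged just before this reconfigure ("auth get mon.", "config get mon public_network", "config generate-minimal-conf") are the data the mgr gathers before rewriting the mon's runtime config. A read-only sketch re-running the same queries with the ceph CLI, assuming a client.admin keyring on this host:

    # Read-only sketch (assumption: ceph CLI + admin keyring on this host)
    # reproducing the queries dispatched before the mon reconfigure above.
    import subprocess

    def ceph(*args):
        return subprocess.run(["ceph", *args], capture_output=True, text=True, check=True).stdout

    print(ceph("auth", "get", "mon."))                      # mon. key and caps
    print(ceph("config", "get", "mon", "public_network"))   # network the mons bind to
    print(ceph("config", "generate-minimal-conf"))          # minimal ceph.conf handed to hosts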
Dec 02 09:57:27 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.200:0/1130361570' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Dec 02 09:57:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:27 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:27 np0005541914.localdomain sudo[298453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:57:27 np0005541914.localdomain sudo[298453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:27 np0005541914.localdomain sudo[298453]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:27 np0005541914.localdomain systemd[1]: tmp-crun.dOENfJ.mount: Deactivated successfully.
Dec 02 09:57:27 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5f7a1d9a97390e8108c64a4f2bad5f2486a107173b555df3b4a4370d313b1e2a-merged.mount: Deactivated successfully.
Dec 02 09:57:28 np0005541914.localdomain ceph-mon[288526]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 02 09:57:28 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:28 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:28 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:57:28 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:28 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:57:28 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:28 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e11  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: from='client.34423 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541911", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:57:29 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x561987797600 mon_map magic: 0 from mon.1 v2:172.18.0.108:3300/0
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@1(peon) e12  my rank is now 0 (was 1)
Dec 02 09:57:29 np0005541914.localdomain ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Dec 02 09:57:29 np0005541914.localdomain ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.108:3300/0
Dec 02 09:57:29 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x561987797080 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: paxos.0).electionLogic(46) init, last seen epoch 46
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(electing) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1)
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : monmap epoch 12
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : last_changed 2025-12-02T09:57:29.744140+0000
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : created 2025-12-02T07:44:17.411659+0000
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:57:29 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : mgrmap e31: np0005541913.mfesdm(active, since 15s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : overall HEALTH_OK
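At this point the election has produced monmap epoch 12 with only np0005541914 and np0005541912 in quorum (this mon's rank changed from 1 to 0). A sketch, assuming admin CLI access, that confirms quorum membership the same way:

    # Sketch (assumption: ceph CLI with admin access): confirm which mons are
    # in quorum after the election logged above.
    import json
    import subprocess

    status = json.loads(
        subprocess.run(["ceph", "quorum_status", "--format", "json"],
                       capture_output=True, text=True, check=True).stdout
    )
    print("monmap epoch:", status["monmap"]["epoch"])
    print("quorum:", status["quorum_names"])   # expected here: np0005541914, np0005541912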
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: from='client.26982 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005541911"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: Remove daemons mon.np0005541911
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: Safe to remove mon.np0005541911: new quorum should be ['np0005541914', 'np0005541912'] (from ['np0005541914', 'np0005541912'])
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: Removing monitor np0005541911 from monmap...
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: Removing daemon mon.np0005541911 from np0005541911.localdomain -- ports []
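The orchestrator first verifies that the remaining quorum is safe ("Safe to remove mon.np0005541911...") and only then removes the daemon and drops it from the monmap. A sketch of the operator-facing equivalent of that sequence, assuming the cephadm orchestrator is enabled and an admin keyring is present:

    # Sketch of the operator-side equivalent of the removal logged above
    # (assumption: cephadm orchestrator enabled, admin keyring on this host).
    import subprocess

    def ceph(*args):
        return subprocess.run(["ceph", *args], capture_output=True, text=True, check=True).stdout

    print(ceph("quorum_status", "--format", "json"))                    # confirm quorum survives removal
    print(ceph("orch", "daemon", "rm", "mon.np0005541911", "--force"))  # same command the log shows dispatched
    print(ceph("orch", "ps", "--daemon_type", "mon", "--format", "json"))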
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1)
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: monmap epoch 12
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:57:29.744140+0000
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: mgrmap e31: np0005541913.mfesdm(active, since 15s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: overall HEALTH_OK
Dec 02 09:57:30 np0005541914.localdomain sudo[298471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:57:30 np0005541914.localdomain sudo[298471]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e12 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 09:57:30 np0005541914.localdomain sudo[298471]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:30 np0005541914.localdomain sudo[298489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:57:30 np0005541914.localdomain sudo[298489]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:30 np0005541914.localdomain sudo[298489]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:30 np0005541914.localdomain sudo[298507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:30 np0005541914.localdomain sudo[298507]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:30 np0005541914.localdomain sudo[298507]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:30 np0005541914.localdomain sudo[298525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:30 np0005541914.localdomain sudo[298525]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:30 np0005541914.localdomain sudo[298525]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:30 np0005541914.localdomain sudo[298543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:30 np0005541914.localdomain sudo[298543]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:30 np0005541914.localdomain sudo[298543]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:30 np0005541914.localdomain sudo[298577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:30 np0005541914.localdomain sudo[298577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:30 np0005541914.localdomain sudo[298577]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:30 np0005541914.localdomain sudo[298595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:30 np0005541914.localdomain sudo[298595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:30 np0005541914.localdomain sudo[298595]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:30 np0005541914.localdomain sudo[298613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:57:30 np0005541914.localdomain sudo[298613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:30 np0005541914.localdomain sudo[298613]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:30 np0005541914.localdomain sudo[298631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:57:30 np0005541914.localdomain sudo[298631]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:30 np0005541914.localdomain sudo[298631]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:30 np0005541914.localdomain sudo[298649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:57:30 np0005541914.localdomain sudo[298649]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:30 np0005541914.localdomain sudo[298649]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:30 np0005541914.localdomain sudo[298667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:30 np0005541914.localdomain sudo[298667]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:30 np0005541914.localdomain sudo[298667]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e12  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e12  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e12  adding peer [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to list of hints
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader).monmap v12 adding/updating np0005541913 at [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] to monitor cluster
Dec 02 09:57:30 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x561987796f20 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: paxos.0).electionLogic(48) init, last seen epoch 48
Dec 02 09:57:30 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:57:30 np0005541914.localdomain sudo[298685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:30 np0005541914.localdomain sudo[298685]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:30 np0005541914.localdomain sudo[298685]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:30 np0005541914.localdomain sudo[298703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:30 np0005541914.localdomain sudo[298703]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:30 np0005541914.localdomain sudo[298703]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:31 np0005541914.localdomain sudo[298737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:31 np0005541914.localdomain sudo[298737]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:31 np0005541914.localdomain sudo[298737]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:31 np0005541914.localdomain sudo[298755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:31 np0005541914.localdomain sudo[298755]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:31 np0005541914.localdomain sudo[298755]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:31 np0005541914.localdomain sudo[298773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:31 np0005541914.localdomain sudo[298773]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:31 np0005541914.localdomain sudo[298773]: pam_unix(sudo:session): session closed for user root
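The sudo commands above show the distribution pattern cephadm uses for config files: write to a staging path under /tmp/cephadm-<fsid>/, fix ownership to 0:0 and mode to 644, then move the file into /etc/ceph and /var/lib/ceph/<fsid>/config. A generic Python sketch of the same stage-then-move pattern; the paths and content below are illustrative, not cephadm internals:

    # Generic stage-then-move sketch of the pattern visible in the sudo lines
    # above (paths/content are illustrative, not cephadm's own code).
    import os
    import shutil

    def install_config(content: str, dest: str, staging_dir: str = "/tmp/config-staging"):
        os.makedirs(staging_dir, exist_ok=True)
        staged = os.path.join(staging_dir, os.path.basename(dest) + ".new")
        with open(staged, "w") as f:
            f.write(content)
        os.chmod(staged, 0o644)   # matches the chmod 644 steps in the log
        os.chown(staged, 0, 0)    # matches chown -R 0:0 (requires root)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.move(staged, dest) # final move into place, like the /bin/mv above

    # install_config(minimal_conf, "/etc/ceph/ceph.conf")  # example call (root only)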
Dec 02 09:57:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:57:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:57:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:57:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 09:57:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:57:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19179 "" "Go-http-client/1.1"
Dec 02 09:57:35 np0005541914.localdomain ceph-mds[285895]: mds.beacon.mds.np0005541914.sqgqkj missed beacon ack from the monitors
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1)
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : monmap epoch 13
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : last_changed 2025-12-02T09:57:30.836166+0000
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : created 2025-12-02T07:44:17.411659+0000
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : mgrmap e31: np0005541913.mfesdm(active, since 21s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum np0005541914,np0005541912 (MON_DOWN)
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum np0005541914,np0005541912
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum np0005541914,np0005541912
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [WRN] :     mon.np0005541913 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum)
Dec 02 09:57:35 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: Updating np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1)
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: monmap epoch 13
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:57:30.836166+0000
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: mgrmap e31: np0005541913.mfesdm(active, since 21s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: Health check failed: 1/3 mons down, quorum np0005541914,np0005541912 (MON_DOWN)
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005541914,np0005541912
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005541914,np0005541912
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]:     mon.np0005541913 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum)
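While mon.np0005541913 is being re-added it is briefly out of quorum, so the cluster raises MON_DOWN; the check clears a second later once the mon rejoins. A sketch, assuming admin CLI access, that polls health until the MON_DOWN check disappears:

    # Sketch (assumption: ceph CLI with admin access): poll until the MON_DOWN
    # check logged above clears, as it does at 09:57:37.
    import json
    import subprocess
    import time

    def health_checks():
        out = subprocess.run(["ceph", "health", "detail", "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out).get("checks", {})

    while "MON_DOWN" in health_checks():
        print("MON_DOWN still present, waiting...")
        time.sleep(5)
    print("MON_DOWN cleared")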
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 02 09:57:36 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:57:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: Deploying daemon mon.np0005541911 on np0005541911.localdomain
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='client.34428 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541911.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: Removed label mon from host np0005541911.localdomain
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:37 np0005541914.localdomain podman[298791]: 2025-12-02 09:57:37.094614624 +0000 UTC m=+0.096811574 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:57:37 np0005541914.localdomain podman[298791]: 2025-12-02 09:57:37.109840909 +0000 UTC m=+0.112037889 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:57:37 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:57:37 np0005541914.localdomain systemd[1]: tmp-crun.WRieZV.mount: Deactivated successfully.
Dec 02 09:57:37 np0005541914.localdomain podman[298792]: 2025-12-02 09:57:37.192143439 +0000 UTC m=+0.192988108 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.6, architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 02 09:57:37 np0005541914.localdomain podman[298792]: 2025-12-02 09:57:37.209090446 +0000 UTC m=+0.209935145 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 02 09:57:37 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
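The "Started /usr/bin/podman healthcheck run <id>" units are systemd firing each container's configured healthcheck; the health_status=healthy events above are the results for node_exporter and openstack_network_exporter. A sketch, assuming the podman CLI and the container name node_exporter taken from the log, that runs the same healthcheck on demand:

    # Sketch (assumption: podman CLI on this host): run the node_exporter
    # healthcheck on demand; exit code 0 corresponds to health_status=healthy.
    import subprocess

    result = subprocess.run(["podman", "healthcheck", "run", "node_exporter"],
                            capture_output=True, text=True)
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stdout or result.stderr}")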
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: paxos.0).electionLogic(50) init, last seen epoch 50
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2)
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : monmap epoch 13
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : last_changed 2025-12-02T09:57:30.836166+0000
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : created 2025-12-02T07:44:17.411659+0000
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : mgrmap e31: np0005541913.mfesdm(active, since 23s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005541914,np0005541912)
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 02 09:57:37 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13  adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13  adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2)
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: monmap epoch 13
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:57:30.836166+0000
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mgrmap e31: np0005541913.mfesdm(active, since 23s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005541914,np0005541912)
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: Cluster is now healthy
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: overall HEALTH_OK
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:38 np0005541914.localdomain sudo[298835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:57:38 np0005541914.localdomain sudo[298835]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:38 np0005541914.localdomain sudo[298835]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e13  adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Dec 02 09:57:38 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader).monmap v13 adding/updating np0005541911 at [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to monitor cluster
Dec 02 09:57:39 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x5619877971e0 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Dec 02 09:57:39 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:57:39 np0005541914.localdomain ceph-mon[288526]: paxos.0).electionLogic(52) init, last seen epoch 52
Dec 02 09:57:39 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(electing) e14 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:57:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:57:42 np0005541914.localdomain podman[298853]: 2025-12-02 09:57:42.070858183 +0000 UTC m=+0.071545544 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:57:42 np0005541914.localdomain podman[298853]: 2025-12-02 09:57:42.082670143 +0000 UTC m=+0.083357454 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 02 09:57:42 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:57:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:57:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:57:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:57:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:57:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:57:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:57:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:57:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:57:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:57:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:57:43 np0005541914.localdomain ceph-mds[285895]: mds.beacon.mds.np0005541914.sqgqkj missed beacon ack from the monitors
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: paxos.0).electionLogic(53) init, last seen epoch 53, mid-election, bumping
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(electing) e14 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913,np0005541911 in quorum (ranks 0,1,2,3)
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : monmap epoch 14
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : last_changed 2025-12-02T09:57:38.994501+0000
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : created 2025-12-02T07:44:17.411659+0000
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 3: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : mgrmap e31: np0005541913.mfesdm(active, since 29s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: from='client.44380 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541911.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: Removed label _admin from host np0005541911.localdomain
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541911 calling monitor election
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913,np0005541911 in quorum (ranks 0,1,2,3)
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: monmap epoch 14
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:57:38.994501+0000
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: 3: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mgrmap e31: np0005541913.mfesdm(active, since 29s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: overall HEALTH_OK
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:44 np0005541914.localdomain sudo[298871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:57:44 np0005541914.localdomain sudo[298871]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:44 np0005541914.localdomain sudo[298871]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:44 np0005541914.localdomain sudo[298889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:57:44 np0005541914.localdomain sudo[298889]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:44 np0005541914.localdomain sudo[298889]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:44 np0005541914.localdomain sudo[298907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:44 np0005541914.localdomain sudo[298907]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:44 np0005541914.localdomain sudo[298907]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:44 np0005541914.localdomain sudo[298925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:57:44 np0005541914.localdomain sudo[298925]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:44 np0005541914.localdomain sudo[298925]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:44 np0005541914.localdomain sudo[298943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:44 np0005541914.localdomain sudo[298943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:44 np0005541914.localdomain sudo[298943]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:44 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:57:44 np0005541914.localdomain sudo[298977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:44 np0005541914.localdomain sudo[298977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:44 np0005541914.localdomain sudo[298977]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:44 np0005541914.localdomain sudo[298995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:44 np0005541914.localdomain sudo[298995]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:44 np0005541914.localdomain sudo[298995]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:44 np0005541914.localdomain sudo[299013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:57:44 np0005541914.localdomain sudo[299013]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:44 np0005541914.localdomain sudo[299013]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:44 np0005541914.localdomain sudo[299031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:57:44 np0005541914.localdomain sudo[299031]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:44 np0005541914.localdomain sudo[299031]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:44 np0005541914.localdomain sudo[299049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:57:44 np0005541914.localdomain sudo[299049]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:44 np0005541914.localdomain sudo[299049]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:44 np0005541914.localdomain sudo[299067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:44 np0005541914.localdomain sudo[299067]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:44 np0005541914.localdomain sudo[299067]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:45 np0005541914.localdomain sudo[299085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:45 np0005541914.localdomain sudo[299085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:45 np0005541914.localdomain sudo[299085]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:45 np0005541914.localdomain sudo[299103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:45 np0005541914.localdomain sudo[299103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:45 np0005541914.localdomain sudo[299103]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: Removing np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:57:45 np0005541914.localdomain sudo[299137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:45 np0005541914.localdomain sudo[299137]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:45 np0005541914.localdomain sudo[299137]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:45 np0005541914.localdomain sudo[299155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:45 np0005541914.localdomain sudo[299155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:45 np0005541914.localdomain sudo[299155]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:45 np0005541914.localdomain sudo[299173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:57:45 np0005541914.localdomain sudo[299173]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:45 np0005541914.localdomain sudo[299173]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 09:57:45 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:46 np0005541914.localdomain ceph-mon[288526]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:46 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:46 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:46 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:46 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:47 np0005541914.localdomain ceph-mon[288526]: Removing daemon mgr.np0005541911.adcgiw from np0005541911.localdomain -- ports [8765]
Dec 02 09:57:47 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.np0005541911.adcgiw"} v 0)
Dec 02 09:57:47 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth rm", "entity": "mgr.np0005541911.adcgiw"} : dispatch
Dec 02 09:57:47 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005541911.adcgiw"}]': finished
Dec 02 09:57:47 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 02 09:57:47 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:47 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 02 09:57:47 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:47 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e14 handle_command mon_command({"prefix": "mon rm", "name": "np0005541911"} v 0)
Dec 02 09:57:47 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon rm", "name": "np0005541911"} : dispatch
Dec 02 09:57:47 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x561987797600 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Dec 02 09:57:47 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:57:47 np0005541914.localdomain ceph-mon[288526]: paxos.0).electionLogic(56) init, last seen epoch 56
Dec 02 09:57:47 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(electing) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:57:49 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(electing) e15 handle_auth_request failed to assign global_id
Dec 02 09:57:50 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(electing) e15 handle_auth_request failed to assign global_id
Dec 02 09:57:50 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(electing) e15 handle_auth_request failed to assign global_id
Dec 02 09:57:51 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(electing) e15 handle_auth_request failed to assign global_id
Dec 02 09:57:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:51.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:57:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:51.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 02 09:57:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:51.664 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 02 09:57:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:52.659 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1)
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: paxos.0).electionLogic(59) init, last seen epoch 59, mid-election, bumping
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(electing) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2)
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : monmap epoch 15
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : last_changed 2025-12-02T09:57:47.906570+0000
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : created 2025-12-02T07:44:17.411659+0000
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : min_mon_release 18 (reef)
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [DBG] : mgrmap e31: np0005541913.mfesdm(active, since 38s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:52 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: Removing key for mgr.np0005541911.adcgiw
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: Safe to remove mon.np0005541911: new quorum should be ['np0005541914', 'np0005541912', 'np0005541913'] (from ['np0005541914', 'np0005541912', 'np0005541913'])
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: Removing monitor np0005541911 from monmap...
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: Removing daemon mon.np0005541911 from np0005541911.localdomain -- ports []
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541913 calling monitor election
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1)
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541912 calling monitor election
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: overall HEALTH_OK
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 calling monitor election
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2)
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: monmap epoch 15
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: last_changed 2025-12-02T09:57:47.906570+0000
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: min_mon_release 18 (reef)
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: election_strategy: 1
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: mgrmap e31: np0005541913.mfesdm(active, since 38s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: overall HEALTH_OK
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 02 09:57:53 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:53 np0005541914.localdomain sudo[299191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:57:53 np0005541914.localdomain sudo[299191]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:53 np0005541914.localdomain sudo[299191]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:53.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:57:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:53.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:57:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:53.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: from='client.54101 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005541911.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: Added label _no_schedule to host np0005541911.localdomain
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541911.localdomain
Dec 02 09:57:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:54.541 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:57:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:54.542 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:57:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:54.542 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:57:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:54.568 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:57:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:54.568 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:57:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:54.569 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:57:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:54.569 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:57:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:54.584 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:57:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:54.584 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:57:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:54.585 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:57:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:54.585 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:57:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:54.585 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain.devices.0}] v 0)
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541911.localdomain}] v 0)
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:54 np0005541914.localdomain sudo[299229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:57:54 np0005541914.localdomain sudo[299229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:54 np0005541914.localdomain sudo[299229]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:54 np0005541914.localdomain sudo[299247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:57:54 np0005541914.localdomain sudo[299247]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:54 np0005541914.localdomain sudo[299247]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:54 np0005541914.localdomain sudo[299265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:54 np0005541914.localdomain sudo[299265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:54 np0005541914.localdomain sudo[299265]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 09:57:54 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3063913601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.014 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:57:55 np0005541914.localdomain sudo[299283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:55 np0005541914.localdomain sudo[299283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:55 np0005541914.localdomain sudo[299283]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.107:0/2425549797' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.108:0/3063913601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:57:55 np0005541914.localdomain sudo[299303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:55 np0005541914.localdomain sudo[299303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:55 np0005541914.localdomain sudo[299303]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.185 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.187 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11984MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.187 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.188 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:57:55 np0005541914.localdomain sudo[299337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:55 np0005541914.localdomain sudo[299337]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:55 np0005541914.localdomain sudo[299337]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:55 np0005541914.localdomain sudo[299355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:57:55 np0005541914.localdomain sudo[299355]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:55 np0005541914.localdomain sudo[299355]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.282 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.283 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.331 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 02 09:57:55 np0005541914.localdomain sudo[299373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:57:55 np0005541914.localdomain sudo[299373]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:55 np0005541914.localdomain sudo[299373]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.380 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.381 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.396 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.419 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 02 09:57:55 np0005541914.localdomain sudo[299391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:57:55 np0005541914.localdomain sudo[299391]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:55 np0005541914.localdomain sudo[299391]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.440 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:57:55 np0005541914.localdomain sudo[299410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:57:55 np0005541914.localdomain sudo[299410]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:55 np0005541914.localdomain sudo[299410]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:55 np0005541914.localdomain sudo[299428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:55 np0005541914.localdomain sudo[299428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:55 np0005541914.localdomain sudo[299428]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:55 np0005541914.localdomain sudo[299465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:57:55 np0005541914.localdomain sudo[299465]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:55 np0005541914.localdomain sudo[299465]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:55 np0005541914.localdomain sudo[299483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:55 np0005541914.localdomain sudo[299483]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:55 np0005541914.localdomain sudo[299483]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:55 np0005541914.localdomain sudo[299517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:55 np0005541914.localdomain sudo[299517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:55 np0005541914.localdomain sudo[299517]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3093024393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.899 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:57:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:55.903 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} v 0)
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} : dispatch
Dec 02 09:57:55 np0005541914.localdomain sudo[299535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:57:55 np0005541914.localdomain sudo[299535]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:55 np0005541914.localdomain sudo[299535]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:55 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"}]': finished
Dec 02 09:57:55 np0005541914.localdomain sudo[299555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:55 np0005541914.localdomain sudo[299555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:55 np0005541914.localdomain sudo[299555]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: from='client.54116 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005541911.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.107:0/1003759709' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.108:0/3093024393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} : dispatch
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} : dispatch
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"}]': finished
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:56.352 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:57:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:56.355 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:57:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:56.355 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:57:56 np0005541914.localdomain sudo[299573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:57:56 np0005541914.localdomain sudo[299573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:57:56 np0005541914.localdomain sudo[299573]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 02 09:57:56 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:57:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:57:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:57:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:57:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:57:57 np0005541914.localdomain podman[299591]: 2025-12-02 09:57:57.092565942 +0000 UTC m=+0.091938186 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: from='client.34455 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005541911.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: Removed host np0005541911.localdomain
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.106:0/464441794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:57:57 np0005541914.localdomain systemd[1]: tmp-crun.XNxMko.mount: Deactivated successfully.
Dec 02 09:57:57 np0005541914.localdomain podman[299594]: 2025-12-02 09:57:57.140832164 +0000 UTC m=+0.134812584 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Dec 02 09:57:57 np0005541914.localdomain podman[299592]: 2025-12-02 09:57:57.192725677 +0000 UTC m=+0.190328457 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:57:57 np0005541914.localdomain podman[299592]: 2025-12-02 09:57:57.203993611 +0000 UTC m=+0.201596481 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:57:57 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:57:57 np0005541914.localdomain podman[299594]: 2025-12-02 09:57:57.225746844 +0000 UTC m=+0.219727234 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:57:57 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:57:57 np0005541914.localdomain podman[299593]: 2025-12-02 09:57:57.250635493 +0000 UTC m=+0.245161929 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:57:57 np0005541914.localdomain podman[299591]: 2025-12-02 09:57:57.273763209 +0000 UTC m=+0.273135393 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:57 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:57:57 np0005541914.localdomain podman[299593]: 2025-12-02 09:57:57.284130985 +0000 UTC m=+0.278657401 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 02 09:57:57 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:57:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:57.314 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:57:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:57.368 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:57:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:57.369 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:57:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:57.370 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:57:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:57.370 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:57:57 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:58 np0005541914.localdomain sshd[299674]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:57:58 np0005541914.localdomain sshd[299674]: Accepted publickey for tripleo-admin from 192.168.122.11 port 58072 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:58 np0005541914.localdomain systemd[1]: Created slice User Slice of UID 1003.
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:58 np0005541914.localdomain systemd[1]: Starting User Runtime Directory /run/user/1003...
Dec 02 09:57:58 np0005541914.localdomain systemd-logind[760]: New session 68 of user tripleo-admin.
Dec 02 09:57:58 np0005541914.localdomain systemd[1]: Finished User Runtime Directory /run/user/1003.
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.2 (monmap changed)...
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.106:0/2656442173' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:57:58 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:57:58 np0005541914.localdomain systemd[1]: Starting User Manager for UID 1003...
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: pam_unix(systemd-user:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Queued start job for default target Main User Target.
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Created slice User Application Slice.
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Started Daily Cleanup of User's Temporary Directories.
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Reached target Paths.
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Reached target Timers.
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Starting D-Bus User Message Bus Socket...
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Starting Create User's Volatile Files and Directories...
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Finished Create User's Volatile Files and Directories.
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Listening on D-Bus User Message Bus Socket.
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Reached target Sockets.
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Reached target Basic System.
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Reached target Main User Target.
Dec 02 09:57:58 np0005541914.localdomain systemd[299678]: Startup finished in 119ms.
Dec 02 09:57:58 np0005541914.localdomain systemd[1]: Started User Manager for UID 1003.
Dec 02 09:57:58 np0005541914.localdomain systemd[1]: Started Session 68 of User tripleo-admin.
Dec 02 09:57:58 np0005541914.localdomain sshd[299674]: pam_unix(sshd:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Dec 02 09:57:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:57:58.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:57:58 np0005541914.localdomain sudo[299818]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-khgvppzmkoqvtjsaiyiymzfrxmeopksu ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764669478.5328784-61593-182485962142233/AnsiballZ_lineinfile.py
Dec 02 09:57:58 np0005541914.localdomain sudo[299818]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 02 09:57:59 np0005541914.localdomain python3[299820]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line=    - ip_netmask: 172.18.0.105/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 02 09:57:59 np0005541914.localdomain sudo[299818]: pam_unix(sudo:session): session closed for user root
Dec 02 09:57:59 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.5 (monmap changed)...
Dec 02 09:57:59 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:57:59 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:57:59 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:59 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:57:59 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:57:59 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 02 09:57:59 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:57:59 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:57:59 np0005541914.localdomain sudo[299964]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bozsynccedyzczisxlgtifwfjdgktojl ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764669479.2924082-61609-129300495460032/AnsiballZ_command.py
Dec 02 09:57:59 np0005541914.localdomain sudo[299964]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 02 09:57:59 np0005541914.localdomain python3[299966]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.105/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:57:59 np0005541914.localdomain sudo[299964]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:00 np0005541914.localdomain sudo[300109]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hhbazbzjbnxetfnmrbjnboxlcouyyzba ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1764669479.9987385-61620-265831835443957/AnsiballZ_command.py
Dec 02 09:58:00 np0005541914.localdomain sudo[300109]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:58:00 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:00 np0005541914.localdomain python3[300111]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.105 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 09:58:01 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:58:01 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:01 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 02 09:58:01 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 02 09:58:01 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:58:01 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mon.np0005541912 (monmap changed)...
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:02 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:02 np0005541914.localdomain sudo[300109]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:58:03.168 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:58:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:58:03.169 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:58:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:58:03.169 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:58:03 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:58:03 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:03 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:58:03 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:03 np0005541914.localdomain ceph-mon[288526]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 02 09:58:03 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 02 09:58:03 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:03 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:03 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:58:03 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:58:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:58:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:58:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 09:58:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:58:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19190 "" "Go-http-client/1.1"
Dec 02 09:58:03 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 02 09:58:03 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.0 (monmap changed)...
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.32:0/486568655' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: from='client.? 172.18.0.32:0/486568655' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.637904) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484637958, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 2147, "num_deletes": 264, "total_data_size": 5523359, "memory_usage": 5751592, "flush_reason": "Manual Compaction"}
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484654987, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 3554573, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19379, "largest_seqno": 21520, "table_properties": {"data_size": 3546188, "index_size": 4634, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 23810, "raw_average_key_size": 22, "raw_value_size": 3526765, "raw_average_value_size": 3314, "num_data_blocks": 194, "num_entries": 1064, "num_filter_entries": 1064, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669425, "oldest_key_time": 1764669425, "file_creation_time": 1764669484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 17157 microseconds, and 5555 cpu microseconds.
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.655056) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 3554573 bytes OK
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.655088) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.657079) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.657108) EVENT_LOG_v1 {"time_micros": 1764669484657101, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.657136) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 5512842, prev total WAL file size 5512842, number of live WAL files 2.
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.658486) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323839' seq:72057594037927935, type:22 .. '6B760031353530' seq:0, type:0; will stop at (end)
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(3471KB)], [30(15MB)]
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484658553, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 19657527, "oldest_snapshot_seqno": -1}
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 11403 keys, 18677973 bytes, temperature: kUnknown
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484744536, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 18677973, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18609801, "index_size": 38567, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28549, "raw_key_size": 305959, "raw_average_key_size": 26, "raw_value_size": 18411849, "raw_average_value_size": 1614, "num_data_blocks": 1472, "num_entries": 11403, "num_filter_entries": 11403, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669199, "oldest_key_time": 0, "file_creation_time": 1764669484, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "fef79939-f0d3-4c6e-a3c1-7bf191246dd2", "db_session_id": "ES6HEAUO0NO66H72LGQU", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.744957) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 18677973 bytes
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.747546) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.4 rd, 217.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 15.4 +0.0 blob) out(17.8 +0.0 blob), read-write-amplify(10.8) write-amplify(5.3) OK, records in: 11913, records dropped: 510 output_compression: NoCompression
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.747589) EVENT_LOG_v1 {"time_micros": 1764669484747569, "job": 16, "event": "compaction_finished", "compaction_time_micros": 86079, "compaction_time_cpu_micros": 28058, "output_level": 6, "num_output_files": 1, "total_output_size": 18677973, "num_input_records": 11913, "num_output_records": 11403, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484748240, "job": 16, "event": "table_file_deletion", "file_number": 32}
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669484750425, "job": 16, "event": "table_file_deletion", "file_number": 30}
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.658358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.750550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.750559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.750562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.750565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:58:04 np0005541914.localdomain ceph-mon[288526]: rocksdb: (Original Log Time 2025/12/02-09:58:04.750568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: from='client.34458 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: Saving service mon spec with placement label:mon
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: Reconfiguring osd.3 (monmap changed)...
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:05 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: from='client.54149 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541914", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e15 handle_command mon_command({"prefix": "mon rm", "name": "np0005541914"} v 0)
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: log_channel(audit) log [INF] : from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon rm", "name": "np0005541914"} : dispatch
Dec 02 09:58:06 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x561987797080 mon_map magic: 0 from mon.0 v2:172.18.0.108:3300/0
Dec 02 09:58:06 np0005541914.localdomain ceph-mon[288526]: mon.np0005541914@0(leader) e16  removed from monmap, suicide.
Dec 02 09:58:06 np0005541914.localdomain ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Dec 02 09:58:06 np0005541914.localdomain ceph-mgr[287188]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Dec 02 09:58:06 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x56199114a000 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0
Dec 02 09:58:06 np0005541914.localdomain sudo[300130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:06 np0005541914.localdomain sudo[300130]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:06 np0005541914.localdomain sudo[300130]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:07 np0005541914.localdomain podman[300146]: 2025-12-02 09:58:07.023865463 +0000 UTC m=+0.057637111 container died 699b233252c58098b0dcca9b2b21425d550e7754773bf4b3759bf26abfe89544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, name=rhceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main)
Dec 02 09:58:07 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ce118f9e1514dd9e8c61f039c0b5ce0d2beef8304000bf74b350ea0ec7a4ea4b-merged.mount: Deactivated successfully.
Dec 02 09:58:07 np0005541914.localdomain podman[300146]: 2025-12-02 09:58:07.057236561 +0000 UTC m=+0.091008129 container remove 699b233252c58098b0dcca9b2b21425d550e7754773bf4b3759bf26abfe89544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, distribution-scope=public, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:58:07 np0005541914.localdomain sudo[300160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 rm-daemon --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 --name mon.np0005541914 --force
Dec 02 09:58:07 np0005541914.localdomain sudo[300160]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:58:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:58:07 np0005541914.localdomain sudo[300281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:07 np0005541914.localdomain sudo[300281]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:07 np0005541914.localdomain sudo[300281]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:07 np0005541914.localdomain podman[300236]: 2025-12-02 09:58:07.670546411 +0000 UTC m=+0.138564179 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:58:07 np0005541914.localdomain sudo[300307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:07 np0005541914.localdomain sudo[300307]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:07 np0005541914.localdomain podman[300236]: 2025-12-02 09:58:07.721783793 +0000 UTC m=+0.189801481 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:58:07 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:58:07 np0005541914.localdomain podman[300237]: 2025-12-02 09:58:07.628669043 +0000 UTC m=+0.091063969 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-type=git, name=ubi9-minimal, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Dec 02 09:58:07 np0005541914.localdomain podman[300237]: 2025-12-02 09:58:07.810574342 +0000 UTC m=+0.272969318 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, vcs-type=git)
Dec 02 09:58:07 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:58:07 np0005541914.localdomain systemd[1]: ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074@mon.np0005541914.service: Deactivated successfully.
Dec 02 09:58:07 np0005541914.localdomain systemd[1]: Stopped Ceph mon.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 02 09:58:07 np0005541914.localdomain systemd[1]: ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074@mon.np0005541914.service: Consumed 11.455s CPU time.
Dec 02 09:58:07 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:58:08 np0005541914.localdomain systemd-rc-local-generator[300385]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:58:08 np0005541914.localdomain systemd-sysv-generator[300390]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:08 np0005541914.localdomain sudo[300160]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:08 np0005541914.localdomain podman[300409]: 
Dec 02 09:58:08 np0005541914.localdomain podman[300409]: 2025-12-02 09:58:08.397752695 +0000 UTC m=+0.064439407 container create 2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_dirac, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, release=1763362218, io.buildah.version=1.41.4, GIT_BRANCH=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, ceph=True)
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: Started libpod-conmon-2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf.scope.
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:58:08 np0005541914.localdomain podman[300409]: 2025-12-02 09:58:08.453544187 +0000 UTC m=+0.120230929 container init 2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_dirac, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, release=1763362218, io.buildah.version=1.41.4, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, name=rhceph, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: tmp-crun.COJN1E.mount: Deactivated successfully.
Dec 02 09:58:08 np0005541914.localdomain podman[300409]: 2025-12-02 09:58:08.470068211 +0000 UTC m=+0.136754943 container start 2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_dirac, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., name=rhceph, architecture=x86_64, GIT_BRANCH=main, release=1763362218, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:58:08 np0005541914.localdomain podman[300409]: 2025-12-02 09:58:08.471525996 +0000 UTC m=+0.138212788 container attach 2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_dirac, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, ceph=True, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, release=1763362218, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:58:08 np0005541914.localdomain happy_dirac[300424]: 167 167
Dec 02 09:58:08 np0005541914.localdomain podman[300409]: 2025-12-02 09:58:08.474305141 +0000 UTC m=+0.140991853 container died 2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_dirac, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, distribution-scope=public, vcs-type=git, GIT_BRANCH=main, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: libpod-2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf.scope: Deactivated successfully.
Dec 02 09:58:08 np0005541914.localdomain podman[300409]: 2025-12-02 09:58:08.376341172 +0000 UTC m=+0.043027974 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:08 np0005541914.localdomain podman[300429]: 2025-12-02 09:58:08.549946058 +0000 UTC m=+0.067197111 container remove 2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_dirac, architecture=x86_64, GIT_BRANCH=main, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, name=rhceph, version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, release=1763362218, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vendor=Red Hat, Inc.)
Dec 02 09:58:08 np0005541914.localdomain systemd[1]: libpod-conmon-2a039c061b114a746faddcd0060907a7d20c304bcd8bfba63f6ee05233d638cf.scope: Deactivated successfully.
Dec 02 09:58:08 np0005541914.localdomain sudo[300307]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:08 np0005541914.localdomain sudo[300443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:08 np0005541914.localdomain sudo[300443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:08 np0005541914.localdomain sudo[300443]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:08 np0005541914.localdomain sudo[300461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:08 np0005541914.localdomain sudo[300461]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:09 np0005541914.localdomain podman[300496]: 
Dec 02 09:58:09 np0005541914.localdomain podman[300496]: 2025-12-02 09:58:09.231579202 +0000 UTC m=+0.076762182 container create 1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_pare, RELEASE=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, version=7, vcs-type=git, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_CLEAN=True)
Dec 02 09:58:09 np0005541914.localdomain systemd[1]: Started libpod-conmon-1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a.scope.
Dec 02 09:58:09 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:58:09 np0005541914.localdomain podman[300496]: 2025-12-02 09:58:09.200730341 +0000 UTC m=+0.045913371 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:09 np0005541914.localdomain podman[300496]: 2025-12-02 09:58:09.301259658 +0000 UTC m=+0.146442648 container init 1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_pare, io.buildah.version=1.41.4, name=rhceph, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.expose-services=, ceph=True, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, version=7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 09:58:09 np0005541914.localdomain podman[300496]: 2025-12-02 09:58:09.30857161 +0000 UTC m=+0.153754580 container start 1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_pare, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, description=Red Hat Ceph Storage 7)
Dec 02 09:58:09 np0005541914.localdomain podman[300496]: 2025-12-02 09:58:09.308828699 +0000 UTC m=+0.154011679 container attach 1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_pare, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, release=1763362218, GIT_CLEAN=True, vcs-type=git, architecture=x86_64, distribution-scope=public, GIT_BRANCH=main, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=)
Dec 02 09:58:09 np0005541914.localdomain adoring_pare[300512]: 167 167
Dec 02 09:58:09 np0005541914.localdomain systemd[1]: libpod-1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a.scope: Deactivated successfully.
Dec 02 09:58:09 np0005541914.localdomain podman[300496]: 2025-12-02 09:58:09.312577573 +0000 UTC m=+0.157760543 container died 1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_pare, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.expose-services=, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, release=1763362218, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7)
Dec 02 09:58:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c6c345eef9df37ff8200d98843a81e91c0205ec5ea28af1b29d99d20ce571d81-merged.mount: Deactivated successfully.
Dec 02 09:58:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-1720ab8a23b42af740e3fb763e709a989ecaa8641ba4c6dabd496829b3d241d4-merged.mount: Deactivated successfully.
Dec 02 09:58:09 np0005541914.localdomain podman[300517]: 2025-12-02 09:58:09.420428623 +0000 UTC m=+0.092029608 container remove 1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_pare, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, RELEASE=main, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-type=git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, description=Red Hat Ceph Storage 7, release=1763362218)
Dec 02 09:58:09 np0005541914.localdomain systemd[1]: libpod-conmon-1a3d292658be6dac6cc4e21f2d8e2d59eaf1aa49fb70075419e08caff443b20a.scope: Deactivated successfully.
Dec 02 09:58:09 np0005541914.localdomain sudo[300461]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:09 np0005541914.localdomain sudo[300540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:09 np0005541914.localdomain sudo[300540]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:09 np0005541914.localdomain sudo[300540]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:09 np0005541914.localdomain sudo[300558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:09 np0005541914.localdomain sudo[300558]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:10 np0005541914.localdomain podman[300592]: 
Dec 02 09:58:10 np0005541914.localdomain podman[300592]: 2025-12-02 09:58:10.246285587 +0000 UTC m=+0.066462728 container create 8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_jones, version=7, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, ceph=True, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:58:10 np0005541914.localdomain systemd[1]: Started libpod-conmon-8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8.scope.
Dec 02 09:58:10 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:58:10 np0005541914.localdomain podman[300592]: 2025-12-02 09:58:10.305681639 +0000 UTC m=+0.125858760 container init 8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_jones, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, release=1763362218, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.buildah.version=1.41.4)
Dec 02 09:58:10 np0005541914.localdomain podman[300592]: 2025-12-02 09:58:10.213810917 +0000 UTC m=+0.033988098 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:10 np0005541914.localdomain nervous_jones[300607]: 167 167
Dec 02 09:58:10 np0005541914.localdomain podman[300592]: 2025-12-02 09:58:10.322561224 +0000 UTC m=+0.142738365 container start 8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_jones, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, release=1763362218, vendor=Red Hat, Inc., GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, ceph=True, com.redhat.component=rhceph-container, name=rhceph, build-date=2025-11-26T19:44:28Z, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:58:10 np0005541914.localdomain podman[300592]: 2025-12-02 09:58:10.322897015 +0000 UTC m=+0.143074156 container attach 8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_jones, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, release=1763362218, io.buildah.version=1.41.4, GIT_BRANCH=main, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, CEPH_POINT_RELEASE=)
Dec 02 09:58:10 np0005541914.localdomain systemd[1]: libpod-8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8.scope: Deactivated successfully.
Dec 02 09:58:10 np0005541914.localdomain podman[300592]: 2025-12-02 09:58:10.325003289 +0000 UTC m=+0.145180420 container died 8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_jones, CEPH_POINT_RELEASE=, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, ceph=True, RELEASE=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, version=7, name=rhceph, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:58:10 np0005541914.localdomain podman[300612]: 2025-12-02 09:58:10.400673337 +0000 UTC m=+0.065708135 container remove 8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=nervous_jones, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, distribution-scope=public, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2025-11-26T19:44:28Z, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:58:10 np0005541914.localdomain systemd[1]: tmp-crun.s0RQ8C.mount: Deactivated successfully.
Dec 02 09:58:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-cfd1042074871156c75482c47c7c0bf8e1b46009482b3a3d376235193c802f55-merged.mount: Deactivated successfully.
Dec 02 09:58:10 np0005541914.localdomain systemd[1]: libpod-conmon-8df6b497744b262c96cf6b1eadcfa53f5037639bc72277df6b2443ad80ad11e8.scope: Deactivated successfully.
Dec 02 09:58:10 np0005541914.localdomain sudo[300558]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:10 np0005541914.localdomain sudo[300635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:10 np0005541914.localdomain sudo[300635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:10 np0005541914.localdomain sudo[300635]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:10 np0005541914.localdomain sudo[300653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:10 np0005541914.localdomain sudo[300653]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:11 np0005541914.localdomain podman[300689]: 2025-12-02 09:58:11.102760596 +0000 UTC m=+0.074880286 container create 7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_benz, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, com.redhat.component=rhceph-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, ceph=True, name=rhceph, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc.)
Dec 02 09:58:11 np0005541914.localdomain systemd[1]: Started libpod-conmon-7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6.scope.
Dec 02 09:58:11 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:58:11 np0005541914.localdomain podman[300689]: 2025-12-02 09:58:11.153351769 +0000 UTC m=+0.125471439 container init 7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_benz, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, vcs-type=git, GIT_CLEAN=True, version=7, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7)
Dec 02 09:58:11 np0005541914.localdomain elegant_benz[300704]: 167 167
Dec 02 09:58:11 np0005541914.localdomain systemd[1]: libpod-7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6.scope: Deactivated successfully.
Dec 02 09:58:11 np0005541914.localdomain podman[300689]: 2025-12-02 09:58:11.063599151 +0000 UTC m=+0.035718851 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:11 np0005541914.localdomain podman[300689]: 2025-12-02 09:58:11.16616041 +0000 UTC m=+0.138280100 container start 7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_benz, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, name=rhceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:58:11 np0005541914.localdomain podman[300689]: 2025-12-02 09:58:11.166363206 +0000 UTC m=+0.138482896 container attach 7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_benz, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.buildah.version=1.41.4, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, vcs-type=git, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:58:11 np0005541914.localdomain podman[300689]: 2025-12-02 09:58:11.168865263 +0000 UTC m=+0.140985023 container died 7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_benz, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, release=1763362218, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Dec 02 09:58:11 np0005541914.localdomain podman[300709]: 2025-12-02 09:58:11.233640469 +0000 UTC m=+0.061132306 container remove 7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_benz, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, release=1763362218, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=)
Dec 02 09:58:11 np0005541914.localdomain systemd[1]: libpod-conmon-7cee4bba0649038d56b397d3be7f06461f343e24e38fa7d33481d41d7e2558c6.scope: Deactivated successfully.
Dec 02 09:58:11 np0005541914.localdomain sudo[300653]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:11 np0005541914.localdomain sudo[300725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:11 np0005541914.localdomain sudo[300725]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:11 np0005541914.localdomain sudo[300725]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-0820f6c8f74c2f69aa9f87491ba9d604307d08116fb6005c0bb84b005f425c25-merged.mount: Deactivated successfully.
Dec 02 09:58:11 np0005541914.localdomain sudo[300743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:11 np0005541914.localdomain sudo[300743]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:11 np0005541914.localdomain podman[300777]: 2025-12-02 09:58:11.938286164 +0000 UTC m=+0.078263298 container create 66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_zhukovsky, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, release=1763362218, architecture=x86_64, GIT_BRANCH=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public)
Dec 02 09:58:11 np0005541914.localdomain systemd[1]: Started libpod-conmon-66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308.scope.
Dec 02 09:58:12 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:58:12 np0005541914.localdomain podman[300777]: 2025-12-02 09:58:11.908585419 +0000 UTC m=+0.048562493 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:12 np0005541914.localdomain podman[300777]: 2025-12-02 09:58:12.014488899 +0000 UTC m=+0.154465983 container init 66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_zhukovsky, release=1763362218, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public)
Dec 02 09:58:12 np0005541914.localdomain romantic_zhukovsky[300791]: 167 167
Dec 02 09:58:12 np0005541914.localdomain podman[300777]: 2025-12-02 09:58:12.022192574 +0000 UTC m=+0.162169648 container start 66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_zhukovsky, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1763362218, architecture=x86_64, GIT_CLEAN=True, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container)
Dec 02 09:58:12 np0005541914.localdomain podman[300777]: 2025-12-02 09:58:12.022590676 +0000 UTC m=+0.162567760 container attach 66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_zhukovsky, ceph=True, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, architecture=x86_64, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, release=1763362218, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Dec 02 09:58:12 np0005541914.localdomain systemd[1]: libpod-66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308.scope: Deactivated successfully.
Dec 02 09:58:12 np0005541914.localdomain podman[300777]: 2025-12-02 09:58:12.024627289 +0000 UTC m=+0.164604383 container died 66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_zhukovsky, CEPH_POINT_RELEASE=, release=1763362218, vcs-type=git, version=7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:58:12 np0005541914.localdomain podman[300796]: 2025-12-02 09:58:12.092382795 +0000 UTC m=+0.062774235 container remove 66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_zhukovsky, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., RELEASE=main, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, release=1763362218, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, version=7)
Dec 02 09:58:12 np0005541914.localdomain systemd[1]: libpod-conmon-66dbac33e87a2e1b1877119200f1fd3964f72df6cc6312d168b77a785dbbd308.scope: Deactivated successfully.
Dec 02 09:58:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:58:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:58:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:58:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:58:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:58:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:58:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:58:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:58:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:58:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:58:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:58:12 np0005541914.localdomain sudo[300743]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:12 np0005541914.localdomain podman[300813]: 2025-12-02 09:58:12.171504849 +0000 UTC m=+0.053178343 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 09:58:12 np0005541914.localdomain podman[300813]: 2025-12-02 09:58:12.208726535 +0000 UTC m=+0.090399989 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 09:58:12 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:58:12 np0005541914.localdomain sudo[300835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:12 np0005541914.localdomain sudo[300835]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:12 np0005541914.localdomain sudo[300835]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:12 np0005541914.localdomain sudo[300853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:58:12 np0005541914.localdomain sudo[300853]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:12 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-03b615f3eb610edb6066fc837da58cf389fc7c49d10aaf553d448c72d847ebc5-merged.mount: Deactivated successfully.
Dec 02 09:58:13 np0005541914.localdomain podman[300943]: 2025-12-02 09:58:13.133636241 +0000 UTC m=+0.093977039 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:58:13 np0005541914.localdomain podman[300943]: 2025-12-02 09:58:13.240899523 +0000 UTC m=+0.201240321 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, version=7, release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:58:13 np0005541914.localdomain sudo[300853]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:13 np0005541914.localdomain sudo[301047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:58:13 np0005541914.localdomain sudo[301047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:13 np0005541914.localdomain sudo[301047]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:13 np0005541914.localdomain sudo[301065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:58:13 np0005541914.localdomain sudo[301065]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:13 np0005541914.localdomain sudo[301065]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:13 np0005541914.localdomain sudo[301083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:58:13 np0005541914.localdomain sudo[301083]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:13 np0005541914.localdomain sudo[301083]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:13 np0005541914.localdomain sudo[301101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:13 np0005541914.localdomain sudo[301101]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:13 np0005541914.localdomain sudo[301101]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:13 np0005541914.localdomain sudo[301119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:58:13 np0005541914.localdomain sudo[301119]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:13 np0005541914.localdomain sudo[301119]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:14 np0005541914.localdomain sudo[301153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:58:14 np0005541914.localdomain sudo[301153]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:14 np0005541914.localdomain sudo[301153]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:14 np0005541914.localdomain sudo[301171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:58:14 np0005541914.localdomain sudo[301171]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:14 np0005541914.localdomain sudo[301171]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:14 np0005541914.localdomain sudo[301189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:58:14 np0005541914.localdomain sudo[301189]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:14 np0005541914.localdomain sudo[301189]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:14 np0005541914.localdomain sudo[301207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:58:14 np0005541914.localdomain sudo[301207]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:14 np0005541914.localdomain sudo[301207]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:14 np0005541914.localdomain sudo[301225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:58:14 np0005541914.localdomain sudo[301225]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:14 np0005541914.localdomain sudo[301225]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:14 np0005541914.localdomain sudo[301243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:58:14 np0005541914.localdomain sudo[301243]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:14 np0005541914.localdomain sudo[301243]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:14 np0005541914.localdomain sudo[301261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:14 np0005541914.localdomain sudo[301261]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:14 np0005541914.localdomain sudo[301261]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:14 np0005541914.localdomain sudo[301279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:58:14 np0005541914.localdomain sudo[301279]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:14 np0005541914.localdomain sudo[301279]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:14 np0005541914.localdomain sudo[301313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:58:14 np0005541914.localdomain sudo[301313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:14 np0005541914.localdomain sudo[301313]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:14 np0005541914.localdomain sudo[301331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:58:14 np0005541914.localdomain sudo[301331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:14 np0005541914.localdomain sudo[301331]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:14 np0005541914.localdomain sudo[301349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:14 np0005541914.localdomain sudo[301349]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:14 np0005541914.localdomain sudo[301349]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.439 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 09:58:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 09:58:15 np0005541914.localdomain sudo[301367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:58:15 np0005541914.localdomain sudo[301367]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:15 np0005541914.localdomain sudo[301367]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:19 np0005541914.localdomain sudo[301385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:19 np0005541914.localdomain sudo[301385]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:19 np0005541914.localdomain sudo[301385]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:19 np0005541914.localdomain sudo[301403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:19 np0005541914.localdomain sudo[301403]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:20 np0005541914.localdomain podman[301465]: 2025-12-02 09:58:20.526327767 +0000 UTC m=+0.078914598 container create 616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_northcutt, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=7, com.redhat.component=rhceph-container, vcs-type=git, GIT_BRANCH=main, io.buildah.version=1.41.4, name=rhceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, ceph=True, build-date=2025-11-26T19:44:28Z, architecture=x86_64)
Dec 02 09:58:20 np0005541914.localdomain systemd[1]: Started libpod-conmon-616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313.scope.
Dec 02 09:58:20 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:58:20 np0005541914.localdomain podman[301465]: 2025-12-02 09:58:20.49364346 +0000 UTC m=+0.046230331 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:20 np0005541914.localdomain podman[301465]: 2025-12-02 09:58:20.597313813 +0000 UTC m=+0.149900624 container init 616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_northcutt, io.openshift.expose-services=, io.buildah.version=1.41.4, GIT_BRANCH=main, vendor=Red Hat, Inc., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, RELEASE=main, architecture=x86_64, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container)
Dec 02 09:58:20 np0005541914.localdomain cranky_northcutt[301480]: 167 167
Dec 02 09:58:20 np0005541914.localdomain systemd[1]: libpod-616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313.scope: Deactivated successfully.
Dec 02 09:58:20 np0005541914.localdomain podman[301465]: 2025-12-02 09:58:20.608615197 +0000 UTC m=+0.161202038 container start 616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_northcutt, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_CLEAN=True, io.buildah.version=1.41.4, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public)
Dec 02 09:58:20 np0005541914.localdomain podman[301465]: 2025-12-02 09:58:20.609713741 +0000 UTC m=+0.162300692 container attach 616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_northcutt, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_BRANCH=main, ceph=True, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:58:20 np0005541914.localdomain podman[301465]: 2025-12-02 09:58:20.612767895 +0000 UTC m=+0.165354756 container died 616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_northcutt, ceph=True, io.buildah.version=1.41.4, GIT_BRANCH=main, architecture=x86_64, vcs-type=git, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:58:20 np0005541914.localdomain podman[301486]: 2025-12-02 09:58:20.714407595 +0000 UTC m=+0.097240328 container remove 616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cranky_northcutt, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.41.4, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:58:20 np0005541914.localdomain systemd[1]: libpod-conmon-616965582d32b2e39dd7074d44dce5f390b8e07cc9613432804bc9e479f80313.scope: Deactivated successfully.
Dec 02 09:58:20 np0005541914.localdomain podman[301502]: 
Dec 02 09:58:20 np0005541914.localdomain podman[301502]: 2025-12-02 09:58:20.831111165 +0000 UTC m=+0.077827925 container create 4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_davinci, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, release=1763362218, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, RELEASE=main, GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, ceph=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:58:20 np0005541914.localdomain systemd[1]: Started libpod-conmon-4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2.scope.
Dec 02 09:58:20 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:58:20 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b077ccada571db10a241910bd1fb6d69dd465d03c0fc75967d5a86a88f656ac0/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 02 09:58:20 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b077ccada571db10a241910bd1fb6d69dd465d03c0fc75967d5a86a88f656ac0/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Dec 02 09:58:20 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b077ccada571db10a241910bd1fb6d69dd465d03c0fc75967d5a86a88f656ac0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 09:58:20 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b077ccada571db10a241910bd1fb6d69dd465d03c0fc75967d5a86a88f656ac0/merged/var/lib/ceph/mon/ceph-np0005541914 supports timestamps until 2038 (0x7fffffff)
Dec 02 09:58:20 np0005541914.localdomain podman[301502]: 2025-12-02 09:58:20.893089556 +0000 UTC m=+0.139806306 container init 4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_davinci, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_CLEAN=True, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, ceph=True, io.openshift.expose-services=, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:58:20 np0005541914.localdomain podman[301502]: 2025-12-02 09:58:20.799721347 +0000 UTC m=+0.046438177 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:20 np0005541914.localdomain podman[301502]: 2025-12-02 09:58:20.901559404 +0000 UTC m=+0.148276204 container start 4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_davinci, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.openshift.expose-services=, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, architecture=x86_64, version=7, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:58:20 np0005541914.localdomain podman[301502]: 2025-12-02 09:58:20.901822322 +0000 UTC m=+0.148539072 container attach 4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_davinci, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, RELEASE=main, vendor=Red Hat, Inc., GIT_BRANCH=main, ceph=True, version=7, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, CEPH_POINT_RELEASE=, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:58:20 np0005541914.localdomain systemd[1]: libpod-4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2.scope: Deactivated successfully.
Dec 02 09:58:20 np0005541914.localdomain podman[301502]: 2025-12-02 09:58:20.981893124 +0000 UTC m=+0.228609884 container died 4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_davinci, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, ceph=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=rhceph-container)
Dec 02 09:58:21 np0005541914.localdomain podman[301543]: 2025-12-02 09:58:21.069791687 +0000 UTC m=+0.078989621 container remove 4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_davinci, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.expose-services=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, build-date=2025-11-26T19:44:28Z, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: libpod-conmon-4ebdc0b41e9be42e581b8503b6f3b946226cb0bdaedd13e7d6f1a8c37eb6e1e2.scope: Deactivated successfully.
Dec 02 09:58:21 np0005541914.localdomain sshd[301559]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:58:21 np0005541914.localdomain systemd-sysv-generator[301587]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:58:21 np0005541914.localdomain systemd-rc-local-generator[301583]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-e59137448da341899478bbf4dcf4a802f4eda5b079ce78d7c8357844800a439d-merged.mount: Deactivated successfully.
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: Reloading.
Dec 02 09:58:21 np0005541914.localdomain systemd-rc-local-generator[301626]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 02 09:58:21 np0005541914.localdomain systemd-sysv-generator[301632]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Dec 02 09:58:21 np0005541914.localdomain sshd[301559]: Invalid user deploy from 34.78.29.97 port 38890
Dec 02 09:58:21 np0005541914.localdomain systemd[1]: Starting Ceph mon.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074...
Dec 02 09:58:22 np0005541914.localdomain sshd[301559]: Received disconnect from 34.78.29.97 port 38890:11: Bye Bye [preauth]
Dec 02 09:58:22 np0005541914.localdomain sshd[301559]: Disconnected from invalid user deploy 34.78.29.97 port 38890 [preauth]
Dec 02 09:58:22 np0005541914.localdomain podman[301691]: 
Dec 02 09:58:22 np0005541914.localdomain podman[301691]: 2025-12-02 09:58:22.209138524 +0000 UTC m=+0.074261507 container create a1c1451e33b032c48fb4a704a99f88b0ad72459003fc18f41bd18df3b3917cd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, version=7, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, architecture=x86_64, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, distribution-scope=public, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., ceph=True, RELEASE=main, io.buildah.version=1.41.4)
Dec 02 09:58:22 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8872674fa163d17628e702e21cb76be31def8e0c6eb5a34156d3c47d9a9d2a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 09:58:22 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8872674fa163d17628e702e21cb76be31def8e0c6eb5a34156d3c47d9a9d2a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 09:58:22 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8872674fa163d17628e702e21cb76be31def8e0c6eb5a34156d3c47d9a9d2a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 09:58:22 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8872674fa163d17628e702e21cb76be31def8e0c6eb5a34156d3c47d9a9d2a4/merged/var/lib/ceph/mon/ceph-np0005541914 supports timestamps until 2038 (0x7fffffff)
Dec 02 09:58:22 np0005541914.localdomain podman[301691]: 2025-12-02 09:58:22.175338792 +0000 UTC m=+0.040461795 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:22 np0005541914.localdomain podman[301691]: 2025-12-02 09:58:22.27523107 +0000 UTC m=+0.140354033 container init a1c1451e33b032c48fb4a704a99f88b0ad72459003fc18f41bd18df3b3917cd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, build-date=2025-11-26T19:44:28Z, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:58:22 np0005541914.localdomain podman[301691]: 2025-12-02 09:58:22.283848553 +0000 UTC m=+0.148971516 container start a1c1451e33b032c48fb4a704a99f88b0ad72459003fc18f41bd18df3b3917cd4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mon-np0005541914, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_CLEAN=True, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec 02 09:58:22 np0005541914.localdomain bash[301691]: a1c1451e33b032c48fb4a704a99f88b0ad72459003fc18f41bd18df3b3917cd4
Dec 02 09:58:22 np0005541914.localdomain systemd[1]: Started Ceph mon.np0005541914 for c7c8e171-a193-56fb-95fa-8879fcfa7074.
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: set uid:gid to 167:167 (ceph:ceph)
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable), process ceph-mon, pid 2
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: pidfile_write: ignore empty --pid-file
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: load: jerasure load: lrc 
Dec 02 09:58:22 np0005541914.localdomain sudo[301403]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: RocksDB version: 7.9.2
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Git sha 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Compile date 2025-09-23 00:00:00
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: DB SUMMARY
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: DB Session ID:  O7EMRIXC8F5M1Z077C5B
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: CURRENT file:  CURRENT
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: IDENTITY file:  IDENTITY
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005541914/store.db dir, Total Num: 0, files: 
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005541914/store.db: 000004.log size: 636 ; 
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                         Options.error_if_exists: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                       Options.create_if_missing: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                         Options.paranoid_checks: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                                     Options.env: 0x562ea16f79e0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                                Options.info_log: 0x562ea3be2d20
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                Options.max_file_opening_threads: 16
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                              Options.statistics: (nil)
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                               Options.use_fsync: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                       Options.max_log_file_size: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                         Options.allow_fallocate: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                        Options.use_direct_reads: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:          Options.create_missing_column_families: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                              Options.db_log_dir: 
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                                 Options.wal_dir: 
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                   Options.advise_random_on_open: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                    Options.write_buffer_manager: 0x562ea3bf3540
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                            Options.rate_limiter: (nil)
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                  Options.unordered_write: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                               Options.row_cache: None
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                              Options.wal_filter: None
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.allow_ingest_behind: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.two_write_queues: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.manual_wal_flush: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.wal_compression: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.atomic_flush: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                 Options.log_readahead_size: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.allow_data_in_errors: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.db_host_id: __hostname__
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.max_background_jobs: 2
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.max_background_compactions: -1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.max_subcompactions: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.max_total_wal_size: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                          Options.max_open_files: -1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                          Options.bytes_per_sync: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:       Options.compaction_readahead_size: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                  Options.max_background_flushes: -1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Compression algorithms supported:
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         kZSTD supported: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         kXpressCompression supported: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         kBZip2Compression supported: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         kLZ4Compression supported: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         kZlibCompression supported: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         kLZ4HCCompression supported: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         kSnappyCompression supported: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005541914/store.db/MANIFEST-000005
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:           Options.merge_operator: 
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:        Options.compaction_filter: None
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:        Options.compaction_filter_factory: None
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:  Options.sst_partitioner_factory: None
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562ea3be2980)
                                                             cache_index_and_filter_blocks: 1
                                                             cache_index_and_filter_blocks_with_high_priority: 0
                                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                                             pin_top_level_index_and_filter: 1
                                                             index_type: 0
                                                             data_block_index_type: 0
                                                             index_shortening: 1
                                                             data_block_hash_table_util_ratio: 0.750000
                                                             checksum: 4
                                                             no_block_cache: 0
                                                             block_cache: 0x562ea3bdf1f0
                                                             block_cache_name: BinnedLRUCache
                                                             block_cache_options:
                                                               capacity : 536870912
                                                               num_shard_bits : 4
                                                               strict_capacity_limit : 0
                                                               high_pri_pool_ratio: 0.000
                                                             block_cache_compressed: (nil)
                                                             persistent_cache: (nil)
                                                             block_size: 4096
                                                             block_size_deviation: 10
                                                             block_restart_interval: 16
                                                             index_block_restart_interval: 1
                                                             metadata_block_size: 4096
                                                             partition_filters: 0
                                                             use_delta_encoding: 1
                                                             filter_policy: bloomfilter
                                                             whole_key_filtering: 1
                                                             verify_compression: 0
                                                             read_amp_bytes_per_bit: 0
                                                             format_version: 5
                                                             enable_index_compression: 1
                                                             block_align: 0
                                                             max_auto_readahead_size: 262144
                                                             prepopulate_block_cache: 0
                                                             initial_auto_readahead_size: 8192
                                                             num_file_reads_for_auto_readahead: 2
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:        Options.write_buffer_size: 33554432
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:  Options.max_write_buffer_number: 2
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:          Options.compression: NoCompression
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:       Options.prefix_extractor: nullptr
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.num_levels: 7
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                  Options.compression_opts.level: 32767
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:               Options.compression_opts.strategy: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                  Options.compression_opts.enabled: false
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                        Options.arena_block_size: 1048576
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                Options.disable_auto_compactions: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                   Options.table_properties_collectors: 
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                   Options.inplace_update_support: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                           Options.bloom_locality: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                    Options.max_successive_merges: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                Options.paranoid_file_checks: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                Options.force_consistency_checks: 1
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                Options.report_bg_io_stats: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                               Options.ttl: 2592000
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                       Options.enable_blob_files: false
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                           Options.min_blob_size: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                          Options.blob_file_size: 268435456
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb:                Options.blob_file_starting_level: 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005541914/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2a601a42-6d19-4945-9484-73e64f055198
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669502334943, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669502337426, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 648, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 526, "raw_average_value_size": 105, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669502337564, "job": 1, "event": "recovery_finished"}
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562ea3c06e00
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: DB pointer 0x562ea3cfc000
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914 does not exist in monmap, will attempt to join an existing cluster
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      1/0    1.72 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                            Sum      1/0    1.72 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Flush(GB): cumulative 0.000, interval 0.000
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x562ea3bdf1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.3e-05 secs_since: 0
                                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: using public_addr v2:172.18.0.105:0/0 -> [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0]
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: starting mon.np0005541914 rank -1 at public addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] at bind addrs [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005541914 fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@-1(???) e0 preinit fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@-1(synchronizing) e16 sync_obtain_latest_monmap
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@-1(synchronizing) e16 sync_obtain_latest_monmap obtained monmap e16
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@-1(synchronizing).mds e16 new map
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@-1(synchronizing).mds e16 print_map
                                                           e16
                                                           enable_multiple, ever_enabled_multiple: 1,1
                                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           legacy client fscid: 1
                                                            
                                                           Filesystem 'cephfs' (1)
                                                           fs_name        cephfs
                                                           epoch        15
                                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                                           created        2025-12-02T08:05:53.424954+0000
                                                           modified        2025-12-02T09:52:13.505190+0000
                                                           tableserver        0
                                                           root        0
                                                           session_timeout        60
                                                           session_autoclose        300
                                                           max_file_size        1099511627776
                                                           required_client_features        {}
                                                           last_failure        0
                                                           last_failure_osd_epoch        84
                                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           max_mds        1
                                                           in        0
                                                           up        {0=26573}
                                                           failed        
                                                           damaged        
                                                           stopped        
                                                           data_pools        [6]
                                                           metadata_pool        7
                                                           inline_data        disabled
                                                           balancer        
                                                           bal_rank_mask        -1
                                                           standby_count_wanted        1
                                                           qdb_cluster        leader: 26573 members: 26573
                                                           [mds.mds.np0005541912.ghcwcm{0:26573} state up:active seq 13 addr [v2:172.18.0.106:6808/955707462,v1:172.18.0.106:6809/955707462] compat {c=[1],r=[1],i=[17ff]}]
                                                            
                                                            
                                                           Standby daemons:
                                                            
                                                           [mds.mds.np0005541914.sqgqkj{-1:16923} state up:standby seq 1 addr [v2:172.18.0.108:6808/2216063099,v1:172.18.0.108:6809/2216063099] compat {c=[1],r=[1],i=[17ff]}]
                                                           [mds.mds.np0005541913.maexpe{-1:26386} state up:standby seq 1 addr [v2:172.18.0.107:6808/3746047079,v1:172.18.0.107:6809/3746047079] compat {c=[1],r=[1],i=[17ff]}]
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@-1(synchronizing).osd e89 crush map has features 3314933000852226048, adjusting msgr requires
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@-1(synchronizing).osd e89 crush map has features 288514051259236352, adjusting msgr requires
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@-1(synchronizing).osd e89 crush map has features 288514051259236352, adjusting msgr requires
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@-1(synchronizing).osd e89 crush map has features 288514051259236352, adjusting msgr requires
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541913 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541912 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541913 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2)
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: monmap epoch 13
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: last_changed 2025-12-02T09:57:30.836166+0000
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: min_mon_release 18 (reef)
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: election_strategy: 1
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mgrmap e31: np0005541913.mfesdm(active, since 23s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005541914,np0005541912)
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Cluster is now healthy
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: overall HEALTH_OK
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.44380 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005541911.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Removed label _admin from host np0005541911.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541912 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541913 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541911 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913,np0005541911 in quorum (ranks 0,1,2,3)
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: monmap epoch 14
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: last_changed 2025-12-02T09:57:38.994501+0000
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: min_mon_release 18 (reef)
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: election_strategy: 1
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: 3: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541911
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mgrmap e31: np0005541913.mfesdm(active, since 29s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: overall HEALTH_OK
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Removing np0005541911.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Removing np0005541911.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541911"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Removing daemon mgr.np0005541911.adcgiw from np0005541911.localdomain -- ports [8765]
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Removing key for mgr.np0005541911.adcgiw
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Safe to remove mon.np0005541911: new quorum should be ['np0005541914', 'np0005541912', 'np0005541913'] (from ['np0005541914', 'np0005541912', 'np0005541913'])
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Removing monitor np0005541911 from monmap...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Removing daemon mon.np0005541911 from np0005541911.localdomain -- ports []
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541912 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541913 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914 is new leader, mons np0005541914,np0005541912 in quorum (ranks 0,1)
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541912 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: overall HEALTH_OK
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914 is new leader, mons np0005541914,np0005541912,np0005541913 in quorum (ranks 0,1,2)
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: monmap epoch 15
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: last_changed 2025-12-02T09:57:47.906570+0000
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: min_mon_release 18 (reef)
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: election_strategy: 1
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005541914
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mgrmap e31: np0005541913.mfesdm(active, since 38s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: overall HEALTH_OK
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.54101 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005541911.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Added label _no_schedule to host np0005541911.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005541911.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2425549797' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3063913601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.54116 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005541911.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1003759709' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3093024393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain"}]': finished
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.34455 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005541911.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Removed host np0005541911.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/464441794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.2 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2656442173' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.5 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mon.np0005541912 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.0 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/486568655' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/486568655' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.34458 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Saving service mon spec with placement label:mon
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.3 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.54149 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541914", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.44434 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005541914"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Remove daemons mon.np0005541914
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "quorum_status"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon rm", "name": "np0005541914"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Safe to remove mon.np0005541914: new quorum should be ['np0005541912', 'np0005541913'] (from ['np0005541912', 'np0005541913'])
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Removing monitor np0005541914 from monmap...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Removing daemon mon.np0005541914 from np0005541914.localdomain -- ports []
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541912 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541913 calling monitor election
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541912 is new leader, mons np0005541912,np0005541913 in quorum (ranks 0,1)
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: monmap epoch 16
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: last_changed 2025-12-02T09:58:06.915003+0000
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: min_mon_release 18 (reef)
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: election_strategy: 1
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mgrmap e31: np0005541913.mfesdm(active, since 52s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: overall HEALTH_OK
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring crash.np0005541914 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.1 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.4 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.2 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.5 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='client.54159 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005541914.localdomain:172.18.0.105", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Deploying daemon mon.np0005541914 on np0005541914.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.0 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.3 (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@-1(synchronizing).paxosservice(auth 1..40) refresh upgraded, format 0 -> 3
Dec 02 09:58:24 np0005541914.localdomain sudo[301749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:24 np0005541914.localdomain sudo[301749]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:24 np0005541914.localdomain sudo[301749]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:25 np0005541914.localdomain sudo[301767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:58:25 np0005541914.localdomain sudo[301767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:25 np0005541914.localdomain podman[301858]: 2025-12-02 09:58:25.823691252 +0000 UTC m=+0.101310352 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, RELEASE=main, version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 09:58:25 np0005541914.localdomain podman[301858]: 2025-12-02 09:58:25.922101214 +0000 UTC m=+0.199720364 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, release=1763362218, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-type=git, ceph=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, description=Red Hat Ceph Storage 7)
Dec 02 09:58:26 np0005541914.localdomain sudo[301767]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:26 np0005541914.localdomain sudo[301975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:26 np0005541914.localdomain sudo[301975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:26 np0005541914.localdomain sudo[301975]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:26 np0005541914.localdomain sudo[301993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:58:26 np0005541914.localdomain sudo[301993]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:27 np0005541914.localdomain sudo[301993]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:27 np0005541914.localdomain ceph-mgr[287188]: ms_deliver_dispatch: unhandled message 0x5619910b8000 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0
Dec 02 09:58:27 np0005541914.localdomain sudo[302043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:58:27 np0005541914.localdomain sudo[302043]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:27 np0005541914.localdomain sudo[302043]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:58:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:58:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:58:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:58:27 np0005541914.localdomain systemd[1]: tmp-crun.QBiGQK.mount: Deactivated successfully.
Dec 02 09:58:27 np0005541914.localdomain podman[302061]: 2025-12-02 09:58:27.761740775 +0000 UTC m=+0.107235123 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 02 09:58:27 np0005541914.localdomain podman[302061]: 2025-12-02 09:58:27.795965519 +0000 UTC m=+0.141459817 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 02 09:58:27 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:58:27 np0005541914.localdomain podman[302063]: 2025-12-02 09:58:27.851432311 +0000 UTC m=+0.190215114 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 02 09:58:27 np0005541914.localdomain podman[302062]: 2025-12-02 09:58:27.88221686 +0000 UTC m=+0.224822940 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 09:58:27 np0005541914.localdomain podman[302063]: 2025-12-02 09:58:27.886926733 +0000 UTC m=+0.225709606 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 09:58:27 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:58:27 np0005541914.localdomain podman[302067]: 2025-12-02 09:58:27.812923136 +0000 UTC m=+0.147944625 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 09:58:27 np0005541914.localdomain podman[302062]: 2025-12-02 09:58:27.917914318 +0000 UTC m=+0.260520438 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:58:27 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:58:27 np0005541914.localdomain podman[302067]: 2025-12-02 09:58:27.943254321 +0000 UTC m=+0.278275840 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec 02 09:58:27 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@-1(probing) e17  my rank is now 2 (was -1)
Dec 02 09:58:28 np0005541914.localdomain systemd[1]: tmp-crun.pqstXE.mount: Deactivated successfully.
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: log_channel(cluster) log [INF] : mon.np0005541914 calling monitor election
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: paxos.2).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 02 09:58:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(electing) e17 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(electing) e17 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: mgrc update_daemon_metadata mon.np0005541914 metadata {addrs=[v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-361.el9cp (439dcd6094d413840eb2ec590fe2194ec616687f) reef (stable),ceph_version_short=18.2.1-361.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005541914.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005541914.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux}
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541912 calling monitor election
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541913 calling monitor election
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: Reconfiguring crash.np0005541912 (monmap changed)...
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914 calling monitor election
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541912 is new leader, mons np0005541912,np0005541913,np0005541914 in quorum (ranks 0,1,2)
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: monmap epoch 17
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: last_changed 2025-12-02T09:58:27.543539+0000
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: created 2025-12-02T07:44:17.411659+0000
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: min_mon_release 18 (reef)
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: election_strategy: 1
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005541912
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005541913
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: 2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005541914
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: fsmap cephfs:1 {0=mds.np0005541912.ghcwcm=up:active} 2 up:standby
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: osdmap e89: 6 total, 6 up, 6 in
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: mgrmap e31: np0005541913.mfesdm(active, since 78s), standbys: np0005541912.qwddia, np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541912.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: overall HEALTH_OK
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:33 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_auth_request failed to assign global_id
Dec 02 09:58:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:58:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:58:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:58:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 09:58:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:58:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19178 "" "Go-http-client/1.1"
Dec 02 09:58:33 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon crash.np0005541912 on np0005541912.localdomain
Dec 02 09:58:33 np0005541914.localdomain ceph-mon[301710]: pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:33 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:58:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_auth_request failed to assign global_id
Dec 02 09:58:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:34 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.2 (monmap changed)...
Dec 02 09:58:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:58:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:34 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:58:34 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.200:0/1722694794' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 02 09:58:35 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_auth_request failed to assign global_id
Dec 02 09:58:35 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_auth_request failed to assign global_id
Dec 02 09:58:36 np0005541914.localdomain ceph-mon[301710]: pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:36 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.5 (monmap changed)...
Dec 02 09:58:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:58:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:36 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:58:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:36 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_auth_request failed to assign global_id
Dec 02 09:58:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:37.047 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:58:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:37 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mds.mds.np0005541912.ghcwcm (monmap changed)...
Dec 02 09:58:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541912.ghcwcm", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:37 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541912.ghcwcm on np0005541912.localdomain
Dec 02 09:58:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541912.qwddia", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:58:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:58:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:58:38 np0005541914.localdomain systemd[1]: tmp-crun.x5inZs.mount: Deactivated successfully.
Dec 02 09:58:38 np0005541914.localdomain podman[302145]: 2025-12-02 09:58:38.118948477 +0000 UTC m=+0.124729137 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 09:58:38 np0005541914.localdomain podman[302145]: 2025-12-02 09:58:38.127086965 +0000 UTC m=+0.132867625 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:58:38 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:58:38 np0005541914.localdomain podman[302146]: 2025-12-02 09:58:38.130808478 +0000 UTC m=+0.133140893 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350)
Dec 02 09:58:38 np0005541914.localdomain podman[302146]: 2025-12-02 09:58:38.213974896 +0000 UTC m=+0.216307301 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, architecture=x86_64, io.buildah.version=1.33.7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, release=1755695350, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 02 09:58:38 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mgr.np0005541912.qwddia (monmap changed)...
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mgr.np0005541912.qwddia on np0005541912.localdomain
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541913.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:39 np0005541914.localdomain ceph-mon[301710]: Reconfiguring crash.np0005541913 (monmap changed)...
Dec 02 09:58:39 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon crash.np0005541913 on np0005541913.localdomain
Dec 02 09:58:39 np0005541914.localdomain ceph-mon[301710]: from='client.64100 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:58:39 np0005541914.localdomain ceph-mon[301710]: Reconfig service osd.default_drive_group
Dec 02 09:58:39 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:39 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:39 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:58:39 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 172.18.0.107:0/3692232454' entity='mgr.np0005541913.mfesdm' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:39 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e89 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 02 09:58:39 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e89 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 02 09:58:39 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 e90: 6 total, 6 up, 6 in
Dec 02 09:58:40 np0005541914.localdomain sshd[297063]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 09:58:40 np0005541914.localdomain systemd[1]: session-66.scope: Deactivated successfully.
Dec 02 09:58:40 np0005541914.localdomain systemd[1]: session-66.scope: Consumed 23.312s CPU time.
Dec 02 09:58:40 np0005541914.localdomain systemd-logind[760]: Session 66 logged out. Waiting for processes to exit.
Dec 02 09:58:40 np0005541914.localdomain systemd-logind[760]: Removed session 66.
Dec 02 09:58:40 np0005541914.localdomain sshd[302187]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:58:40 np0005541914.localdomain sshd[302187]: Accepted publickey for ceph-admin from 192.168.122.106 port 60582 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 09:58:40 np0005541914.localdomain systemd-logind[760]: New session 70 of user ceph-admin.
Dec 02 09:58:40 np0005541914.localdomain systemd[1]: Started Session 70 of User ceph-admin.
Dec 02 09:58:40 np0005541914.localdomain sshd[302187]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.0 (monmap changed)...
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.200:0/3934454104' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: Activating manager daemon np0005541912.qwddia
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: osdmap e90: 6 total, 6 up, 6 in
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.200:0/3934454104' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: mgrmap e32: np0005541912.qwddia(active, starting, since 0.0452632s), standbys: np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mgr metadata", "who": "np0005541911.adcgiw", "id": "np0005541911.adcgiw"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26470 ' entity='mgr.np0005541913.mfesdm' 
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mds metadata"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "osd metadata"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mon metadata"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: Manager daemon np0005541912.qwddia is now available
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: removing stray HostCache host record np0005541911.localdomain.devices.0
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain.devices.0"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain.devices.0"}]': finished
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain.devices.0"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005541911.localdomain.devices.0"}]': finished
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541912.qwddia/mirror_snapshot_schedule"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541912.qwddia/trash_purge_schedule"} : dispatch
Dec 02 09:58:40 np0005541914.localdomain sudo[302191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:40 np0005541914.localdomain sudo[302191]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:40 np0005541914.localdomain sudo[302191]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:40 np0005541914.localdomain sudo[302209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 09:58:40 np0005541914.localdomain sudo[302209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:41 np0005541914.localdomain podman[302297]: 2025-12-02 09:58:41.429341105 +0000 UTC m=+0.091235304 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=1763362218, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, RELEASE=main, GIT_CLEAN=True, ceph=True, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=)
Dec 02 09:58:41 np0005541914.localdomain podman[302297]: 2025-12-02 09:58:41.531974626 +0000 UTC m=+0.193868775 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, distribution-scope=public, release=1763362218, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.buildah.version=1.41.4, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, ceph=True, vcs-type=git, GIT_BRANCH=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, architecture=x86_64)
Dec 02 09:58:41 np0005541914.localdomain ceph-mon[301710]: mgrmap e33: np0005541912.qwddia(active, since 1.09574s), standbys: np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:58:41 np0005541914.localdomain ceph-mon[301710]: pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:58:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:58:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:58:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:58:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:58:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:58:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:58:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:58:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:58:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:58:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:58:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:58:42 np0005541914.localdomain sudo[302209]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:42 np0005541914.localdomain sudo[302414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:42 np0005541914.localdomain sudo[302414]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:58:42 np0005541914.localdomain sudo[302414]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:42 np0005541914.localdomain sudo[302433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 09:58:42 np0005541914.localdomain sudo[302433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:42 np0005541914.localdomain podman[302432]: 2025-12-02 09:58:42.349804435 +0000 UTC m=+0.095645678 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 09:58:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1019530109 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:58:42 np0005541914.localdomain podman[302432]: 2025-12-02 09:58:42.361137851 +0000 UTC m=+0.106979084 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 09:58:42 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:58:42 np0005541914.localdomain sudo[302433]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:43 np0005541914.localdomain ceph-mon[301710]: [02/Dec/2025:09:58:41] ENGINE Bus STARTING
Dec 02 09:58:43 np0005541914.localdomain ceph-mon[301710]: [02/Dec/2025:09:58:41] ENGINE Serving on http://172.18.0.106:8765
Dec 02 09:58:43 np0005541914.localdomain ceph-mon[301710]: [02/Dec/2025:09:58:41] ENGINE Serving on https://172.18.0.106:7150
Dec 02 09:58:43 np0005541914.localdomain ceph-mon[301710]: [02/Dec/2025:09:58:41] ENGINE Bus STARTED
Dec 02 09:58:43 np0005541914.localdomain ceph-mon[301710]: [02/Dec/2025:09:58:41] ENGINE Client ('172.18.0.106', 43976) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 02 09:58:43 np0005541914.localdomain ceph-mon[301710]: pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:43 np0005541914.localdomain ceph-mon[301710]: mgrmap e34: np0005541912.qwddia(active, since 2s), standbys: np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:58:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:43 np0005541914.localdomain sudo[302501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:43 np0005541914.localdomain sudo[302501]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:43 np0005541914.localdomain sudo[302501]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:43 np0005541914.localdomain sudo[302519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 02 09:58:43 np0005541914.localdomain sudo[302519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:43 np0005541914.localdomain sudo[302519]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:43 np0005541914.localdomain sudo[302556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:58:43 np0005541914.localdomain sudo[302556]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:43 np0005541914.localdomain sudo[302556]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:43 np0005541914.localdomain sudo[302574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:58:43 np0005541914.localdomain sudo[302574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:43 np0005541914.localdomain sudo[302574]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:44 np0005541914.localdomain sudo[302592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:58:44 np0005541914.localdomain sudo[302592]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:44 np0005541914.localdomain sudo[302592]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:44 np0005541914.localdomain sudo[302610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:44 np0005541914.localdomain sudo[302610]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:44 np0005541914.localdomain sudo[302610]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:44 np0005541914.localdomain sudo[302628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:58:44 np0005541914.localdomain sudo[302628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:44 np0005541914.localdomain sudo[302628]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:44 np0005541914.localdomain sudo[302662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:58:44 np0005541914.localdomain sudo[302662]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:44 np0005541914.localdomain sudo[302662]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:44 np0005541914.localdomain sudo[302680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 09:58:44 np0005541914.localdomain sudo[302680]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:44 np0005541914.localdomain sudo[302680]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:44 np0005541914.localdomain sudo[302698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 09:58:44 np0005541914.localdomain sudo[302698]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:44 np0005541914.localdomain sudo[302698]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:44 np0005541914.localdomain sudo[302716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:58:44 np0005541914.localdomain sudo[302716]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:44 np0005541914.localdomain sudo[302716]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:44 np0005541914.localdomain sudo[302734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:58:44 np0005541914.localdomain sudo[302734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 09:58:44 np0005541914.localdomain ceph-mon[301710]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:44 np0005541914.localdomain sudo[302734]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:44 np0005541914.localdomain sudo[302752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:58:44 np0005541914.localdomain sudo[302752]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:44 np0005541914.localdomain sudo[302752]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:44 np0005541914.localdomain sudo[302770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:44 np0005541914.localdomain sudo[302770]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:44 np0005541914.localdomain sudo[302770]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:44 np0005541914.localdomain sudo[302788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:58:44 np0005541914.localdomain sudo[302788]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:44 np0005541914.localdomain sudo[302788]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:44 np0005541914.localdomain sudo[302822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:58:44 np0005541914.localdomain sudo[302822]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:44 np0005541914.localdomain sudo[302822]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[302840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 09:58:45 np0005541914.localdomain sudo[302840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[302840]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[302858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:45 np0005541914.localdomain sudo[302858]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[302858]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[302876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 09:58:45 np0005541914.localdomain sudo[302876]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[302876]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[302894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 09:58:45 np0005541914.localdomain sudo[302894]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[302894]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[302912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:58:45 np0005541914.localdomain sudo[302912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[302912]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[302930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:45 np0005541914.localdomain sudo[302930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[302930]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[302948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:58:45 np0005541914.localdomain sudo[302948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[302948]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[302982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:58:45 np0005541914.localdomain sudo[302982]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[302982]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[303000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 09:58:45 np0005541914.localdomain sudo[303000]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[303000]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[303018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 02 09:58:45 np0005541914.localdomain sudo[303018]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[303018]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[303036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:58:45 np0005541914.localdomain sudo[303036]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[303036]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[303054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 09:58:45 np0005541914.localdomain sudo[303054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[303054]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:45 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:45 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 09:58:45 np0005541914.localdomain ceph-mon[301710]: mgrmap e35: np0005541912.qwddia(active, since 4s), standbys: np0005541911.adcgiw, np0005541914.lljzmk
Dec 02 09:58:45 np0005541914.localdomain ceph-mon[301710]: Standby manager daemon np0005541913.mfesdm started
Dec 02 09:58:45 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:58:45 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:58:45 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 09:58:45 np0005541914.localdomain sudo[303072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:58:45 np0005541914.localdomain sudo[303072]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[303072]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[303090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:45 np0005541914.localdomain sudo[303090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[303090]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:45 np0005541914.localdomain sudo[303108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:58:45 np0005541914.localdomain sudo[303108]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:45 np0005541914.localdomain sudo[303108]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:46 np0005541914.localdomain sudo[303142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:58:46 np0005541914.localdomain sudo[303142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:46 np0005541914.localdomain sudo[303142]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:46 np0005541914.localdomain sudo[303160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 09:58:46 np0005541914.localdomain sudo[303160]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:46 np0005541914.localdomain sudo[303160]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:46 np0005541914.localdomain sudo[303178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:58:46 np0005541914.localdomain sudo[303178]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:46 np0005541914.localdomain sudo[303178]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:46 np0005541914.localdomain sudo[303196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:58:46 np0005541914.localdomain sudo[303196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:46 np0005541914.localdomain sudo[303196]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: mgrmap e36: np0005541912.qwddia(active, since 5s), standbys: np0005541911.adcgiw, np0005541914.lljzmk, np0005541913.mfesdm
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} : dispatch
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 02 09:58:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020041399 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:58:47 np0005541914.localdomain ceph-mon[301710]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 0 B/s wr, 21 op/s
Dec 02 09:58:47 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.2 on np0005541912.localdomain
Dec 02 09:58:47 np0005541914.localdomain ceph-mon[301710]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Dec 02 09:58:47 np0005541914.localdomain ceph-mon[301710]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Dec 02 09:58:47 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:47 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:47 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:47 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:47 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Dec 02 09:58:47 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:48 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.5 on np0005541912.localdomain
Dec 02 09:58:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 02 09:58:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:49 np0005541914.localdomain ceph-mon[301710]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 0 B/s wr, 15 op/s
Dec 02 09:58:49 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.0 on np0005541913.localdomain
Dec 02 09:58:49 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:49 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:49 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:49 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:49 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Dec 02 09:58:49 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:51 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.3 (monmap changed)...
Dec 02 09:58:51 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.3 on np0005541913.localdomain
Dec 02 09:58:51 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:51 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:51 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:51 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:51 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:51 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541913.maexpe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:51 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:52 np0005541914.localdomain ceph-mon[301710]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 0 B/s wr, 12 op/s
Dec 02 09:58:52 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mds.mds.np0005541913.maexpe (monmap changed)...
Dec 02 09:58:52 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541913.maexpe on np0005541913.localdomain
Dec 02 09:58:52 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/481852052' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:52 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:52 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:52 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541913.mfesdm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:52 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:58:52 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054389 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:58:52 np0005541914.localdomain sudo[303214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:52 np0005541914.localdomain sudo[303214]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:52 np0005541914.localdomain sudo[303214]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:52 np0005541914.localdomain sudo[303232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:52 np0005541914.localdomain sudo[303232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:53 np0005541914.localdomain ceph-mon[301710]: from='client.64127 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:58:53 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mgr.np0005541913.mfesdm (monmap changed)...
Dec 02 09:58:53 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mgr.np0005541913.mfesdm on np0005541913.localdomain
Dec 02 09:58:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005541914.localdomain", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 02 09:58:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:53 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1246894869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:53 np0005541914.localdomain podman[303267]: 
Dec 02 09:58:53 np0005541914.localdomain podman[303267]: 2025-12-02 09:58:53.333859073 +0000 UTC m=+0.071809263 container create aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_feistel, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, version=7, GIT_CLEAN=True, distribution-scope=public, name=rhceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:58:53 np0005541914.localdomain systemd[1]: Started libpod-conmon-aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee.scope.
Dec 02 09:58:53 np0005541914.localdomain podman[303267]: 2025-12-02 09:58:53.307442617 +0000 UTC m=+0.045392847 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:53 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:58:53 np0005541914.localdomain podman[303267]: 2025-12-02 09:58:53.428225972 +0000 UTC m=+0.166176142 container init aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_feistel, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.buildah.version=1.41.4, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.tags=rhceph ceph, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, version=7, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:58:53 np0005541914.localdomain systemd[1]: tmp-crun.qjucLs.mount: Deactivated successfully.
Dec 02 09:58:53 np0005541914.localdomain podman[303267]: 2025-12-02 09:58:53.44689437 +0000 UTC m=+0.184844560 container start aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_feistel, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, vendor=Red Hat, Inc., architecture=x86_64, RELEASE=main, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container)
Dec 02 09:58:53 np0005541914.localdomain podman[303267]: 2025-12-02 09:58:53.449008505 +0000 UTC m=+0.186958735 container attach aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_feistel, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, CEPH_POINT_RELEASE=, build-date=2025-11-26T19:44:28Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1763362218, RELEASE=main, io.buildah.version=1.41.4, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_CLEAN=True)
Dec 02 09:58:53 np0005541914.localdomain eager_feistel[303282]: 167 167
Dec 02 09:58:53 np0005541914.localdomain systemd[1]: libpod-aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee.scope: Deactivated successfully.
Dec 02 09:58:53 np0005541914.localdomain podman[303267]: 2025-12-02 09:58:53.450822131 +0000 UTC m=+0.188772331 container died aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_feistel, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, release=1763362218, name=rhceph, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:58:53 np0005541914.localdomain podman[303287]: 2025-12-02 09:58:53.542155157 +0000 UTC m=+0.079248649 container remove aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_feistel, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, RELEASE=main, release=1763362218, io.openshift.expose-services=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7)
Dec 02 09:58:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:53.543 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:58:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:53.544 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:58:53 np0005541914.localdomain systemd[1]: libpod-conmon-aadbca85f537e44ec1ee3d7457344edc9833da97c3962c7f3cc86db73b21feee.scope: Deactivated successfully.
Dec 02 09:58:53 np0005541914.localdomain sudo[303232]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:53 np0005541914.localdomain sudo[303303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:53 np0005541914.localdomain sudo[303303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:53 np0005541914.localdomain sudo[303303]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:53 np0005541914.localdomain sudo[303321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:53 np0005541914.localdomain sudo[303321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:54 np0005541914.localdomain ceph-mon[301710]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 02 09:58:54 np0005541914.localdomain ceph-mon[301710]: Reconfiguring crash.np0005541914 (monmap changed)...
Dec 02 09:58:54 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon crash.np0005541914 on np0005541914.localdomain
Dec 02 09:58:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 02 09:58:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:54 np0005541914.localdomain podman[303355]: 
Dec 02 09:58:54 np0005541914.localdomain podman[303355]: 2025-12-02 09:58:54.200735178 +0000 UTC m=+0.083654443 container create c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_cerf, build-date=2025-11-26T19:44:28Z, version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, architecture=x86_64, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=1763362218, vcs-type=git, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, name=rhceph, GIT_CLEAN=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=)
Dec 02 09:58:54 np0005541914.localdomain systemd[1]: Started libpod-conmon-c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb.scope.
Dec 02 09:58:54 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:58:54 np0005541914.localdomain podman[303355]: 2025-12-02 09:58:54.163008407 +0000 UTC m=+0.045927692 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:54 np0005541914.localdomain podman[303355]: 2025-12-02 09:58:54.263692689 +0000 UTC m=+0.146611944 container init c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_cerf, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git)
Dec 02 09:58:54 np0005541914.localdomain podman[303355]: 2025-12-02 09:58:54.273188028 +0000 UTC m=+0.156107293 container start c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_cerf, RELEASE=main, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, distribution-scope=public, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, io.openshift.expose-services=, ceph=True, build-date=2025-11-26T19:44:28Z, version=7)
Dec 02 09:58:54 np0005541914.localdomain podman[303355]: 2025-12-02 09:58:54.273470317 +0000 UTC m=+0.156389552 container attach c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_cerf, io.openshift.expose-services=, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, version=7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-type=git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public)
Dec 02 09:58:54 np0005541914.localdomain youthful_cerf[303370]: 167 167
Dec 02 09:58:54 np0005541914.localdomain systemd[1]: libpod-c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb.scope: Deactivated successfully.
Dec 02 09:58:54 np0005541914.localdomain podman[303355]: 2025-12-02 09:58:54.276035895 +0000 UTC m=+0.158955160 container died c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_cerf, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, ceph=True, release=1763362218, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, name=rhceph, version=7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public)
Dec 02 09:58:54 np0005541914.localdomain systemd[1]: tmp-crun.tkDCzX.mount: Deactivated successfully.
Dec 02 09:58:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-8bd2fdd2c9a493bc38c8b9420a1d473bcc0050df428b713cd211592315aec6cd-merged.mount: Deactivated successfully.
Dec 02 09:58:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-db2dffee161b65f57f5f95e74589b0168517f5da6ca64dfc9c445f128c2ec216-merged.mount: Deactivated successfully.
Dec 02 09:58:54 np0005541914.localdomain podman[303375]: 2025-12-02 09:58:54.378427829 +0000 UTC m=+0.087717237 container remove c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_cerf, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, version=7, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=)
Dec 02 09:58:54 np0005541914.localdomain systemd[1]: libpod-conmon-c50febf95d72f1bf88dfd7c1e14920c90469fab8aa360ce58a3f96e92290bbbb.scope: Deactivated successfully.
Dec 02 09:58:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:54.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:58:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:54.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:58:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:54.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:58:54 np0005541914.localdomain sudo[303321]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:54.675 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:58:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:54.676 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:58:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:54.676 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:58:54 np0005541914.localdomain sudo[303399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:54 np0005541914.localdomain sudo[303399]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:54 np0005541914.localdomain sudo[303399]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:54 np0005541914.localdomain sudo[303417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:54 np0005541914.localdomain sudo[303417]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:55 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.1 (monmap changed)...
Dec 02 09:58:55 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.1 on np0005541914.localdomain
Dec 02 09:58:55 np0005541914.localdomain ceph-mon[301710]: from='client.64130 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 09:58:55 np0005541914.localdomain ceph-mon[301710]: Saving service mon spec with placement label:mon
Dec 02 09:58:55 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:55 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:55 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:55 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:55 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Dec 02 09:58:55 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:55 np0005541914.localdomain podman[303452]: 
Dec 02 09:58:55 np0005541914.localdomain podman[303452]: 2025-12-02 09:58:55.234512805 +0000 UTC m=+0.066493229 container create d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_kalam, GIT_BRANCH=main, CEPH_POINT_RELEASE=, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.component=rhceph-container, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, version=7, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.41.4, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Dec 02 09:58:55 np0005541914.localdomain systemd[1]: Started libpod-conmon-d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a.scope.
Dec 02 09:58:55 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:58:55 np0005541914.localdomain podman[303452]: 2025-12-02 09:58:55.203909232 +0000 UTC m=+0.035889666 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:55 np0005541914.localdomain podman[303452]: 2025-12-02 09:58:55.303419217 +0000 UTC m=+0.135399631 container init d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_kalam, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, io.openshift.expose-services=, release=1763362218, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 09:58:55 np0005541914.localdomain podman[303452]: 2025-12-02 09:58:55.31237521 +0000 UTC m=+0.144355594 container start d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_kalam, distribution-scope=public, GIT_CLEAN=True, RELEASE=main, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.buildah.version=1.41.4, GIT_BRANCH=main, CEPH_POINT_RELEASE=, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Dec 02 09:58:55 np0005541914.localdomain podman[303452]: 2025-12-02 09:58:55.312725531 +0000 UTC m=+0.144705955 container attach d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_kalam, vcs-type=git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.41.4, version=7, vendor=Red Hat, Inc., ceph=True)
Dec 02 09:58:55 np0005541914.localdomain crazy_kalam[303467]: 167 167
Dec 02 09:58:55 np0005541914.localdomain systemd[1]: libpod-d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a.scope: Deactivated successfully.
Dec 02 09:58:55 np0005541914.localdomain podman[303452]: 2025-12-02 09:58:55.315882797 +0000 UTC m=+0.147863221 container died d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_kalam, version=7, vcs-type=git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, release=1763362218, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, distribution-scope=public, io.buildah.version=1.41.4, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z)
Dec 02 09:58:55 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-20245e23ab31c2e5ab74780b8cd433c5950892d5a190813f0ed10f98125e47a6-merged.mount: Deactivated successfully.
Dec 02 09:58:55 np0005541914.localdomain podman[303472]: 2025-12-02 09:58:55.411805784 +0000 UTC m=+0.082527209 container remove d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=crazy_kalam, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, RELEASE=main, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-11-26T19:44:28Z, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, version=7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.41.4)
Dec 02 09:58:55 np0005541914.localdomain systemd[1]: libpod-conmon-d822887a26ede71ef602192182ed6aa8a29c9ded8280d0046e8df8792550975a.scope: Deactivated successfully.
Dec 02 09:58:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:55.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:58:55 np0005541914.localdomain sudo[303417]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:55 np0005541914.localdomain sudo[303495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:55 np0005541914.localdomain sudo[303495]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:55 np0005541914.localdomain sudo[303495]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:55 np0005541914.localdomain sudo[303513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:55 np0005541914.localdomain sudo[303513]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:56 np0005541914.localdomain podman[303549]: 
Dec 02 09:58:56 np0005541914.localdomain podman[303549]: 2025-12-02 09:58:56.172461468 +0000 UTC m=+0.046502499 container create 365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_mccarthy, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.buildah.version=1.41.4, RELEASE=main, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7)
Dec 02 09:58:56 np0005541914.localdomain systemd[1]: Started libpod-conmon-365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839.scope.
Dec 02 09:58:56 np0005541914.localdomain ceph-mon[301710]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 02 09:58:56 np0005541914.localdomain ceph-mon[301710]: Reconfiguring osd.4 (monmap changed)...
Dec 02 09:58:56 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon osd.4 on np0005541914.localdomain
Dec 02 09:58:56 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:56 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:56 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:56 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:56 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005541914.sqgqkj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 02 09:58:56 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:56 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:58:56 np0005541914.localdomain podman[303549]: 2025-12-02 09:58:56.239799453 +0000 UTC m=+0.113840504 container init 365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_mccarthy, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, release=1763362218, version=7, io.openshift.expose-services=, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:58:56 np0005541914.localdomain affectionate_mccarthy[303564]: 167 167
Dec 02 09:58:56 np0005541914.localdomain podman[303549]: 2025-12-02 09:58:56.246808847 +0000 UTC m=+0.120849908 container start 365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_mccarthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.buildah.version=1.41.4, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec 02 09:58:56 np0005541914.localdomain podman[303549]: 2025-12-02 09:58:56.247095896 +0000 UTC m=+0.121136957 container attach 365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_mccarthy, com.redhat.component=rhceph-container, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.41.4, io.openshift.expose-services=, CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git)
Dec 02 09:58:56 np0005541914.localdomain systemd[1]: libpod-365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839.scope: Deactivated successfully.
Dec 02 09:58:56 np0005541914.localdomain podman[303549]: 2025-12-02 09:58:56.24954354 +0000 UTC m=+0.123584641 container died 365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_mccarthy, vcs-type=git, io.buildah.version=1.41.4, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, description=Red Hat Ceph Storage 7)
Dec 02 09:58:56 np0005541914.localdomain podman[303549]: 2025-12-02 09:58:56.152252212 +0000 UTC m=+0.026293273 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:56 np0005541914.localdomain podman[303569]: 2025-12-02 09:58:56.331304134 +0000 UTC m=+0.071680898 container remove 365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_mccarthy, io.openshift.expose-services=, RELEASE=main, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, distribution-scope=public, architecture=x86_64, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7)
Dec 02 09:58:56 np0005541914.localdomain systemd[1]: libpod-conmon-365cddb5b023e94a164b94acafd08b5b87e2709cce94942ec55db0eadf4ca839.scope: Deactivated successfully.
Dec 02 09:58:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3b3da472257dce8851c305b949cf3b9500004340f7e88cbb81c0ade7fefeb9ec-merged.mount: Deactivated successfully.
Dec 02 09:58:56 np0005541914.localdomain sudo[303513]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:56 np0005541914.localdomain sudo[303584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:56 np0005541914.localdomain sudo[303584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:56 np0005541914.localdomain sudo[303584]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:56.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:58:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:56.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:58:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:56.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:58:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:56.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:58:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:56.551 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:58:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:56.551 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:58:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:56.551 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:58:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:56.551 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:58:56 np0005541914.localdomain sudo[303602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:56 np0005541914.localdomain sudo[303602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:56 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 09:58:56 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1267034607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:56.960 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:58:57 np0005541914.localdomain podman[303659]: 
Dec 02 09:58:57 np0005541914.localdomain podman[303659]: 2025-12-02 09:58:57.101701217 +0000 UTC m=+0.081872059 container create fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_raman, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, vcs-type=git, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, release=1763362218, RELEASE=main, GIT_BRANCH=main, com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Dec 02 09:58:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:57.130 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:58:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:57.131 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11959MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:58:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:57.131 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:58:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:57.131 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: Started libpod-conmon-fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4.scope.
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:58:57 np0005541914.localdomain podman[303659]: 2025-12-02 09:58:57.172923819 +0000 UTC m=+0.153094691 container init fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_raman, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, ceph=True, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, release=1763362218, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=)
Dec 02 09:58:57 np0005541914.localdomain podman[303659]: 2025-12-02 09:58:57.074466076 +0000 UTC m=+0.054636968 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: tmp-crun.1vwcUb.mount: Deactivated successfully.
Dec 02 09:58:57 np0005541914.localdomain podman[303659]: 2025-12-02 09:58:57.186243476 +0000 UTC m=+0.166414358 container start fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_raman, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, io.openshift.expose-services=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., ceph=True, distribution-scope=public, vcs-type=git, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, architecture=x86_64, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, com.redhat.component=rhceph-container, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph)
Dec 02 09:58:57 np0005541914.localdomain podman[303659]: 2025-12-02 09:58:57.186523414 +0000 UTC m=+0.166694296 container attach fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_raman, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, version=7, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_BRANCH=main, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Dec 02 09:58:57 np0005541914.localdomain pedantic_raman[303674]: 167 167
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: libpod-fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4.scope: Deactivated successfully.
Dec 02 09:58:57 np0005541914.localdomain podman[303659]: 2025-12-02 09:58:57.189792774 +0000 UTC m=+0.169963656 container died fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_raman, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, build-date=2025-11-26T19:44:28Z, description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:58:57 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mds.mds.np0005541914.sqgqkj (monmap changed)...
Dec 02 09:58:57 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mds.mds.np0005541914.sqgqkj on np0005541914.localdomain
Dec 02 09:58:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005541914.lljzmk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 02 09:58:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "mgr services"} : dispatch
Dec 02 09:58:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:57 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1267034607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:57 np0005541914.localdomain podman[303679]: 2025-12-02 09:58:57.262393709 +0000 UTC m=+0.064390516 container remove fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pedantic_raman, GIT_CLEAN=True, io.buildah.version=1.41.4, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1763362218, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, build-date=2025-11-26T19:44:28Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True)
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: libpod-conmon-fddf5498f9827e7a622bccf55bc0b9e8eaa487ddf7f63f204bd19e31b4433de4.scope: Deactivated successfully.
Dec 02 09:58:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:57.306 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:58:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:57.307 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:58:57 np0005541914.localdomain sudo[303602]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:57.335 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-495600bccb1dc696ff159e62514e2403c9fc286289c8abd65f5f5bcfa3408f88-merged.mount: Deactivated successfully.
Dec 02 09:58:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054722 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:58:57 np0005541914.localdomain sudo[303696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 09:58:57 np0005541914.localdomain sudo[303696]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:57 np0005541914.localdomain sudo[303696]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:57 np0005541914.localdomain sudo[303724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 09:58:57 np0005541914.localdomain sudo[303724]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 09:58:57 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2732574246' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:57.748 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:58:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:57.756 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:58:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:57.776 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:58:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:57.780 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:58:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:57.781 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:58:57 np0005541914.localdomain podman[303776]: 
Dec 02 09:58:57 np0005541914.localdomain podman[303776]: 2025-12-02 09:58:57.923565109 +0000 UTC m=+0.058905188 container create 1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_lederberg, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.41.4, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, version=7, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., release=1763362218, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph)
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: tmp-crun.2Kwi84.mount: Deactivated successfully.
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: Started libpod-conmon-1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665.scope.
Dec 02 09:58:57 np0005541914.localdomain podman[303768]: 2025-12-02 09:58:57.943320971 +0000 UTC m=+0.093659188 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 09:58:57 np0005541914.localdomain podman[303768]: 2025-12-02 09:58:57.972006477 +0000 UTC m=+0.122344694 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:58:57 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:58:57 np0005541914.localdomain podman[303776]: 2025-12-02 09:58:57.896575026 +0000 UTC m=+0.031915115 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 09:58:58 np0005541914.localdomain podman[303799]: 2025-12-02 09:58:58.014571045 +0000 UTC m=+0.070350627 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 02 09:58:58 np0005541914.localdomain podman[303826]: 2025-12-02 09:58:58.039017991 +0000 UTC m=+0.054867645 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 02 09:58:58 np0005541914.localdomain podman[303776]: 2025-12-02 09:58:58.102434355 +0000 UTC m=+0.237774414 container init 1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_lederberg, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, io.openshift.tags=rhceph ceph, name=rhceph, distribution-scope=public, GIT_CLEAN=True, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, io.openshift.expose-services=, vcs-type=git, release=1763362218, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 09:58:58 np0005541914.localdomain podman[303776]: 2025-12-02 09:58:58.110284125 +0000 UTC m=+0.245624164 container start 1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_lederberg, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, RELEASE=main, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, version=7, ceph=True, release=1763362218, distribution-scope=public, io.buildah.version=1.41.4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2025-11-26T19:44:28Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main)
Dec 02 09:58:58 np0005541914.localdomain podman[303776]: 2025-12-02 09:58:58.110716718 +0000 UTC m=+0.246056787 container attach 1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_lederberg, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, io.buildah.version=1.41.4, release=1763362218, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, ceph=True, architecture=x86_64, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-type=git, RELEASE=main, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 09:58:58 np0005541914.localdomain strange_lederberg[303812]: 167 167
Dec 02 09:58:58 np0005541914.localdomain systemd[1]: libpod-1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665.scope: Deactivated successfully.
Dec 02 09:58:58 np0005541914.localdomain podman[303776]: 2025-12-02 09:58:58.113027409 +0000 UTC m=+0.248367488 container died 1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_lederberg, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, architecture=x86_64, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1763362218, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:58:58 np0005541914.localdomain podman[303826]: 2025-12-02 09:58:58.143777137 +0000 UTC m=+0.159626791 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 02 09:58:58 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:58:58 np0005541914.localdomain podman[303801]: 2025-12-02 09:58:58.232559806 +0000 UTC m=+0.286086329 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:58:58 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mgr.np0005541914.lljzmk (monmap changed)...
Dec 02 09:58:58 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mgr.np0005541914.lljzmk on np0005541914.localdomain
Dec 02 09:58:58 np0005541914.localdomain ceph-mon[301710]: pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Dec 02 09:58:58 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2343414930' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:58 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:58 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:58 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:58:58 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:58:58 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:58 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2732574246' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:58 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3324502342' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:58:58 np0005541914.localdomain podman[303801]: 2025-12-02 09:58:58.240853289 +0000 UTC m=+0.294379812 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 09:58:58 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:58:58 np0005541914.localdomain podman[303799]: 2025-12-02 09:58:58.252190505 +0000 UTC m=+0.307970087 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:58:58 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:58:58 np0005541914.localdomain podman[303864]: 2025-12-02 09:58:58.337653131 +0000 UTC m=+0.213847254 container remove 1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=strange_lederberg, architecture=x86_64, io.buildah.version=1.41.4, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, version=7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, build-date=2025-11-26T19:44:28Z, com.redhat.component=rhceph-container, io.openshift.expose-services=)
Dec 02 09:58:58 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-229bd05a0662cada040fc5af96fa1bb6353596f71e74bec2dac31c7b2190f361-merged.mount: Deactivated successfully.
Dec 02 09:58:58 np0005541914.localdomain systemd[1]: libpod-conmon-1b50e270794e1661ff94da2185fd13b0656535790e86f939c8323497bab0d665.scope: Deactivated successfully.
Dec 02 09:58:58 np0005541914.localdomain sudo[303724]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:58 np0005541914.localdomain sudo[303893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 09:58:58 np0005541914.localdomain sudo[303893]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 09:58:58 np0005541914.localdomain sudo[303893]: pam_unix(sudo:session): session closed for user root
Dec 02 09:58:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:58:58.783 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:58:59 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mon.np0005541914 (monmap changed)...
Dec 02 09:58:59 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mon.np0005541914 on np0005541914.localdomain
Dec 02 09:58:59 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:59 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:59 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:58:59 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 09:58:59 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:59 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:58:59 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 09:58:59 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:58:59 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:58:59 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:59:00 np0005541914.localdomain ceph-mon[301710]: from='client.44508 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005541914", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:59:00 np0005541914.localdomain ceph-mon[301710]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:00 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mon.np0005541912 (monmap changed)...
Dec 02 09:59:00 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mon.np0005541912 on np0005541912.localdomain
Dec 02 09:59:00 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:59:00 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:59:00 np0005541914.localdomain ceph-mon[301710]: Reconfiguring mon.np0005541913 (monmap changed)...
Dec 02 09:59:00 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 02 09:59:00 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 02 09:59:00 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 09:59:00 np0005541914.localdomain ceph-mon[301710]: Reconfiguring daemon mon.np0005541913 on np0005541913.localdomain
Dec 02 09:59:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:59:01 np0005541914.localdomain ceph-mon[301710]: pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:59:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 09:59:01 np0005541914.localdomain sshd[299693]: Received disconnect from 192.168.122.11 port 58072:11: disconnected by user
Dec 02 09:59:01 np0005541914.localdomain sshd[299693]: Disconnected from user tripleo-admin 192.168.122.11 port 58072
Dec 02 09:59:01 np0005541914.localdomain sshd[299674]: pam_unix(sshd:session): session closed for user tripleo-admin
Dec 02 09:59:01 np0005541914.localdomain systemd[1]: session-68.scope: Deactivated successfully.
Dec 02 09:59:01 np0005541914.localdomain systemd[1]: session-68.scope: Consumed 1.656s CPU time.
Dec 02 09:59:01 np0005541914.localdomain systemd-logind[760]: Session 68 logged out. Waiting for processes to exit.
Dec 02 09:59:01 np0005541914.localdomain systemd-logind[760]: Removed session 68.
Dec 02 09:59:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:59:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:59:03.169 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:59:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:59:03.170 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:59:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 09:59:03.171 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:59:03 np0005541914.localdomain ceph-mon[301710]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:03 np0005541914.localdomain podman[239757]: time="2025-12-02T09:59:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:59:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:59:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 09:59:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:59:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19179 "" "Go-http-client/1.1"
Dec 02 09:59:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/747153591' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 09:59:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/747153591' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 09:59:05 np0005541914.localdomain ceph-mon[301710]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:05 np0005541914.localdomain ceph-mon[301710]: mgrmap e37: np0005541912.qwddia(active, since 25s), standbys: np0005541914.lljzmk, np0005541913.mfesdm
Dec 02 09:59:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:59:07 np0005541914.localdomain ceph-mon[301710]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:59:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:59:08 np0005541914.localdomain podman[303911]: 2025-12-02 09:59:08.601701271 +0000 UTC m=+0.094788212 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:59:08 np0005541914.localdomain podman[303911]: 2025-12-02 09:59:08.613957965 +0000 UTC m=+0.107044926 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:59:08 np0005541914.localdomain podman[303912]: 2025-12-02 09:59:08.654688788 +0000 UTC m=+0.144033905 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Dec 02 09:59:08 np0005541914.localdomain podman[303912]: 2025-12-02 09:59:08.671962105 +0000 UTC m=+0.161307212 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, config_id=edpm, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Dec 02 09:59:08 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:59:08 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:59:09 np0005541914.localdomain ceph-mon[301710]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:11 np0005541914.localdomain ceph-mon[301710]: pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:11 np0005541914.localdomain systemd[1]: Stopping User Manager for UID 1003...
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Activating special unit Exit the Session...
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Stopped target Main User Target.
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Stopped target Basic System.
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Stopped target Paths.
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Stopped target Sockets.
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Stopped target Timers.
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Closed D-Bus User Message Bus Socket.
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Stopped Create User's Volatile Files and Directories.
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Removed slice User Application Slice.
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Reached target Shutdown.
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Finished Exit the Session.
Dec 02 09:59:11 np0005541914.localdomain systemd[299678]: Reached target Exit the Session.
Dec 02 09:59:11 np0005541914.localdomain systemd[1]: user@1003.service: Deactivated successfully.
Dec 02 09:59:11 np0005541914.localdomain systemd[1]: Stopped User Manager for UID 1003.
Dec 02 09:59:11 np0005541914.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1003...
Dec 02 09:59:12 np0005541914.localdomain systemd[1]: run-user-1003.mount: Deactivated successfully.
Dec 02 09:59:12 np0005541914.localdomain systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Dec 02 09:59:12 np0005541914.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1003.
Dec 02 09:59:12 np0005541914.localdomain systemd[1]: Removed slice User Slice of UID 1003.
Dec 02 09:59:12 np0005541914.localdomain systemd[1]: user-1003.slice: Consumed 2.131s CPU time.
Dec 02 09:59:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:59:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:59:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:59:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:59:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:59:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:59:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:59:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:59:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:59:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:59:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:59:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:59:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:59:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:59:13 np0005541914.localdomain podman[303956]: 2025-12-02 09:59:13.080749922 +0000 UTC m=+0.078805006 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 09:59:13 np0005541914.localdomain podman[303956]: 2025-12-02 09:59:13.095246424 +0000 UTC m=+0.093301458 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 02 09:59:13 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:59:13 np0005541914.localdomain ceph-mon[301710]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:15 np0005541914.localdomain ceph-mon[301710]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:59:17 np0005541914.localdomain ceph-mon[301710]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:19 np0005541914.localdomain ceph-mon[301710]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:21 np0005541914.localdomain ceph-mon[301710]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:59:23 np0005541914.localdomain ceph-mon[301710]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:25 np0005541914.localdomain ceph-mon[301710]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:59:27 np0005541914.localdomain ceph-mon[301710]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 09:59:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 09:59:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 09:59:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 09:59:29 np0005541914.localdomain systemd[297067]: Starting Mark boot as successful...
Dec 02 09:59:29 np0005541914.localdomain podman[303977]: 2025-12-02 09:59:29.073703114 +0000 UTC m=+0.070516833 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 09:59:29 np0005541914.localdomain systemd[297067]: Finished Mark boot as successful.
Dec 02 09:59:29 np0005541914.localdomain podman[303977]: 2025-12-02 09:59:29.106510455 +0000 UTC m=+0.103324164 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 09:59:29 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 09:59:29 np0005541914.localdomain podman[303976]: 2025-12-02 09:59:29.191639491 +0000 UTC m=+0.190113211 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 09:59:29 np0005541914.localdomain podman[303976]: 2025-12-02 09:59:29.201000397 +0000 UTC m=+0.199474107 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 02 09:59:29 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 09:59:29 np0005541914.localdomain podman[303986]: 2025-12-02 09:59:29.160949305 +0000 UTC m=+0.147122610 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 09:59:29 np0005541914.localdomain podman[303978]: 2025-12-02 09:59:29.256042766 +0000 UTC m=+0.246088558 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 02 09:59:29 np0005541914.localdomain podman[303978]: 2025-12-02 09:59:29.267763013 +0000 UTC m=+0.257808825 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec 02 09:59:29 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 09:59:29 np0005541914.localdomain podman[303986]: 2025-12-02 09:59:29.355141139 +0000 UTC m=+0.341314474 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 02 09:59:29 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 09:59:29 np0005541914.localdomain ceph-mon[301710]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:29 np0005541914.localdomain sshd[304061]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 09:59:30 np0005541914.localdomain sshd[304061]: Invalid user testuser from 34.78.29.97 port 55464
Dec 02 09:59:30 np0005541914.localdomain sshd[304061]: Received disconnect from 34.78.29.97 port 55464:11: Bye Bye [preauth]
Dec 02 09:59:30 np0005541914.localdomain sshd[304061]: Disconnected from invalid user testuser 34.78.29.97 port 55464 [preauth]
Dec 02 09:59:31 np0005541914.localdomain ceph-mon[301710]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:59:33 np0005541914.localdomain ceph-mon[301710]: pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:33 np0005541914.localdomain podman[239757]: time="2025-12-02T09:59:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 09:59:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:59:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 09:59:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:09:59:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19177 "" "Go-http-client/1.1"
Dec 02 09:59:35 np0005541914.localdomain ceph-mon[301710]: pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:59:37 np0005541914.localdomain ceph-mon[301710]: pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:37 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.200:0/2480513664' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 02 09:59:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 09:59:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 09:59:39 np0005541914.localdomain podman[304064]: 2025-12-02 09:59:39.208536658 +0000 UTC m=+0.212665499 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, io.openshift.expose-services=, distribution-scope=public, version=9.6, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 09:59:39 np0005541914.localdomain podman[304064]: 2025-12-02 09:59:39.30721263 +0000 UTC m=+0.311341531 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., version=9.6, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=edpm, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Dec 02 09:59:39 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 09:59:39 np0005541914.localdomain podman[304063]: 2025-12-02 09:59:39.273161048 +0000 UTC m=+0.277966080 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 09:59:39 np0005541914.localdomain podman[304063]: 2025-12-02 09:59:39.35702128 +0000 UTC m=+0.361826312 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 09:59:39 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 09:59:39 np0005541914.localdomain ceph-mon[301710]: pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:41 np0005541914.localdomain ceph-mon[301710]: pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:59:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 09:59:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:59:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:59:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:59:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 09:59:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:59:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:59:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 09:59:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   09:59:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 09:59:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 09:59:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:59:43 np0005541914.localdomain ceph-mon[301710]: pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 09:59:44 np0005541914.localdomain systemd[1]: tmp-crun.3MJGFn.mount: Deactivated successfully.
Dec 02 09:59:44 np0005541914.localdomain podman[304104]: 2025-12-02 09:59:44.056159285 +0000 UTC m=+0.066817407 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 02 09:59:44 np0005541914.localdomain podman[304104]: 2025-12-02 09:59:44.096955012 +0000 UTC m=+0.107613124 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 09:59:44 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 09:59:44 np0005541914.localdomain ceph-mon[301710]: from='client.64157 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:59:45 np0005541914.localdomain ceph-mon[301710]: pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:59:47 np0005541914.localdomain ceph-mon[301710]: pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:49 np0005541914.localdomain ceph-mon[301710]: pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:50 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 02 09:59:50 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/4215452679' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 02 09:59:51 np0005541914.localdomain ceph-mon[301710]: pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:51 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.200:0/4215452679' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 02 09:59:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:59:52 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3862452935' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:59:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:53.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:59:53 np0005541914.localdomain ceph-mon[301710]: pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:53 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3342764562' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:59:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:55.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:59:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:55.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:59:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:55.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:59:55 np0005541914.localdomain ceph-mon[301710]: pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:56.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:59:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:56.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 09:59:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:56.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 09:59:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:56.547 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 09:59:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:56.548 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:59:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:56.549 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:59:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:56.571 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:59:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:56.572 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:59:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:56.572 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:59:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:56.573 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 09:59:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:56.573 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:59:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 09:59:57 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2437377944' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:59:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:57.017 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:59:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:57.195 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 09:59:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:57.196 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11999MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 09:59:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:57.197 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 09:59:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:57.197 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 09:59:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:57.267 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 09:59:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:57.268 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 09:59:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:57.286 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 09:59:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 09:59:57 np0005541914.localdomain ceph-mon[301710]: pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:57 np0005541914.localdomain ceph-mon[301710]: from='client.64175 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 02 09:59:57 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2437377944' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:59:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 09:59:57 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2802866471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:59:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:57.769 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 09:59:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:57.774 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 09:59:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:57.801 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 09:59:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:57.804 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 09:59:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:57.804 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 09:59:58 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2802866471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:59:58 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/590004416' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 09:59:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:58.784 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:59:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:58.910 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:59:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:58.910 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 09:59:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 09:59:58.910 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 09:59:59 np0005541914.localdomain ceph-mon[301710]: pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 09:59:59 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3653965259' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:00:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:00:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:00:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:00:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:00:00 np0005541914.localdomain systemd[1]: tmp-crun.2d2oan.mount: Deactivated successfully.
Dec 02 10:00:00 np0005541914.localdomain podman[304168]: 2025-12-02 10:00:00.112998573 +0000 UTC m=+0.072133158 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:00:00 np0005541914.localdomain podman[304170]: 2025-12-02 10:00:00.174520708 +0000 UTC m=+0.125359982 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:00:00 np0005541914.localdomain podman[304168]: 2025-12-02 10:00:00.192269666 +0000 UTC m=+0.151404241 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:00:00 np0005541914.localdomain podman[304169]: 2025-12-02 10:00:00.154093168 +0000 UTC m=+0.105678184 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible)
Dec 02 10:00:00 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:00:00 np0005541914.localdomain podman[304170]: 2025-12-02 10:00:00.206871919 +0000 UTC m=+0.157711193 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:00:00 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:00:00 np0005541914.localdomain podman[304167]: 2025-12-02 10:00:00.266790826 +0000 UTC m=+0.225275782 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:00:00 np0005541914.localdomain podman[304167]: 2025-12-02 10:00:00.271850219 +0000 UTC m=+0.230335195 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:00:00 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:00:00 np0005541914.localdomain podman[304169]: 2025-12-02 10:00:00.291893947 +0000 UTC m=+0.243478913 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 02 10:00:00 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:00:00 np0005541914.localdomain sudo[304249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:00:00 np0005541914.localdomain sudo[304249]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:00 np0005541914.localdomain sudo[304249]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:00 np0005541914.localdomain sudo[304267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:00:00 np0005541914.localdomain sudo[304267]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:00 np0005541914.localdomain ceph-mon[301710]: Health detail: HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 02 10:00:00 np0005541914.localdomain ceph-mon[301710]: [WRN] CEPHADM_STRAY_DAEMON: 1 stray daemon(s) not managed by cephadm
Dec 02 10:00:00 np0005541914.localdomain ceph-mon[301710]:     stray daemon mgr.np0005541911.adcgiw on host np0005541911.localdomain not managed by cephadm
Dec 02 10:00:00 np0005541914.localdomain ceph-mon[301710]: [WRN] CEPHADM_STRAY_HOST: 1 stray host(s) with 1 daemon(s) not managed by cephadm
Dec 02 10:00:00 np0005541914.localdomain ceph-mon[301710]:     stray host np0005541911.localdomain has 1 stray daemons: ['mgr.np0005541911.adcgiw']
Dec 02 10:00:01 np0005541914.localdomain systemd[1]: tmp-crun.is8bts.mount: Deactivated successfully.
Dec 02 10:00:01 np0005541914.localdomain sudo[304267]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:01 np0005541914.localdomain sudo[304317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:00:01 np0005541914.localdomain sudo[304317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:01 np0005541914.localdomain sudo[304317]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:01 np0005541914.localdomain ceph-mon[301710]: pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:00:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:00:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 10:00:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:00:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:00:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:00:03.171 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:00:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:00:03.172 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:00:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:00:03.172 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:00:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:00:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:00:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:00:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:00:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:00:03 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/2832629924' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:00:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:00:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19174 "" "Go-http-client/1.1"
Dec 02 10:00:03 np0005541914.localdomain ceph-mon[301710]: pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.200:0/2832629924' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:00:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2838133861' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:00:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2838133861' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.493281) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605493354, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 14043, "num_deletes": 261, "total_data_size": 24103905, "memory_usage": 25278912, "flush_reason": "Manual Compaction"}
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605608518, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 18536123, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 14048, "table_properties": {"data_size": 18463960, "index_size": 40249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30853, "raw_key_size": 327875, "raw_average_key_size": 26, "raw_value_size": 18252036, "raw_average_value_size": 1479, "num_data_blocks": 1542, "num_entries": 12334, "num_filter_entries": 12334, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669504, "oldest_key_time": 1764669504, "file_creation_time": 1764669605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 115304 microseconds, and 36661 cpu microseconds.
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.608579) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 18536123 bytes OK
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.608615) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.610641) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.610698) EVENT_LOG_v1 {"time_micros": 1764669605610686, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.610724) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 24011403, prev total WAL file size 24011727, number of live WAL files 2.
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.615077) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. '7061786F73003131353437' seq:0, type:0; will stop at (end)
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(17MB) 8(1762B)]
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605615209, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 18537885, "oldest_snapshot_seqno": -1}
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 12082 keys, 18532582 bytes, temperature: kUnknown
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605746133, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 18532582, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18461069, "index_size": 40244, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30213, "raw_key_size": 322992, "raw_average_key_size": 26, "raw_value_size": 18252441, "raw_average_value_size": 1510, "num_data_blocks": 1542, "num_entries": 12082, "num_filter_entries": 12082, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764669605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.746518) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 18532582 bytes
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.748367) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.5 rd, 141.4 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(17.7, 0.0 +0.0 blob) out(17.7 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 12339, records dropped: 257 output_compression: NoCompression
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.748409) EVENT_LOG_v1 {"time_micros": 1764669605748389, "job": 4, "event": "compaction_finished", "compaction_time_micros": 131025, "compaction_time_cpu_micros": 51233, "output_level": 6, "num_output_files": 1, "total_output_size": 18532582, "num_input_records": 12339, "num_output_records": 12082, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605752071, "job": 4, "event": "table_file_deletion", "file_number": 14}
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669605752169, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:05.614899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:05 np0005541914.localdomain ceph-mon[301710]: from='mgr.26660 172.18.0.106:0/2630977033' entity='mgr.np0005541912.qwddia' 
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 e91: 6 total, 6 up, 6 in
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr handle_mgr_map Activating!
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr handle_mgr_map I am now activating
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541912"} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541913"} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005541914"} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).mds e16 all = 0
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).mds e16 all = 0
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).mds e16 all = 0
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 3} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 4} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata", "id": 5} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).mds e16 all = 1
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: balancer
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Starting
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:00:06
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec 02 10:00:06 np0005541914.localdomain sshd[302187]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: cephadm
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: crash
Dec 02 10:00:06 np0005541914.localdomain systemd[1]: session-70.scope: Deactivated successfully.
Dec 02 10:00:06 np0005541914.localdomain systemd[1]: session-70.scope: Consumed 11.185s CPU time.
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: devicehealth
Dec 02 10:00:06 np0005541914.localdomain systemd-logind[760]: Session 70 logged out. Waiting for processes to exit.
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: iostat
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: nfs
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain systemd-logind[760]: Removed session 70.
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: orchestrator
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [devicehealth INFO root] Starting
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: pg_autoscaler
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: progress
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Loading...
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fd397e7ca90>, <progress.module.GhostEvent object at 0x7fd397e7cac0>, <progress.module.GhostEvent object at 0x7fd397e7caf0>, <progress.module.GhostEvent object at 0x7fd397e7cb20>, <progress.module.GhostEvent object at 0x7fd397e7cb50>, <progress.module.GhostEvent object at 0x7fd397e7cb80>, <progress.module.GhostEvent object at 0x7fd397e7cbb0>, <progress.module.GhostEvent object at 0x7fd397e7cbe0>, <progress.module.GhostEvent object at 0x7fd397e7cc10>, <progress.module.GhostEvent object at 0x7fd397e7cc40>, <progress.module.GhostEvent object at 0x7fd397e7cc70>, <progress.module.GhostEvent object at 0x7fd397e7cca0>, <progress.module.GhostEvent object at 0x7fd397e7ccd0>, <progress.module.GhostEvent object at 0x7fd397e7cd00>, <progress.module.GhostEvent object at 0x7fd397e7cd30>, <progress.module.GhostEvent object at 0x7fd397e7cd60>, <progress.module.GhostEvent object at 0x7fd397e7cd90>, <progress.module.GhostEvent object at 0x7fd397e7cdc0>, <progress.module.GhostEvent object at 0x7fd397e7cdf0>, <progress.module.GhostEvent object at 0x7fd397e7ce20>, <progress.module.GhostEvent object at 0x7fd397e7ce50>, <progress.module.GhostEvent object at 0x7fd397e7ce80>, <progress.module.GhostEvent object at 0x7fd397e7ceb0>, <progress.module.GhostEvent object at 0x7fd397e7cee0>, <progress.module.GhostEvent object at 0x7fd397e7cf10>, <progress.module.GhostEvent object at 0x7fd397e7cf40>, <progress.module.GhostEvent object at 0x7fd397e7cf70>, <progress.module.GhostEvent object at 0x7fd397e7cfa0>, <progress.module.GhostEvent object at 0x7fd397e7cfd0>, <progress.module.GhostEvent object at 0x7fd397e9a040>, <progress.module.GhostEvent object at 0x7fd397e9a070>, <progress.module.GhostEvent object at 0x7fd397e9a0a0>, <progress.module.GhostEvent object at 0x7fd397e9a0d0>, <progress.module.GhostEvent object at 0x7fd397e9a100>, <progress.module.GhostEvent object at 0x7fd397e9a130>, <progress.module.GhostEvent object at 0x7fd397e9a160>, <progress.module.GhostEvent object at 0x7fd397e9a190>, <progress.module.GhostEvent object at 0x7fd397e9a1c0>, <progress.module.GhostEvent object at 0x7fd397e9a1f0>, <progress.module.GhostEvent object at 0x7fd397e9a220>, <progress.module.GhostEvent object at 0x7fd397e9a250>, <progress.module.GhostEvent object at 0x7fd397e9a280>, <progress.module.GhostEvent object at 0x7fd397e9a2b0>, <progress.module.GhostEvent object at 0x7fd397e9a2e0>, <progress.module.GhostEvent object at 0x7fd397e9a310>, <progress.module.GhostEvent object at 0x7fd397e9a340>, <progress.module.GhostEvent object at 0x7fd397e9a370>, <progress.module.GhostEvent object at 0x7fd397e9a3a0>, <progress.module.GhostEvent object at 0x7fd397e9a3d0>, <progress.module.GhostEvent object at 0x7fd397e9a400>] historic events
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Loaded OSDMap, ready.
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: Activating manager daemon np0005541914.lljzmk
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.200:0/1313402171' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: osdmap e91: 6 total, 6 up, 6 in
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mgrmap e38: np0005541914.lljzmk(active, starting, since 0.0398616s), standbys: np0005541913.mfesdm
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541912"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541913"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata", "id": "np0005541914"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541914.sqgqkj"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541913.maexpe"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata", "who": "mds.np0005541912.ghcwcm"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541914.lljzmk", "id": "np0005541914.lljzmk"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541913.mfesdm", "id": "np0005541913.mfesdm"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mds metadata"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd metadata"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mon metadata"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: Manager daemon np0005541914.lljzmk is now available
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] recovery thread starting
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] starting setup
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: rbd_support
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: restful
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: status
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [restful INFO root] server_addr: :: server_port: 8003
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [restful WARNING root] server not running: no certificate configured
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: telemetry
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] PerfHandler: starting
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: mgr load Constructed class from module: volumes
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.949+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.949+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.949+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.949+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.949+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.951+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.951+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.951+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.951+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:00:06.951+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_task_task: images, start_after=
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TaskHandler: starting
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} v 0)
Dec 02 10:00:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 02 10:00:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] setup complete
Dec 02 10:00:07 np0005541914.localdomain sshd[304476]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:00:07 np0005541914.localdomain sshd[304476]: Accepted publickey for ceph-admin from 192.168.122.108 port 51700 ssh2: RSA SHA256:/YNuT9SDdDKzRDkfkhY38xwkOxxhXakQO/p8xUCPPz0
Dec 02 10:00:07 np0005541914.localdomain systemd-logind[760]: New session 71 of user ceph-admin.
Dec 02 10:00:07 np0005541914.localdomain systemd[1]: Started Session 71 of User ceph-admin.
Dec 02 10:00:07 np0005541914.localdomain sshd[304476]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Dec 02 10:00:07 np0005541914.localdomain sudo[304480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:00:07 np0005541914.localdomain sudo[304480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:07 np0005541914.localdomain sudo[304480]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:00:07 np0005541914.localdomain sudo[304498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 10:00:07 np0005541914.localdomain sudo[304498]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:07 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch
Dec 02 10:00:07 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/mirror_snapshot_schedule"} : dispatch
Dec 02 10:00:07 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch
Dec 02 10:00:07 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005541914.lljzmk/trash_purge_schedule"} : dispatch
Dec 02 10:00:07 np0005541914.localdomain ceph-mon[301710]: mgrmap e39: np0005541914.lljzmk(active, since 1.04514s), standbys: np0005541913.mfesdm
Dec 02 10:00:08 np0005541914.localdomain podman[304583]: 2025-12-02 10:00:08.152357053 +0000 UTC m=+0.077324856 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.41.4, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z)
Dec 02 10:00:08 np0005541914.localdomain podman[304583]: 2025-12-02 10:00:08.267974048 +0000 UTC m=+0.192941831 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, release=1763362218, RELEASE=main, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, ceph=True, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, version=7, build-date=2025-11-26T19:44:28Z, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_CLEAN=True)
Dec 02 10:00:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 10:00:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 10:00:08 np0005541914.localdomain sudo[304498]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 10:00:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 10:00:08 np0005541914.localdomain ceph-mon[301710]: pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:08 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:08 np0005541914.localdomain ceph-mon[301710]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Dec 02 10:00:08 np0005541914.localdomain ceph-mon[301710]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Dec 02 10:00:08 np0005541914.localdomain ceph-mon[301710]: Cluster is now healthy
Dec 02 10:00:08 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:08 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:08 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:08 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:00:08] ENGINE Bus STARTING
Dec 02 10:00:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:00:08] ENGINE Bus STARTING
Dec 02 10:00:08 np0005541914.localdomain sudo[304703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:00:08 np0005541914.localdomain sudo[304703]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 10:00:08 np0005541914.localdomain sudo[304703]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:09 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:00:09] ENGINE Serving on http://172.18.0.108:8765
Dec 02 10:00:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:00:09] ENGINE Serving on http://172.18.0.108:8765
Dec 02 10:00:09 np0005541914.localdomain sudo[304732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:00:09 np0005541914.localdomain sudo[304732]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 10:00:09 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:00:09] ENGINE Serving on https://172.18.0.108:7150
Dec 02 10:00:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:00:09] ENGINE Serving on https://172.18.0.108:7150
Dec 02 10:00:09 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:00:09] ENGINE Bus STARTED
Dec 02 10:00:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:00:09] ENGINE Bus STARTED
Dec 02 10:00:09 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cherrypy.error] [02/Dec/2025:10:00:09] ENGINE Client ('172.18.0.108', 52382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 02 10:00:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : [02/Dec/2025:10:00:09] ENGINE Client ('172.18.0.108', 52382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 02 10:00:09 np0005541914.localdomain ceph-mgr[287188]: [devicehealth INFO root] Check health
Dec 02 10:00:09 np0005541914.localdomain sudo[304732]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:09 np0005541914.localdomain sudo[304803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:00:09 np0005541914.localdomain sudo[304803]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:09 np0005541914.localdomain sudo[304803]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:00:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:00:09 np0005541914.localdomain sudo[304823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Dec 02 10:00:09 np0005541914.localdomain sudo[304823]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:09 np0005541914.localdomain systemd[1]: tmp-crun.jpemxS.mount: Deactivated successfully.
Dec 02 10:00:09 np0005541914.localdomain podman[304822]: 2025-12-02 10:00:09.922765061 +0000 UTC m=+0.090992471 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc.)
Dec 02 10:00:09 np0005541914.localdomain podman[304822]: 2025-12-02 10:00:09.933407783 +0000 UTC m=+0.101635133 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 02 10:00:09 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:00:09 np0005541914.localdomain ceph-mon[301710]: [02/Dec/2025:10:00:08] ENGINE Bus STARTING
Dec 02 10:00:09 np0005541914.localdomain ceph-mon[301710]: mgrmap e40: np0005541914.lljzmk(active, since 2s), standbys: np0005541913.mfesdm
Dec 02 10:00:09 np0005541914.localdomain ceph-mon[301710]: [02/Dec/2025:10:00:09] ENGINE Serving on http://172.18.0.108:8765
Dec 02 10:00:09 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:09 np0005541914.localdomain ceph-mon[301710]: [02/Dec/2025:10:00:09] ENGINE Serving on https://172.18.0.108:7150
Dec 02 10:00:09 np0005541914.localdomain ceph-mon[301710]: [02/Dec/2025:10:00:09] ENGINE Bus STARTED
Dec 02 10:00:09 np0005541914.localdomain ceph-mon[301710]: [02/Dec/2025:10:00:09] ENGINE Client ('172.18.0.108', 52382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 02 10:00:09 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:10 np0005541914.localdomain podman[304821]: 2025-12-02 10:00:10.017643177 +0000 UTC m=+0.183762772 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:00:10 np0005541914.localdomain podman[304821]: 2025-12-02 10:00:10.067975723 +0000 UTC m=+0.234095268 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:00:10 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 10:00:10 np0005541914.localdomain sudo[304823]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.558630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610558717, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 490, "num_deletes": 252, "total_data_size": 1628548, "memory_usage": 1641112, "flush_reason": "Manual Compaction"}
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610571619, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 1063710, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14053, "largest_seqno": 14538, "table_properties": {"data_size": 1060733, "index_size": 960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 8165, "raw_average_key_size": 21, "raw_value_size": 1054438, "raw_average_value_size": 2819, "num_data_blocks": 38, "num_entries": 374, "num_filter_entries": 374, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669605, "oldest_key_time": 1764669605, "file_creation_time": 1764669610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 13074 microseconds, and 5062 cpu microseconds.
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.571707) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 1063710 bytes OK
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.571739) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.573385) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.573409) EVENT_LOG_v1 {"time_micros": 1764669610573403, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.573433) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1625430, prev total WAL file size 1625754, number of live WAL files 2.
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.574244) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373536' seq:72057594037927935, type:22 .. '6D6772737461740034303038' seq:0, type:0; will stop at (end)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(1038KB)], [15(17MB)]
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610574310, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 19596292, "oldest_snapshot_seqno": -1}
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 11924 keys, 17247608 bytes, temperature: kUnknown
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610697302, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 17247608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17181691, "index_size": 35032, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29829, "raw_key_size": 320068, "raw_average_key_size": 26, "raw_value_size": 16980335, "raw_average_value_size": 1424, "num_data_blocks": 1326, "num_entries": 11924, "num_filter_entries": 11924, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764669610, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.697733) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 17247608 bytes
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.699284) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.2 rd, 140.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 17.7 +0.0 blob) out(16.4 +0.0 blob), read-write-amplify(34.6) write-amplify(16.2) OK, records in: 12456, records dropped: 532 output_compression: NoCompression
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.699317) EVENT_LOG_v1 {"time_micros": 1764669610699302, "job": 6, "event": "compaction_finished", "compaction_time_micros": 123089, "compaction_time_cpu_micros": 47580, "output_level": 6, "num_output_files": 1, "total_output_size": 17247608, "num_input_records": 12456, "num_output_records": 11924, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610699666, "job": 6, "event": "table_file_deletion", "file_number": 17}
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669610702239, "job": 6, "event": "table_file_deletion", "file_number": 15}
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.574141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.702304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.702312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.702316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.702318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:00:10.702322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:00:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 10:00:10 np0005541914.localdomain sudo[304900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 10:00:10 np0005541914.localdomain sudo[304900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:10 np0005541914.localdomain sudo[304900]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:10 np0005541914.localdomain sudo[304918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 10:00:10 np0005541914.localdomain sudo[304918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:10 np0005541914.localdomain sudo[304918]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:10 np0005541914.localdomain sudo[304936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 10:00:10 np0005541914.localdomain sudo[304936]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:10 np0005541914.localdomain sudo[304936]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain sudo[304954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 10:00:11 np0005541914.localdomain sudo[304954]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:11 np0005541914.localdomain sudo[304954]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain sudo[304972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 10:00:11 np0005541914.localdomain sudo[304972]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:11 np0005541914.localdomain sudo[304972]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain sudo[305006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 10:00:11 np0005541914.localdomain sudo[305006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:11 np0005541914.localdomain sudo[305006]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:00:11 np0005541914.localdomain sudo[305024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new
Dec 02 10:00:11 np0005541914.localdomain sudo[305024]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:11 np0005541914.localdomain sudo[305024]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 10:00:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 10:00:11 np0005541914.localdomain sudo[305042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 02 10:00:11 np0005541914.localdomain sudo[305042]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:11 np0005541914.localdomain sudo[305042]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 10:00:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 10:00:11 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 10:00:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 10:00:11 np0005541914.localdomain sudo[305060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 10:00:11 np0005541914.localdomain sudo[305060]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:11 np0005541914.localdomain sudo[305060]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain sudo[305078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 10:00:11 np0005541914.localdomain sudo[305078]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:11 np0005541914.localdomain sudo[305078]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain sudo[305096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 10:00:11 np0005541914.localdomain sudo[305096]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:11 np0005541914.localdomain sudo[305096]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain sudo[305114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 10:00:11 np0005541914.localdomain sudo[305114]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:11 np0005541914.localdomain sudo[305114]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain sudo[305132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 10:00:11 np0005541914.localdomain sudo[305132]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:11 np0005541914.localdomain sudo[305132]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain ceph-mgr[287188]: mgr.server handle_open ignoring open from mgr.np0005541912.qwddia 172.18.0.106:0/2644696530; not ready for session (expect reconnect)
Dec 02 10:00:11 np0005541914.localdomain sudo[305166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 10:00:11 np0005541914.localdomain sudo[305166]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:11 np0005541914.localdomain sudo[305166]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain sudo[305184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new
Dec 02 10:00:11 np0005541914.localdomain sudo[305184]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:11 np0005541914.localdomain sudo[305184]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain sudo[305202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 10:00:11 np0005541914.localdomain sudo[305202]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:11 np0005541914.localdomain sudo[305202]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:11 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 10:00:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 10:00:12 np0005541914.localdomain sudo[305220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 02 10:00:12 np0005541914.localdomain sudo[305220]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:12 np0005541914.localdomain sudo[305220]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:12 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 10:00:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 10:00:12 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 10:00:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 10:00:12 np0005541914.localdomain sudo[305238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph
Dec 02 10:00:12 np0005541914.localdomain sudo[305238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:12 np0005541914.localdomain sudo[305238]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:00:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:00:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:00:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:00:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:00:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:00:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:00:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:00:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:00:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:00:12 np0005541914.localdomain sudo[305256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 10:00:12 np0005541914.localdomain sudo[305256]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:12 np0005541914.localdomain sudo[305256]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:12 np0005541914.localdomain sudo[305274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 10:00:12 np0005541914.localdomain sudo[305274]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:12 np0005541914.localdomain sudo[305274]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.conf
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.conf
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.conf
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.conf
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: mgrmap e41: np0005541914.lljzmk(active, since 4s), standbys: np0005541913.mfesdm
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: Standby manager daemon np0005541912.qwddia started
Dec 02 10:00:12 np0005541914.localdomain sudo[305292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 10:00:12 np0005541914.localdomain sudo[305292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:12 np0005541914.localdomain sudo[305292]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:00:12 np0005541914.localdomain sudo[305326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 10:00:12 np0005541914.localdomain sudo[305326]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:12 np0005541914.localdomain sudo[305326]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:12 np0005541914.localdomain sudo[305344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new
Dec 02 10:00:12 np0005541914.localdomain sudo[305344]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:12 np0005541914.localdomain sudo[305344]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:12 np0005541914.localdomain sudo[305362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 02 10:00:12 np0005541914.localdomain sudo[305362]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:12 np0005541914.localdomain sudo[305362]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:12 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 10:00:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 10:00:12 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 10:00:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 10:00:12 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO cephadm.serve] Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 10:00:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 10:00:12 np0005541914.localdomain sudo[305380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 10:00:12 np0005541914.localdomain sudo[305380]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:12 np0005541914.localdomain sudo[305380]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} v 0)
Dec 02 10:00:12 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} : dispatch
Dec 02 10:00:12 np0005541914.localdomain sudo[305398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config
Dec 02 10:00:12 np0005541914.localdomain sudo[305398]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:12 np0005541914.localdomain sudo[305398]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:12 np0005541914.localdomain sudo[305416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 10:00:12 np0005541914.localdomain sudo[305416]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:12 np0005541914.localdomain sudo[305416]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:12 np0005541914.localdomain sudo[305434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074
Dec 02 10:00:12 np0005541914.localdomain sudo[305434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:12 np0005541914.localdomain sudo[305434]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:13 np0005541914.localdomain sudo[305452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 10:00:13 np0005541914.localdomain sudo[305452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:13 np0005541914.localdomain sudo[305452]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:13 np0005541914.localdomain sudo[305486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 10:00:13 np0005541914.localdomain sudo[305486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:13 np0005541914.localdomain sudo[305486]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:13 np0005541914.localdomain sudo[305504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new
Dec 02 10:00:13 np0005541914.localdomain sudo[305504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:13 np0005541914.localdomain sudo[305504]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/etc/ceph/ceph.client.admin.keyring
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: mgrmap e42: np0005541914.lljzmk(active, since 5s), standbys: np0005541912.qwddia, np0005541913.mfesdm
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "mgr metadata", "who": "np0005541912.qwddia", "id": "np0005541912.qwddia"} : dispatch
Dec 02 10:00:13 np0005541914.localdomain sudo[305522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-c7c8e171-a193-56fb-95fa-8879fcfa7074/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring.new /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 10:00:13 np0005541914.localdomain sudo[305522]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:13 np0005541914.localdomain sudo[305522]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:00:13 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 77321482-11e0-40b2-9e17-bc7146a7dd6f (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:00:13 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 77321482-11e0-40b2-9e17-bc7146a7dd6f (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:00:13 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 77321482-11e0-40b2-9e17-bc7146a7dd6f (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:00:13 np0005541914.localdomain sudo[305540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:00:13 np0005541914.localdomain sudo[305540]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:13 np0005541914.localdomain sudo[305540]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:00:13 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 206d8b3f-30a4-4d05-ae2b-66d496706789 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:00:13 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 206d8b3f-30a4-4d05-ae2b-66d496706789 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:00:13 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 206d8b3f-30a4-4d05-ae2b-66d496706789 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:00:13 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:00:13 np0005541914.localdomain sudo[305558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:00:13 np0005541914.localdomain sudo[305558]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:00:13 np0005541914.localdomain sudo[305558]: pam_unix(sudo:session): session closed for user root
Dec 02 10:00:13 np0005541914.localdomain sshd[305576]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: Updating np0005541914.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: Updating np0005541912.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: Updating np0005541913.localdomain:/var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/config/ceph.client.admin.keyring
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:00:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Dec 02 10:00:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:00:15 np0005541914.localdomain podman[305578]: 2025-12-02 10:00:15.070090375 +0000 UTC m=+0.073887470 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 02 10:00:15 np0005541914.localdomain podman[305578]: 2025-12-02 10:00:15.112100619 +0000 UTC m=+0.115897704 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 10:00:15 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:00:15 np0005541914.localdomain sshd[305576]: Invalid user ubuntu from 43.225.159.111 port 43648
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:00:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:00:15 np0005541914.localdomain sshd[305576]: Received disconnect from 43.225.159.111 port 43648:11:  [preauth]
Dec 02 10:00:15 np0005541914.localdomain sshd[305576]: Disconnected from invalid user ubuntu 43.225.159.111 port 43648 [preauth]
Dec 02 10:00:16 np0005541914.localdomain ceph-mon[301710]: pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Dec 02 10:00:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Dec 02 10:00:16 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:00:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:00:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:00:17 np0005541914.localdomain ceph-mon[301710]: pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Dec 02 10:00:17 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:00:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 02 10:00:19 np0005541914.localdomain ceph-mon[301710]: pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Dec 02 10:00:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 02 10:00:21 np0005541914.localdomain ceph-mon[301710]: pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 02 10:00:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:00:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 02 10:00:23 np0005541914.localdomain ceph-mon[301710]: pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 02 10:00:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 02 10:00:25 np0005541914.localdomain ceph-mon[301710]: pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Dec 02 10:00:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:00:27 np0005541914.localdomain ceph-mon[301710]: pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:30 np0005541914.localdomain ceph-mon[301710]: pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:00:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:00:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:00:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:00:31 np0005541914.localdomain systemd[1]: tmp-crun.jTmPGt.mount: Deactivated successfully.
Dec 02 10:00:31 np0005541914.localdomain podman[305597]: 2025-12-02 10:00:31.090107507 +0000 UTC m=+0.090331551 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:00:31 np0005541914.localdomain podman[305597]: 2025-12-02 10:00:31.123820759 +0000 UTC m=+0.124044813 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Dec 02 10:00:31 np0005541914.localdomain systemd[1]: tmp-crun.Cf7Rv0.mount: Deactivated successfully.
Dec 02 10:00:31 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:00:31 np0005541914.localdomain podman[305598]: 2025-12-02 10:00:31.135422151 +0000 UTC m=+0.133549411 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:00:31 np0005541914.localdomain podman[305598]: 2025-12-02 10:00:31.145895867 +0000 UTC m=+0.144023127 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:00:31 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:00:31 np0005541914.localdomain podman[305599]: 2025-12-02 10:00:31.186579741 +0000 UTC m=+0.179580886 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Dec 02 10:00:31 np0005541914.localdomain podman[305599]: 2025-12-02 10:00:31.200825283 +0000 UTC m=+0.193826448 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 10:00:31 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:00:31 np0005541914.localdomain podman[305600]: 2025-12-02 10:00:31.238145924 +0000 UTC m=+0.227590581 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:00:31 np0005541914.localdomain podman[305600]: 2025-12-02 10:00:31.303799966 +0000 UTC m=+0.293244613 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 10:00:31 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:00:31 np0005541914.localdomain ceph-mon[301710]: pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:00:32 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:00:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:00:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:00:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:00:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:00:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19191 "" "Go-http-client/1.1"
Dec 02 10:00:33 np0005541914.localdomain ceph-mon[301710]: pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:35 np0005541914.localdomain ceph-mon[301710]: pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:36 np0005541914.localdomain sshd[305678]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:00:36 np0005541914.localdomain sshd[305678]: Received disconnect from 34.78.29.97 port 58706:11: Bye Bye [preauth]
Dec 02 10:00:36 np0005541914.localdomain sshd[305678]: Disconnected from authenticating user root 34.78.29.97 port 58706 [preauth]
Dec 02 10:00:36 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:00:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:00:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:00:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:00:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:00:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:00:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:00:38 np0005541914.localdomain ceph-mon[301710]: pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:00:40 np0005541914.localdomain podman[305680]: 2025-12-02 10:00:40.057380271 +0000 UTC m=+0.060647820 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Dec 02 10:00:40 np0005541914.localdomain podman[305680]: 2025-12-02 10:00:40.070859289 +0000 UTC m=+0.074126898 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, config_id=edpm, container_name=openstack_network_exporter, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Dec 02 10:00:40 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:00:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:00:40 np0005541914.localdomain ceph-mon[301710]: pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:40 np0005541914.localdomain systemd[1]: tmp-crun.uhg10h.mount: Deactivated successfully.
Dec 02 10:00:40 np0005541914.localdomain podman[305699]: 2025-12-02 10:00:40.173614995 +0000 UTC m=+0.068007863 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 10:00:40 np0005541914.localdomain podman[305699]: 2025-12-02 10:00:40.18399873 +0000 UTC m=+0.078391628 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:00:40 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:00:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:41 np0005541914.localdomain ceph-mon[301710]: pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:00:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:00:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:00:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:00:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:00:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:00:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:00:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:00:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:00:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:00:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:00:42 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:43 np0005541914.localdomain ceph-mon[301710]: pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:00:45 np0005541914.localdomain ceph-mon[301710]: pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:46 np0005541914.localdomain podman[305722]: 2025-12-02 10:00:46.052269113 +0000 UTC m=+0.057316159 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:00:46 np0005541914.localdomain podman[305722]: 2025-12-02 10:00:46.059401629 +0000 UTC m=+0.064448685 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible)
Dec 02 10:00:46 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:00:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:00:48 np0005541914.localdomain ceph-mon[301710]: pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:48 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:50 np0005541914.localdomain ceph-mon[301710]: pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:51 np0005541914.localdomain ceph-mon[301710]: pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:00:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:52 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2791761851' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:00:53 np0005541914.localdomain ceph-mon[301710]: pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:53 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2694848661' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:00:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:00:54.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:00:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:55 np0005541914.localdomain ceph-mon[301710]: pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:00:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:00:57.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:00:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:00:57.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:00:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:00:57.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:00:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:00:57.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:00:58 np0005541914.localdomain ceph-mon[301710]: pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:00:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.656735) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660656799, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 952, "num_deletes": 256, "total_data_size": 1793915, "memory_usage": 1817584, "flush_reason": "Manual Compaction"}
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660667778, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 1174858, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14543, "largest_seqno": 15490, "table_properties": {"data_size": 1170557, "index_size": 1964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10294, "raw_average_key_size": 20, "raw_value_size": 1161498, "raw_average_value_size": 2290, "num_data_blocks": 82, "num_entries": 507, "num_filter_entries": 507, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669610, "oldest_key_time": 1764669610, "file_creation_time": 1764669660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 11086 microseconds, and 3620 cpu microseconds.
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.667825) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 1174858 bytes OK
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.667849) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.669536) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.669550) EVENT_LOG_v1 {"time_micros": 1764669660669546, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.669568) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1788903, prev total WAL file size 1789227, number of live WAL files 2.
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.670083) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373637' seq:72057594037927935, type:22 .. '6C6F676D0034303139' seq:0, type:0; will stop at (end)
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(1147KB)], [18(16MB)]
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660670144, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18422466, "oldest_snapshot_seqno": -1}
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 11894 keys, 18282658 bytes, temperature: kUnknown
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660788920, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 18282658, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18215448, "index_size": 36389, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29765, "raw_key_size": 320669, "raw_average_key_size": 26, "raw_value_size": 18013012, "raw_average_value_size": 1514, "num_data_blocks": 1380, "num_entries": 11894, "num_filter_entries": 11894, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764669660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.789268) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 18282658 bytes
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.791120) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.0 rd, 153.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 16.4 +0.0 blob) out(17.4 +0.0 blob), read-write-amplify(31.2) write-amplify(15.6) OK, records in: 12431, records dropped: 537 output_compression: NoCompression
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.791166) EVENT_LOG_v1 {"time_micros": 1764669660791148, "job": 8, "event": "compaction_finished", "compaction_time_micros": 118863, "compaction_time_cpu_micros": 33415, "output_level": 6, "num_output_files": 1, "total_output_size": 18282658, "num_input_records": 12431, "num_output_records": 11894, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660791471, "job": 8, "event": "table_file_deletion", "file_number": 20}
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669660793181, "job": 8, "event": "table_file_deletion", "file_number": 18}
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.669987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.793257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.793263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.793266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.793268) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:01:00 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:01:00.793270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:01:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:01 np0005541914.localdomain ceph-mon[301710]: pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:01 np0005541914.localdomain CROND[305743]: (root) CMD (run-parts /etc/cron.hourly)
Dec 02 10:01:01 np0005541914.localdomain run-parts[305746]: (/etc/cron.hourly) starting 0anacron
Dec 02 10:01:01 np0005541914.localdomain run-parts[305752]: (/etc/cron.hourly) finished 0anacron
Dec 02 10:01:01 np0005541914.localdomain CROND[305742]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 02 10:01:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:01:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:01:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:01:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:01:02 np0005541914.localdomain podman[305755]: 2025-12-02 10:01:02.086500335 +0000 UTC m=+0.070990733 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:01:02 np0005541914.localdomain podman[305755]: 2025-12-02 10:01:02.092129506 +0000 UTC m=+0.076619924 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:01:02 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:01:02 np0005541914.localdomain podman[305754]: 2025-12-02 10:01:02.133250803 +0000 UTC m=+0.122936089 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:01:02 np0005541914.localdomain podman[305754]: 2025-12-02 10:01:02.141602346 +0000 UTC m=+0.131287632 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:01:02 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:01:02 np0005541914.localdomain podman[305765]: 2025-12-02 10:01:02.142345218 +0000 UTC m=+0.122076691 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 10:01:02 np0005541914.localdomain podman[305753]: 2025-12-02 10:01:02.216164906 +0000 UTC m=+0.205984226 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 10:01:02 np0005541914.localdomain podman[305765]: 2025-12-02 10:01:02.239929447 +0000 UTC m=+0.219660950 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 10:01:02 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:01:02 np0005541914.localdomain podman[305753]: 2025-12-02 10:01:02.294886964 +0000 UTC m=+0.284706264 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:01:02 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:01:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:01:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:01:03.172 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:01:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:01:03.173 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:01:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:01:03.173 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:01:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:01:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:01:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:01:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:01:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:01:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19190 "" "Go-http-client/1.1"
Dec 02 10:01:04 np0005541914.localdomain ceph-mon[301710]: pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:04.286 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:01:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:04.287 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:04.288 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:04.288 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:04.289 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:04.289 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:04.289 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:01:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:04.289 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:01:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/457788740' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:01:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:01:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/457788740' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:01:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/457788740' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:01:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/457788740' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:01:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:05.084 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:01:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:05.084 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:01:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:05.085 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:01:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:05.085 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:01:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:05.086 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:01:05 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:01:05 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/198510701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:01:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:05.496 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:01:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:05.716 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:01:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:05.718 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11993MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:01:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:05.719 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:01:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:05.719 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:01:06 np0005541914.localdomain ceph-mon[301710]: pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/27386934' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:01:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/198510701' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:01:06
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['vms', '.mgr', 'backups', 'manila_metadata', 'images', 'volumes', 'manila_data']
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32)
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 (current 16)
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:01:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:06.947 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:01:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:06.948 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:01:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:06.966 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:01:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:01:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:01:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1910608220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:01:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:01:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:07.398 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:01:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:07.404 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:01:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:07.420 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:01:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:07.424 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:01:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:07.425 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:01:08 np0005541914.localdomain ceph-mon[301710]: pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1910608220' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:01:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2730124847' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:01:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:10 np0005541914.localdomain ceph-mon[301710]: pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:01:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:01:11 np0005541914.localdomain systemd[1]: tmp-crun.79i7yl.mount: Deactivated successfully.
Dec 02 10:01:11 np0005541914.localdomain podman[305882]: 2025-12-02 10:01:11.087202444 +0000 UTC m=+0.092651600 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:01:11 np0005541914.localdomain podman[305883]: 2025-12-02 10:01:11.133398585 +0000 UTC m=+0.136336775 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, architecture=x86_64, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter)
Dec 02 10:01:11 np0005541914.localdomain podman[305883]: 2025-12-02 10:01:11.145915034 +0000 UTC m=+0.148853254 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, com.redhat.component=ubi9-minimal-container)
Dec 02 10:01:11 np0005541914.localdomain podman[305882]: 2025-12-02 10:01:11.15369834 +0000 UTC m=+0.159147466 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:01:11 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:01:11 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:01:11 np0005541914.localdomain ceph-mon[301710]: pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:01:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:01:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:01:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:01:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:01:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:01:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:01:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:01:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:01:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:01:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:01:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:13 np0005541914.localdomain ceph-mon[301710]: pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:13 np0005541914.localdomain sudo[305927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:01:13 np0005541914.localdomain sudo[305927]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:01:13 np0005541914.localdomain sudo[305927]: pam_unix(sudo:session): session closed for user root
Dec 02 10:01:14 np0005541914.localdomain sudo[305945]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 10:01:14 np0005541914.localdomain sudo[305945]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:01:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 10:01:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 10:01:14 np0005541914.localdomain sudo[305945]: pam_unix(sudo:session): session closed for user root
Dec 02 10:01:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 10:01:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 10:01:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 10:01:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 10:01:14 np0005541914.localdomain sudo[305985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:01:14 np0005541914.localdomain sudo[305985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:01:14 np0005541914.localdomain sudo[305985]: pam_unix(sudo:session): session closed for user root
Dec 02 10:01:14 np0005541914.localdomain sudo[306003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:01:14 np0005541914.localdomain sudo[306003]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:01:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:15 np0005541914.localdomain sudo[306003]: pam_unix(sudo:session): session closed for user root
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:01:15 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 0f3ac13e-e4c3-4c2d-90b3-d07fc85ca851 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:01:15 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 0f3ac13e-e4c3-4c2d-90b3-d07fc85ca851 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:01:15 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 0f3ac13e-e4c3-4c2d-90b3-d07fc85ca851 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:01:15 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:01:15 np0005541914.localdomain sudo[306054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:01:15 np0005541914.localdomain sudo[306054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:01:15 np0005541914.localdomain sudo[306054]: pam_unix(sudo:session): session closed for user root
Dec 02 10:01:16 np0005541914.localdomain ceph-mon[301710]: pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:16 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:01:16 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:01:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:16 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:01:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:01:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:01:17 np0005541914.localdomain podman[306072]: 2025-12-02 10:01:17.094703519 +0000 UTC m=+0.098739566 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 02 10:01:17 np0005541914.localdomain podman[306072]: 2025-12-02 10:01:17.10598224 +0000 UTC m=+0.110018177 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:01:17 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:01:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:01:17 np0005541914.localdomain ceph-mon[301710]: pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:17 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:01:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:19 np0005541914.localdomain ceph-mon[301710]: pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:21 np0005541914.localdomain ceph-mon[301710]: pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:01:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:23 np0005541914.localdomain ceph-mon[301710]: pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:25 np0005541914.localdomain ceph-mon[301710]: pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:01:27.143 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:01:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:01:27.144 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:01:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:01:27 np0005541914.localdomain ceph-mon[301710]: pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:01:29.147 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:01:30 np0005541914.localdomain ceph-mon[301710]: pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:31 np0005541914.localdomain ceph-mon[301710]: pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:01:32 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:01:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:01:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:01:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:01:33 np0005541914.localdomain podman[306091]: 2025-12-02 10:01:33.085286276 +0000 UTC m=+0.084508973 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 10:01:33 np0005541914.localdomain podman[306099]: 2025-12-02 10:01:33.14351552 +0000 UTC m=+0.133610991 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:01:33 np0005541914.localdomain podman[306091]: 2025-12-02 10:01:33.164990042 +0000 UTC m=+0.164212709 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:01:33 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:01:33 np0005541914.localdomain podman[306092]: 2025-12-02 10:01:33.247767232 +0000 UTC m=+0.242360070 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:01:33 np0005541914.localdomain podman[306092]: 2025-12-02 10:01:33.255924339 +0000 UTC m=+0.250517137 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:01:33 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:01:33 np0005541914.localdomain podman[306093]: 2025-12-02 10:01:33.31532658 +0000 UTC m=+0.306552896 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:01:33 np0005541914.localdomain podman[306099]: 2025-12-02 10:01:33.323187969 +0000 UTC m=+0.313283490 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:01:33 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:01:33 np0005541914.localdomain podman[306093]: 2025-12-02 10:01:33.375333539 +0000 UTC m=+0.366559865 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 10:01:33 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:01:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:01:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:01:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:01:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:01:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:01:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19188 "" "Go-http-client/1.1"
Dec 02 10:01:33 np0005541914.localdomain ceph-mon[301710]: pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:36 np0005541914.localdomain ceph-mon[301710]: pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:36 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:01:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:01:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:01:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fd384580250>)]
Dec 02 10:01:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 02 10:01:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:01:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fd3845801c0>)]
Dec 02 10:01:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 02 10:01:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:01:38 np0005541914.localdomain ceph-mon[301710]: pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 10:01:40 np0005541914.localdomain ceph-mon[301710]: pgmap v49: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 10:01:40 np0005541914.localdomain ceph-mon[301710]: mgrmap e43: np0005541914.lljzmk(active, since 92s), standbys: np0005541912.qwddia, np0005541913.mfesdm
Dec 02 10:01:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v50: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 10:01:41 np0005541914.localdomain ceph-mon[301710]: pgmap v50: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 10:01:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:01:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:01:42 np0005541914.localdomain podman[306175]: 2025-12-02 10:01:42.067113715 +0000 UTC m=+0.065020353 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 02 10:01:42 np0005541914.localdomain podman[306175]: 2025-12-02 10:01:42.082979426 +0000 UTC m=+0.080886134 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, release=1755695350, managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, architecture=x86_64, build-date=2025-08-20T13:12:41)
Dec 02 10:01:42 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:01:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:01:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:01:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:01:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:01:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:01:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:01:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:01:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:01:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:01:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:01:42 np0005541914.localdomain podman[306174]: 2025-12-02 10:01:42.179557084 +0000 UTC m=+0.178197945 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:01:42 np0005541914.localdomain podman[306174]: 2025-12-02 10:01:42.190625839 +0000 UTC m=+0.189266720 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:01:42 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:01:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:01:42 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v51: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 10:01:44 np0005541914.localdomain ceph-mon[301710]: pgmap v51: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 10:01:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v52: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 10:01:46 np0005541914.localdomain ceph-mon[301710]: pgmap v52: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 10:01:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v53: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 10:01:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:01:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:01:48 np0005541914.localdomain podman[306217]: 2025-12-02 10:01:48.091978316 +0000 UTC m=+0.097501927 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 02 10:01:48 np0005541914.localdomain ceph-mon[301710]: pgmap v53: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 10:01:48 np0005541914.localdomain podman[306217]: 2025-12-02 10:01:48.103498316 +0000 UTC m=+0.109021967 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:01:48 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:01:48 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v54: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 10:01:50 np0005541914.localdomain sshd[306235]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:01:50 np0005541914.localdomain ceph-mon[301710]: pgmap v54: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Dec 02 10:01:50 np0005541914.localdomain sshd[306235]: error: kex_exchange_identification: Connection closed by remote host
Dec 02 10:01:50 np0005541914.localdomain sshd[306235]: Connection closed by 43.251.161.76 port 51448
Dec 02 10:01:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v55: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:51 np0005541914.localdomain ceph-mon[301710]: pgmap v55: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:01:52 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3800101711' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:01:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v56: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:53 np0005541914.localdomain ceph-mon[301710]: pgmap v56: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:54 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/4063615434' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:01:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v57: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:55 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e92 e92: 6 total, 6 up, 6 in
Dec 02 10:01:55 np0005541914.localdomain ceph-mon[301710]: pgmap v57: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:56 np0005541914.localdomain ceph-mon[301710]: osdmap e92: 6 total, 6 up, 6 in
Dec 02 10:01:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v59: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:01:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 e93: 6 total, 6 up, 6 in
Dec 02 10:01:57 np0005541914.localdomain ceph-mon[301710]: pgmap v59: 177 pgs: 177 active+clean; 105 MiB data, 584 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:01:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:58.664 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:58.665 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:58.666 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:58.666 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:58.666 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Dec 02 10:01:59 np0005541914.localdomain ceph-mon[301710]: osdmap e93: 6 total, 6 up, 6 in
Dec 02 10:01:59 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/118532377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:01:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:59.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:59.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:01:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:59.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:01:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:59.685 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:01:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:59.686 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:59.687 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:01:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:59.725 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:01:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:59.726 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:01:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:59.726 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:01:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:59.726 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:01:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:01:59.727 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:02:00 np0005541914.localdomain ceph-mon[301710]: pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Dec 02 10:02:00 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3891973243' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:02:00 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:02:00 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1269445713' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:02:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:00.189 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:02:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:00.397 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:02:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:00.398 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11985MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:02:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:00.399 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:02:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:00.399 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:02:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:00.587 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:02:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:00.587 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:02:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:00.606 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:02:00 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:02:00Z|00038|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Dec 02 10:02:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Dec 02 10:02:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:02:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2528476716' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:02:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:01.017 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:02:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:01.024 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:02:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:01.133 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:02:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:01.136 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:02:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:01.137 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:02:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1269445713' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:02:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2528476716' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:02:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:01.978 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:02:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:01.993 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:02:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:01.993 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:02:02 np0005541914.localdomain ceph-mon[301710]: pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 5.1 MiB/s wr, 41 op/s
Dec 02 10:02:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:02:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 02 10:02:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:02:03.173 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:02:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:02:03.174 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:02:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:02:03.174 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:02:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:02:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:02:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:02:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:02:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:02:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19184 "" "Go-http-client/1.1"
Dec 02 10:02:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:02:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:02:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:02:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:02:04 np0005541914.localdomain podman[306281]: 2025-12-02 10:02:04.088087053 +0000 UTC m=+0.087371911 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:02:04 np0005541914.localdomain podman[306281]: 2025-12-02 10:02:04.12197104 +0000 UTC m=+0.121255838 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:02:04 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:02:04 np0005541914.localdomain podman[306280]: 2025-12-02 10:02:04.197956594 +0000 UTC m=+0.197930142 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 10:02:04 np0005541914.localdomain ceph-mon[301710]: pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Dec 02 10:02:04 np0005541914.localdomain podman[306282]: 2025-12-02 10:02:04.256249312 +0000 UTC m=+0.247664961 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:02:04 np0005541914.localdomain podman[306282]: 2025-12-02 10:02:04.266058399 +0000 UTC m=+0.257474038 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:02:04 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:02:04 np0005541914.localdomain podman[306280]: 2025-12-02 10:02:04.283178998 +0000 UTC m=+0.283152546 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 10:02:04 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:02:04 np0005541914.localdomain podman[306283]: 2025-12-02 10:02:04.303282677 +0000 UTC m=+0.294695376 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 02 10:02:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:02:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4056391954' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:02:04 np0005541914.localdomain podman[306283]: 2025-12-02 10:02:04.370764593 +0000 UTC m=+0.362177312 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 02 10:02:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:02:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4056391954' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:02:04 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:02:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 4.5 MiB/s wr, 42 op/s
Dec 02 10:02:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4056391954' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:02:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4056391954' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:02:06 np0005541914.localdomain ceph-mon[301710]: pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 4.5 MiB/s wr, 42 op/s
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:02:06
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['manila_data', 'vms', 'backups', 'images', '.mgr', 'volumes', 'manila_metadata']
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v65: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16)
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:02:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:02:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:02:08 np0005541914.localdomain ceph-mon[301710]: pgmap v65: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Dec 02 10:02:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 3.0 KiB/s rd, 465 B/s wr, 4 op/s
Dec 02 10:02:10 np0005541914.localdomain ceph-mon[301710]: pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 3.0 KiB/s rd, 465 B/s wr, 4 op/s
Dec 02 10:02:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Dec 02 10:02:11 np0005541914.localdomain ceph-mon[301710]: pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Dec 02 10:02:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:02:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:02:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:02:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:02:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:02:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:02:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:02:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:02:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:02:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:02:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:02:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v68: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Dec 02 10:02:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:02:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:02:13 np0005541914.localdomain podman[306365]: 2025-12-02 10:02:13.076235619 +0000 UTC m=+0.078994976 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:02:13 np0005541914.localdomain podman[306365]: 2025-12-02 10:02:13.08384616 +0000 UTC m=+0.086605517 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:02:13 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:02:13 np0005541914.localdomain podman[306366]: 2025-12-02 10:02:13.12077559 +0000 UTC m=+0.122171946 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal)
Dec 02 10:02:13 np0005541914.localdomain podman[306366]: 2025-12-02 10:02:13.157712499 +0000 UTC m=+0.159108885 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, version=9.6, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, release=1755695350, container_name=openstack_network_exporter)
Dec 02 10:02:13 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:02:13 np0005541914.localdomain ceph-mon[301710]: pgmap v68: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s rd, 426 B/s wr, 3 op/s
Dec 02 10:02:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.440 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:02:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:02:15 np0005541914.localdomain sudo[306404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:02:15 np0005541914.localdomain sudo[306404]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:02:15 np0005541914.localdomain sudo[306404]: pam_unix(sudo:session): session closed for user root
Dec 02 10:02:15 np0005541914.localdomain sudo[306422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:02:15 np0005541914.localdomain sudo[306422]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:02:15 np0005541914.localdomain ceph-mon[301710]: pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:16 np0005541914.localdomain sudo[306422]: pam_unix(sudo:session): session closed for user root
Dec 02 10:02:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:02:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:02:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:02:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:02:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:02:16 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 1cc76e6b-84f6-41ff-8ef9-5c2a14957181 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:02:16 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 1cc76e6b-84f6-41ff-8ef9-5c2a14957181 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:02:16 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 1cc76e6b-84f6-41ff-8ef9-5c2a14957181 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:02:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:02:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:02:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:16 np0005541914.localdomain sudo[306472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:02:16 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:02:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:02:16 np0005541914.localdomain sudo[306472]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:02:16 np0005541914.localdomain sudo[306472]: pam_unix(sudo:session): session closed for user root
Dec 02 10:02:16 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:02:16 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:02:16 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:02:16 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:02:16 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:02:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:02:17 np0005541914.localdomain ceph-mon[301710]: pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:18 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:02:19 np0005541914.localdomain systemd[297067]: Created slice User Background Tasks Slice.
Dec 02 10:02:19 np0005541914.localdomain systemd[297067]: Starting Cleanup of User's Temporary Files and Directories...
Dec 02 10:02:19 np0005541914.localdomain podman[306490]: 2025-12-02 10:02:19.092076047 +0000 UTC m=+0.087506104 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 10:02:19 np0005541914.localdomain systemd[297067]: Finished Cleanup of User's Temporary Files and Directories.
Dec 02 10:02:19 np0005541914.localdomain podman[306490]: 2025-12-02 10:02:19.106872426 +0000 UTC m=+0.102302473 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd)
Dec 02 10:02:19 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:02:20 np0005541914.localdomain ceph-mon[301710]: pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:21 np0005541914.localdomain ceph-mon[301710]: pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:02:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v73: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:23 np0005541914.localdomain ceph-mon[301710]: pgmap v73: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v74: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:26 np0005541914.localdomain ceph-mon[301710]: pgmap v74: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:02:27 np0005541914.localdomain sshd[306511]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:02:27 np0005541914.localdomain sshd[306511]: error: kex_exchange_identification: Connection closed by remote host
Dec 02 10:02:27 np0005541914.localdomain sshd[306511]: Connection closed by 193.32.162.146 port 53818
Dec 02 10:02:28 np0005541914.localdomain ceph-mon[301710]: pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:02:29.038 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:02:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:02:29.040 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:02:30 np0005541914.localdomain ceph-mon[301710]: pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:31 np0005541914.localdomain ceph-mon[301710]: pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:02:32 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v78: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:02:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:02:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:02:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:02:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:02:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19188 "" "Go-http-client/1.1"
Dec 02 10:02:33 np0005541914.localdomain ceph-mon[301710]: pgmap v78: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:02:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:02:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:02:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:02:35 np0005541914.localdomain podman[306513]: 2025-12-02 10:02:35.093847205 +0000 UTC m=+0.087953518 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 10:02:35 np0005541914.localdomain podman[306513]: 2025-12-02 10:02:35.102732925 +0000 UTC m=+0.096839198 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 10:02:35 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:02:35 np0005541914.localdomain podman[306515]: 2025-12-02 10:02:35.154876975 +0000 UTC m=+0.138140399 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Dec 02 10:02:35 np0005541914.localdomain podman[306515]: 2025-12-02 10:02:35.196188458 +0000 UTC m=+0.179451842 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 10:02:35 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:02:35 np0005541914.localdomain podman[306512]: 2025-12-02 10:02:35.244646907 +0000 UTC m=+0.239706229 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:02:35 np0005541914.localdomain podman[306512]: 2025-12-02 10:02:35.254502946 +0000 UTC m=+0.249562238 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 02 10:02:35 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:02:35 np0005541914.localdomain podman[306514]: 2025-12-02 10:02:35.204289883 +0000 UTC m=+0.192628051 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 10:02:35 np0005541914.localdomain podman[306514]: 2025-12-02 10:02:35.338033669 +0000 UTC m=+0.326371837 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:02:35 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:02:35 np0005541914.localdomain ceph-mon[301710]: pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:36 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:02:36.042 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:02:36 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v80: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:02:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:02:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:02:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:02:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:02:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:02:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:02:38 np0005541914.localdomain ceph-mon[301710]: pgmap v80: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:38 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:02:38.771 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:02:38Z, description=, device_id=f7309812-362b-4bd1-84da-e909158b6cbe, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ca4340>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ca4910>], id=d5a7ee3e-3c8d-4f8c-ad01-26038c29d245, ip_allocation=immediate, mac_address=fa:16:3e:41:46:7b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=189, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:02:38Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:02:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v81: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:38 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:02:38 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:02:38 np0005541914.localdomain podman[306612]: 2025-12-02 10:02:38.969236646 +0000 UTC m=+0.054583977 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:02:38 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:02:39 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:02:39.210 262347 INFO neutron.agent.dhcp.agent [None req-13bf1048-a70a-4e37-9d9f-5b7e1b84444c - - - - - -] DHCP configuration for ports {'d5a7ee3e-3c8d-4f8c-ad01-26038c29d245'} is completed
Dec 02 10:02:40 np0005541914.localdomain ceph-mon[301710]: pgmap v81: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:41 np0005541914.localdomain ceph-mon[301710]: pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:02:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:02:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:02:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:02:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:02:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:02:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:02:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:02:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:02:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:02:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:02:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:02:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:02:42 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v83: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:02:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:02:43 np0005541914.localdomain ceph-mon[301710]: pgmap v83: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:44 np0005541914.localdomain systemd[1]: tmp-crun.IZswFC.mount: Deactivated successfully.
Dec 02 10:02:44 np0005541914.localdomain podman[306634]: 2025-12-02 10:02:44.038767583 +0000 UTC m=+0.048804261 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7, vcs-type=git)
Dec 02 10:02:44 np0005541914.localdomain podman[306633]: 2025-12-02 10:02:44.23459514 +0000 UTC m=+0.246299738 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 10:02:44 np0005541914.localdomain podman[306634]: 2025-12-02 10:02:44.243175781 +0000 UTC m=+0.253212449 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, version=9.6, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64)
Dec 02 10:02:44 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:02:44 np0005541914.localdomain podman[306633]: 2025-12-02 10:02:44.423609031 +0000 UTC m=+0.435313629 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:02:44 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
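The node_exporter invocation above narrows the systemd collector with --collector.systemd.unit-include; a quick check of that filter in Python, assuming the exporter applies the pattern as an anchored match (so fullmatch is the right comparison):

    # Which unit names pass the unit-include filter from the node_exporter command line above.
    import re

    pattern = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')
    for unit in ['edpm_nova_compute.service', 'ovsdb-server.service',
                 'virtqemud.service', 'sshd.service']:
        print(unit, bool(pattern.fullmatch(unit)))
    # sshd.service does not match, so the collector skips it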
Dec 02 10:02:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:46 np0005541914.localdomain ceph-mon[301710]: pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:46.724 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:02:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:46.725 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:02:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:46.745 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 02 10:02:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:46.869 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:02:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:46.870 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:02:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:46.876 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 02 10:02:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:46.876 281049 INFO nova.compute.claims [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Claim successful on node np0005541914.localdomain
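The "Acquiring lock" / "acquired" / "released" DEBUG triplets around the resource claim come from oslo.concurrency's named locks. A minimal sketch of the same pattern using only the public lockutils helpers (the claim function is illustrative, not nova's code):

    # Named-lock pattern behind the lockutils DEBUG lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def claim(instance_uuid):
        # runs only while the 'compute_resources' lock is held; the journal's
        # acquired/released lines are logged around this call
        return instance_uuid

    # the same lock can also be taken explicitly
    with lockutils.lock('compute_resources'):
        pass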
Dec 02 10:02:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v85: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:47.012 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:02:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:02:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:02:47 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3473161244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:02:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:47.451 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:02:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:47.459 281049 DEBUG nova.compute.provider_tree [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:02:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:47.728 281049 DEBUG nova.scheduler.client.report [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
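Taking the inventory in the line above, the capacity Placement can schedule against is (total - reserved) x allocation_ratio per resource class; a small worked check:

    # Capacity implied by the provider inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,     'reserved': 0,   'allocation_ratio': 16.0},
        'MEMORY_MB': {'total': 15738, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 41,    'reserved': 0,   'allocation_ratio': 1.0},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 128.0, MEMORY_MB 15226.0, DISK_GB 41.0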
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: pgmap v85: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3473161244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.123300) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768123390, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1552, "num_deletes": 251, "total_data_size": 2354823, "memory_usage": 2499192, "flush_reason": "Manual Compaction"}
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768137907, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 1534273, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15495, "largest_seqno": 17042, "table_properties": {"data_size": 1528251, "index_size": 3300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13615, "raw_average_key_size": 20, "raw_value_size": 1515825, "raw_average_value_size": 2321, "num_data_blocks": 142, "num_entries": 653, "num_filter_entries": 653, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669660, "oldest_key_time": 1764669660, "file_creation_time": 1764669768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 14656 microseconds, and 5885 cpu microseconds.
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.137968) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 1534273 bytes OK
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.137997) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.139834) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.139851) EVENT_LOG_v1 {"time_micros": 1764669768139846, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.139869) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2347488, prev total WAL file size 2347488, number of live WAL files 2.
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.140737) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end)
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(1498KB)], [21(17MB)]
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768140860, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 19816931, "oldest_snapshot_seqno": -1}
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 12015 keys, 17167680 bytes, temperature: kUnknown
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768240801, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 17167680, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17100154, "index_size": 36385, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30085, "raw_key_size": 323674, "raw_average_key_size": 26, "raw_value_size": 16896089, "raw_average_value_size": 1406, "num_data_blocks": 1378, "num_entries": 12015, "num_filter_entries": 12015, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764669768, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.241250) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 17167680 bytes
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.243425) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.0 rd, 171.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 17.4 +0.0 blob) out(16.4 +0.0 blob), read-write-amplify(24.1) write-amplify(11.2) OK, records in: 12547, records dropped: 532 output_compression: NoCompression
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.243482) EVENT_LOG_v1 {"time_micros": 1764669768243444, "job": 10, "event": "compaction_finished", "compaction_time_micros": 100071, "compaction_time_cpu_micros": 55937, "output_level": 6, "num_output_files": 1, "total_output_size": 17167680, "num_input_records": 12547, "num_output_records": 12015, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768243902, "job": 10, "event": "table_file_deletion", "file_number": 23}
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669768247284, "job": 10, "event": "table_file_deletion", "file_number": 21}
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.140499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.247443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.247487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.247490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.247493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:02:48 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:02:48.247496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
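The JOB 10 summary above reports write-amplify(11.2) and read-write-amplify(24.1); both follow from the byte counts logged in the flush and compaction events, and the compaction range keys are simply hex-encoded monitor store keys ("paxos\x0011546" .. "paxos\x0011798"). A quick check, assuming the summary divides by the level-0 input size:

    # Amplification figures from JOB 10, recomputed from the logged byte counts.
    l0_input    = 1_534_273     # table #23 (the L0 flush output fed into the compaction)
    total_input = 19_816_931    # input_data_size in the compaction_started event
    output      = 17_167_680    # total_output_size in compaction_finished
    print(round(output / l0_input, 1))                   # 11.2  (write-amplify)
    print(round((total_input + output) / l0_input, 1))   # 24.1  (read-write-amplify)
    print(bytes.fromhex('7061786F73003131353436'))       # b'paxos\x0011546'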
Dec 02 10:02:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:48.429 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:02:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:48.430 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 02 10:02:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:48.500 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Dec 02 10:02:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:48.529 281049 INFO nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 02 10:02:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:48.554 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 02 10:02:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:48.667 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 02 10:02:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:48.670 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 02 10:02:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:48.671 281049 INFO nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating image(s)
Dec 02 10:02:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:48.705 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:02:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:48.741 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:02:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:48.787 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:02:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:48.793 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:02:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:48.794 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:02:48 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v86: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:49.906 281049 DEBUG nova.virt.libvirt.imagebackend [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Image locations are: [{'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/d85e840d-fa56-497b-b5bd-b49584d3e97a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/d85e840d-fa56-497b-b5bd-b49584d3e97a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
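The image locations above are rbd:// URLs of the form rbd://<fsid>/<pool>/<image>/<snapshot>; a small sketch of pulling those parts out (illustrative parsing, not nova's imagebackend code):

    # Split the rbd:// location logged above into cluster fsid, pool, image id and snapshot.
    url = ('rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/'
           'images/d85e840d-fa56-497b-b5bd-b49584d3e97a/snap')
    fsid, pool, image, snap = url[len('rbd://'):].split('/')
    print(pool, image, snap)   # images d85e840d-fa56-497b-b5bd-b49584d3e97a snap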
Dec 02 10:02:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:02:50 np0005541914.localdomain podman[306753]: 2025-12-02 10:02:50.478615067 +0000 UTC m=+0.481701576 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 10:02:50 np0005541914.localdomain ceph-mon[301710]: pgmap v86: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:50 np0005541914.localdomain podman[306753]: 2025-12-02 10:02:50.725518454 +0000 UTC m=+0.728605013 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 02 10:02:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v87: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:50 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:02:52 np0005541914.localdomain ceph-mon[301710]: pgmap v87: 177 pgs: 177 active+clean; 145 MiB data, 706 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:02:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:52.151 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:02:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:52.219 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.part --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:02:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:52.221 281049 DEBUG nova.virt.images [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] d85e840d-fa56-497b-b5bd-b49584d3e97a was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 02 10:02:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:52.222 281049 DEBUG nova.privsep.utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 02 10:02:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:52.223 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.part /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:02:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:02:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:52.404 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.part /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.converted" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:02:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:52.407 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:02:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:52.483 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc.converted --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:02:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:52.484 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
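The three commands above form the fetch-to-raw step: inspect the downloaded .part file, convert qcow2 to raw, and inspect the result. A bare-bones sketch of the same pipeline (plain subprocess calls; the prlimit wrapper and oslo plumbing are omitted):

    # qcow2 -> raw conversion of the cached base image, as run in the lines above.
    import json, subprocess

    base = '/var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc'

    def img_info(path):
        out = subprocess.check_output(
            ['qemu-img', 'info', path, '--force-share', '--output=json'])
        return json.loads(out)

    assert img_info(base + '.part')['format'] == 'qcow2'
    subprocess.check_call(['qemu-img', 'convert', '-t', 'none', '-O', 'raw',
                           '-f', 'qcow2', base + '.part', base + '.converted'])
    assert img_info(base + '.converted')['format'] == 'raw'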
Dec 02 10:02:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:52.511 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:02:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:52.515 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:02:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v88: 177 pgs: 177 active+clean; 152 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 105 KiB/s wr, 13 op/s
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.205 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.302 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] resizing rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
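The raw base image is then pushed into the vms pool and grown to the flavor's 1 GiB root disk (1073741824 bytes). The import is the exact CLI call shown above; nova performs the resize through librbd, so the CLI resize below is only an illustrative equivalent:

    # rbd import + resize, mirroring the two steps logged above.
    import subprocess

    disk = '268e09a3-7abe-4037-a14a-068e7b8a78fb_disk'
    base = '/var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc'
    ceph = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, disk,
                           '--image-format=2'] + ceph)
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', disk,
                           '--size', '1024'] + ceph)   # 1024 MiB = 1 GiB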
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.467 281049 DEBUG nova.objects.instance [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lazy-loading 'migration_context' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.775 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.775 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Ensure instance console log exists: /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.776 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.777 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.777 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.780 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=<?>,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T10:01:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'image_id': 'd85e840d-fa56-497b-b5bd-b49584d3e97a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.788 281049 WARNING nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.791 281049 DEBUG nova.virt.libvirt.host [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.792 281049 DEBUG nova.virt.libvirt.host [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.794 281049 DEBUG nova.virt.libvirt.host [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.795 281049 DEBUG nova.virt.libvirt.host [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.796 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.796 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T10:01:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='82beb986-6d20-42dc-b738-1cef87dee30f',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=<?>,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T10:01:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
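The m1.nano flavor in the line above (memory_mb=128, vcpus=1, root_gb=1) maps straight onto the guest XML further below; the only unit shift is that libvirt's <memory> element defaults to KiB:

    # Flavor -> guest XML unit check: 128 MiB expressed in KiB.
    memory_mb = 128
    print(memory_mb * 1024)   # 131072, matching <memory>131072</memory> below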
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.797 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.797 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.798 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.798 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.799 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.799 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.800 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.800 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.801 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.801 281049 DEBUG nova.virt.hardware [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
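The topology search above boils down to enumerating (sockets, cores, threads) splits of the vCPU count under the 65536-per-dimension limits; with a single vCPU only 1:1:1 survives. A simplified sketch of that enumeration (not nova's actual implementation):

    # Enumerate CPU topologies whose product equals the vCPU count.
    def possible_topologies(vcpus, limit=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, limit) + 1)
                for c in range(1, min(vcpus, limit) + 1)
                for t in range(1, min(vcpus, limit) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)] -> the single topology in the log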
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.807 281049 DEBUG nova.privsep.utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 02 10:02:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:53.808 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:02:54 np0005541914.localdomain ceph-mon[301710]: pgmap v88: 177 pgs: 177 active+clean; 152 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 105 KiB/s wr, 13 op/s
Dec 02 10:02:54 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:02:54 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2378891610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:54.281 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:54.308 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:54.311 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:02:54 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:02:54 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3529410076' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:54.783 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:54.787 281049 DEBUG nova.objects.instance [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lazy-loading 'pci_devices' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
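The two "ceph mon dump --format=json" calls above supply the monitor addresses (172.18.0.103-105, port 6789) that appear as <host> entries in the RBD disk sources of the guest XML that follows. A sketch of extracting them, assuming the usual mon dump layout with a top-level "mons" list whose entries carry an "addr" of the form "ip:port/nonce":

    # Turn `ceph mon dump` JSON into (host, port) pairs for the libvirt disk XML.
    import json, subprocess

    out = subprocess.check_output(['ceph', 'mon', 'dump', '--format=json',
                                   '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    mons = json.loads(out)['mons']
    print([tuple(m['addr'].split('/')[0].rsplit(':', 1)) for m in mons])
    # e.g. [('172.18.0.103', '6789'), ('172.18.0.104', '6789'), ('172.18.0.105', '6789')]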
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:54.831 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] End _get_guest_xml xml=<domain type="kvm">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   <uuid>268e09a3-7abe-4037-a14a-068e7b8a78fb</uuid>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   <name>instance-00000006</name>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   <memory>131072</memory>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   <vcpu>1</vcpu>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   <metadata>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-2084001492</nova:name>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <nova:creationTime>2025-12-02 10:02:53</nova:creationTime>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <nova:flavor name="m1.nano">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <nova:memory>128</nova:memory>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <nova:disk>1</nova:disk>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <nova:swap>0</nova:swap>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <nova:ephemeral>0</nova:ephemeral>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <nova:vcpus>1</nova:vcpus>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       </nova:flavor>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <nova:owner>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <nova:user uuid="96d084f3c3184bf4ac7b9635139dd4aa">tempest-UnshelveToHostMultiNodesTest-557689334-project-member</nova:user>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <nova:project uuid="09cae3217c5e430b8dbe17828669a978">tempest-UnshelveToHostMultiNodesTest-557689334</nova:project>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       </nova:owner>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <nova:root type="image" uuid="d85e840d-fa56-497b-b5bd-b49584d3e97a"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <nova:ports/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     </nova:instance>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   </metadata>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   <sysinfo type="smbios">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <system>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <entry name="manufacturer">RDO</entry>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <entry name="product">OpenStack Compute</entry>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <entry name="serial">268e09a3-7abe-4037-a14a-068e7b8a78fb</entry>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <entry name="uuid">268e09a3-7abe-4037-a14a-068e7b8a78fb</entry>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <entry name="family">Virtual Machine</entry>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     </system>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   </sysinfo>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   <os>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <boot dev="hd"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <smbios mode="sysinfo"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   </os>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   <features>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <acpi/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <apic/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <vmcoreinfo/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   </features>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   <clock offset="utc">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <timer name="pit" tickpolicy="delay"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <timer name="hpet" present="no"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   </clock>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   <cpu mode="host-model" match="exact">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <topology sockets="1" cores="1" threads="1"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   </cpu>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   <devices>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <disk type="network" device="disk">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <driver type="raw" cache="none"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <source protocol="rbd" name="vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.103" port="6789"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.104" port="6789"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.105" port="6789"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       </source>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <auth username="openstack">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <secret type="ceph" uuid="c7c8e171-a193-56fb-95fa-8879fcfa7074"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       </auth>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <target dev="vda" bus="virtio"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <disk type="network" device="cdrom">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <driver type="raw" cache="none"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <source protocol="rbd" name="vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.103" port="6789"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.104" port="6789"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.105" port="6789"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       </source>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <auth username="openstack">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:         <secret type="ceph" uuid="c7c8e171-a193-56fb-95fa-8879fcfa7074"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       </auth>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <target dev="sda" bus="sata"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <serial type="pty">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <log file="/var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/console.log" append="off"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     </serial>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <video>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <model type="virtio"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     </video>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <input type="tablet" bus="usb"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <rng model="virtio">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <backend model="random">/dev/urandom</backend>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     </rng>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <controller type="usb" index="0"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     <memballoon model="virtio">
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:       <stats period="10"/>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:     </memballoon>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:   </devices>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]: </domain>
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
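[annotation] The block above is the complete libvirt domain XML that nova's _get_guest_xml produced for instance 268e09a3-7abe-4037-a14a-068e7b8a78fb. As a hedged, illustrative sketch (not nova code), the same XML can be read back from the hypervisor with the python-libvirt bindings once the guest is running; the domain name instance-00000006 is the one systemd-machined reports further down in this log.

    # Illustrative only: assumes libvirt-python is installed and qemu:///system
    # is reachable on this compute host.
    import libvirt

    conn = libvirt.open('qemu:///system')           # local system hypervisor
    dom = conn.lookupByName('instance-00000006')    # domain name assigned by nova
    print(dom.XMLDesc(0))                           # live domain XML, comparable to the dump above
    conn.close()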
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:54.923 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:54.924 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:54.925 281049 INFO nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Using config drive
Dec 02 10:02:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v89: 177 pgs: 177 active+clean; 152 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 105 KiB/s wr, 13 op/s
Dec 02 10:02:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:54.960 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:02:54 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:02:54.976 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:02:54Z, description=, device_id=f36e6078-7c75-4c7a-9ef2-e9c65f9cb32e, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cbd400>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cbde50>], id=4265c039-facc-45ce-8659-ec262cbe782c, ip_allocation=immediate, mac_address=fa:16:3e:24:61:e3, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=263, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:02:54Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:02:55 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2378891610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:02:55 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3529410076' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:02:55 np0005541914.localdomain podman[306993]: 2025-12-02 10:02:55.176750843 +0000 UTC m=+0.050952636 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 02 10:02:55 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:02:55 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:02:55 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:02:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:55.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:02:55 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:02:55.710 262347 INFO neutron.agent.dhcp.agent [None req-ffbd8990-af5a-4331-a172-6451ebbfcb89 - - - - - -] DHCP configuration for ports {'4265c039-facc-45ce-8659-ec262cbe782c'} is completed
Dec 02 10:02:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:56.001 281049 INFO nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating config drive at /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config
Dec 02 10:02:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:56.007 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcm0b84ay execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:02:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:56.133 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcm0b84ay" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
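[annotation] The two processutils entries above show the config drive being mastered with mkisofs in roughly 0.13s. A minimal sketch of the same invocation with plain subprocess follows; the arguments are copied from the log, and /tmp/tmpcm0b84ay is a temporary metadata directory that only existed during that run.

    # Sketch only; mirrors the logged mkisofs command rather than nova's internals.
    import subprocess

    cmd = [
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpcm0b84ay',                 # temp dir nova filled with metadata
    ]
    subprocess.run(cmd, check=True)         # raises CalledProcessError on failure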
Dec 02 10:02:56 np0005541914.localdomain ceph-mon[301710]: pgmap v89: 177 pgs: 177 active+clean; 152 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 105 KiB/s wr, 13 op/s
Dec 02 10:02:56 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/660414268' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:02:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:56.336 281049 DEBUG nova.storage.rbd_utils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:02:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:56.341 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:02:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:56.593 281049 DEBUG oslo_concurrency.processutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:02:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:56.595 281049 INFO nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deleting local config drive /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config because it was imported into RBD.
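[annotation] Because this deployment uses RBD-backed storage, the freshly built ISO is pushed into the vms pool and the local copy removed, as the three entries above record. A hedged sketch of that import-then-delete step, using the same CLI arguments the log shows:

    # Sketch only; copies the logged rbd command, then removes the local file.
    import os
    import subprocess

    config_path = '/var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config'
    subprocess.run([
        'rbd', 'import', '--pool', 'vms',
        config_path, '268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ], check=True)
    os.unlink(config_path)                  # matches the 'Deleting local config drive' entry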
Dec 02 10:02:56 np0005541914.localdomain systemd[1]: Started libvirt secret daemon.
Dec 02 10:02:56 np0005541914.localdomain systemd-machined[202765]: New machine qemu-1-instance-00000006.
Dec 02 10:02:56 np0005541914.localdomain systemd[1]: Started Virtual Machine qemu-1-instance-00000006.
Dec 02 10:02:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v90: 177 pgs: 177 active+clean; 152 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 105 KiB/s wr, 13 op/s
Dec 02 10:02:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:56.981 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764669776.9629092, 268e09a3-7abe-4037-a14a-068e7b8a78fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:02:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:56.982 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Resumed (Lifecycle Event)
Dec 02 10:02:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:56.985 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 02 10:02:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:56.985 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 02 10:02:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:56.988 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance spawned successfully.
Dec 02 10:02:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:56.989 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.005 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.011 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.013 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.014 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.014 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.015 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.015 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.015 281049 DEBUG nova.virt.libvirt.driver [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.044 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.044 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764669776.9630558, 268e09a3-7abe-4037-a14a-068e7b8a78fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.045 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Started (Lifecycle Event)
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.088 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.091 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.151 281049 INFO nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Took 8.48 seconds to spawn the instance on the hypervisor.
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.152 281049 DEBUG nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.162 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.236 281049 INFO nova.compute.manager [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Took 10.41 seconds to build instance.
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.256 281049 DEBUG oslo_concurrency.lockutils [None req-cdd43979-0f22-482f-90a7-52882f2a2d2b 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
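[annotation] The lockutils entries bracketing this build show the whole spawn held a per-instance lock, keyed on the instance UUID, for 10.532s. As a rough, non-authoritative illustration of that pattern with the same library (not nova's actual code):

    # Illustrative pattern only.
    from oslo_concurrency import lockutils

    def locked_build(instance_uuid):
        # Serialise operations on one instance, as the log shows for
        # _locked_do_build_and_run_instance.
        with lockutils.lock(instance_uuid):
            pass  # build / spawn work runs while the lock is held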
Dec 02 10:02:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e94 e94: 6 total, 6 up, 6 in
Dec 02 10:02:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:02:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:57.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:02:58 np0005541914.localdomain ceph-mon[301710]: pgmap v90: 177 pgs: 177 active+clean; 152 MiB data, 706 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 105 KiB/s wr, 13 op/s
Dec 02 10:02:58 np0005541914.localdomain ceph-mon[301710]: osdmap e94: 6 total, 6 up, 6 in
Dec 02 10:02:58 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2247793407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:02:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:58.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:02:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:58.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:02:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:58.718 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:02:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:58.719 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:02:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:58.719 281049 INFO nova.compute.manager [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Shelving
Dec 02 10:02:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v92: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 79 op/s
Dec 02 10:02:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:02:58.995 281049 DEBUG nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 02 10:02:59 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e95 e95: 6 total, 6 up, 6 in
Dec 02 10:03:00 np0005541914.localdomain ceph-mon[301710]: pgmap v92: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 79 op/s
Dec 02 10:03:00 np0005541914.localdomain ceph-mon[301710]: osdmap e95: 6 total, 6 up, 6 in
Dec 02 10:03:00 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1870214121' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:00.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:03:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:00.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:03:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:00.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:03:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v94: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 467 KiB/s rd, 2.5 MiB/s wr, 79 op/s
Dec 02 10:03:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:00.983 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:03:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:00.984 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquired lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:03:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:00.984 281049 DEBUG nova.network.neutron [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 02 10:03:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:00.985 281049 DEBUG nova.objects.instance [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.246 281049 DEBUG nova.network.neutron [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.421 281049 DEBUG nova.network.neutron [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.441 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Releasing lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.442 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.442 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.443 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.443 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.540 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.541 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.541 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.542 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.565 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.566 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.566 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.567 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:03:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:01.568 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:03:01 np0005541914.localdomain ceph-mon[301710]: pgmap v94: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 467 KiB/s rd, 2.5 MiB/s wr, 79 op/s
Dec 02 10:03:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:03:02 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/388746291' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:02.061 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:03:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:02.110 281049 DEBUG nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 02 10:03:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:02.110 281049 DEBUG nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 02 10:03:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:02.229 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:03:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:02.230 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11770MB free_disk=41.774322509765625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:03:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:02.231 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:02.231 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:03:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/388746291' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1138207709' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v95: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 190 op/s
Dec 02 10:03:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:03.174 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:03.176 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:03.176 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:03.342 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Instance 268e09a3-7abe-4037-a14a-068e7b8a78fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 02 10:03:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:03.343 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:03:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:03.343 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
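[annotation] A quick cross-check of that final resource view, using only figures already present in this log: used_ram of 640MB is the 512MB reserved for the host plus the 128MB MEMORY_MB allocation held by instance 268e09a3-7abe-4037-a14a-068e7b8a78fb; used_disk of 1GB matches that instance's DISK_GB allocation of 1; and 8 total vcpus minus the 1 allocated VCPU leaves the free_vcpus=7 reported a few entries earlier.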
Dec 02 10:03:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:03:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:03:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:03:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:03:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:03:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19202 "" "Go-http-client/1.1"
Dec 02 10:03:03 np0005541914.localdomain ceph-mon[301710]: pgmap v95: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 190 op/s
Dec 02 10:03:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:03.808 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 02 10:03:04 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:03:04.328 2 INFO neutron.agent.securitygroups_rpc [None req-5c06dfad-89c5-4abc-a7de-de583f339085 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Security group member updated ['5c93e274-85ac-42d3-b949-bdb62e6b8c39']
Dec 02 10:03:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:04.433 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 02 10:03:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:04.434 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 02 10:03:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:04.453 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 02 10:03:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:04.485 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 02 10:03:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:04.529 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:03:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1475536354' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:03:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1475536354' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:03:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v96: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 190 op/s
Dec 02 10:03:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:03:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4080154938' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:04.988 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:03:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:04.996 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 02 10:03:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:05.079 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updated inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with generation 4 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 02 10:03:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:05.079 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 generation from 4 to 5 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 02 10:03:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:05.080 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
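[annotation] For context on the inventory just written to Placement: schedulable capacity per resource class is normally (total - reserved) * allocation_ratio, so these figures advertise roughly (8 - 0) * 16.0 = 128 VCPU, (15738 - 512) * 1.0 = 15226 MB of MEMORY_MB, and (41 - 1) * 1.0 = 40 GB of DISK_GB to the scheduler.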
Dec 02 10:03:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:05.113 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:03:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:05.113 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:05.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:03:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:05.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 02 10:03:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:05.645 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 02 10:03:05 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e96 e96: 6 total, 6 up, 6 in
Dec 02 10:03:05 np0005541914.localdomain ceph-mon[301710]: pgmap v96: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 2.5 MiB/s wr, 190 op/s
Dec 02 10:03:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/4080154938' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:05 np0005541914.localdomain ceph-mon[301710]: osdmap e96: 6 total, 6 up, 6 in
Dec 02 10:03:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:03:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:03:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:03:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:03:06 np0005541914.localdomain systemd[1]: tmp-crun.MYhWpy.mount: Deactivated successfully.
Dec 02 10:03:06 np0005541914.localdomain podman[307180]: 2025-12-02 10:03:06.150775802 +0000 UTC m=+0.100629773 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:03:06 np0005541914.localdomain podman[307179]: 2025-12-02 10:03:06.164810237 +0000 UTC m=+0.146065969 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 10:03:06 np0005541914.localdomain podman[307177]: 2025-12-02 10:03:06.203192891 +0000 UTC m=+0.192249120 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 02 10:03:06 np0005541914.localdomain podman[307177]: 2025-12-02 10:03:06.211804632 +0000 UTC m=+0.200860851 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 10:03:06 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:03:06 np0005541914.localdomain podman[307179]: 2025-12-02 10:03:06.229603761 +0000 UTC m=+0.210859463 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 10:03:06 np0005541914.localdomain podman[307180]: 2025-12-02 10:03:06.236775999 +0000 UTC m=+0.186630190 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 10:03:06 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:03:06 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:03:06 np0005541914.localdomain podman[307178]: 2025-12-02 10:03:06.116088059 +0000 UTC m=+0.100723755 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:03:06 np0005541914.localdomain podman[307178]: 2025-12-02 10:03:06.301009506 +0000 UTC m=+0.285645202 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 10:03:06 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:03:06
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['manila_data', 'vms', 'images', 'backups', 'manila_metadata', '.mgr', 'volumes']
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v98: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.5 MiB/s rd, 1.7 KiB/s wr, 111 op/s
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004817926437744277 of space, bias 1.0, pg target 0.9635852875488554 quantized to 32 (current 32)
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8570103846780196 quantized to 32 (current 32)
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001949853433835846 quantized to 16 (current 16)
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:03:06 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:03:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:03:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:03:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:03:07 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:03:07.116 2 INFO neutron.agent.securitygroups_rpc [None req-897aec69-e9e3-465e-bb92-a062d09dda9e 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Security group member updated ['5c93e274-85ac-42d3-b949-bdb62e6b8c39']
Dec 02 10:03:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:03:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:07.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:03:07 np0005541914.localdomain ceph-mon[301710]: pgmap v98: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.5 MiB/s rd, 1.7 KiB/s wr, 111 op/s
Dec 02 10:03:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:08Z|00039|memory|INFO|peak resident set size grew 53% in last 2264.5 seconds, from 13080 kB to 20000 kB
Dec 02 10:03:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:08Z|00040|memory|INFO|idl-cells-OVN_Southbound:7343 idl-cells-Open_vSwitch:1041 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:185 lflow-cache-entries-cache-matches:228 lflow-cache-size-KB:708 local_datapath_usage-KB:2 ofctrl_desired_flow_usage-KB:325 ofctrl_installed_flow_usage-KB:239 ofctrl_sb_flow_ref_usage-KB:127
Dec 02 10:03:08 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:08.598 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:08Z, description=, device_id=2f08599e-d6d6-408a-a486-e1f5476b437a, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c17a90>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c17a00>], id=4916a0d9-8c51-43e8-a712-508fc8b29742, ip_allocation=immediate, mac_address=fa:16:3e:15:88:70, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=364, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:03:08Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:03:08 np0005541914.localdomain podman[307276]: 2025-12-02 10:03:08.751294628 +0000 UTC m=+0.033395634 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:03:08 np0005541914.localdomain systemd[1]: tmp-crun.tSGOZP.mount: Deactivated successfully.
Dec 02 10:03:08 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 02 10:03:08 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:03:08 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:03:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v99: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 1.5 KiB/s wr, 93 op/s
Dec 02 10:03:08 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:08.937 262347 INFO neutron.agent.dhcp.agent [None req-b931de01-4140-4357-892c-4690f3d0965e - - - - - -] DHCP configuration for ports {'4916a0d9-8c51-43e8-a712-508fc8b29742'} is completed
Dec 02 10:03:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:09.041 281049 DEBUG nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 02 10:03:10 np0005541914.localdomain ceph-mon[301710]: pgmap v99: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 1.5 KiB/s wr, 93 op/s
Dec 02 10:03:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v100: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 1.4 KiB/s wr, 89 op/s
Dec 02 10:03:11 np0005541914.localdomain ceph-mon[301710]: pgmap v100: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 1.4 KiB/s wr, 89 op/s
Dec 02 10:03:11 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:11.844 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:10Z, description=, device_id=c633bc2a-d8d8-4d52-951c-727821eef4f5, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c7a9a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c11580>], id=03bfc4ec-c7c2-4fb4-8f6a-cb567b21dd97, ip_allocation=immediate, mac_address=fa:16:3e:dd:9c:3c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=401, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:03:10Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:03:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:03:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:03:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:03:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:03:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:03:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:03:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:03:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:03:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:03:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:03:12 np0005541914.localdomain podman[307312]: 2025-12-02 10:03:12.146229091 +0000 UTC m=+0.068346043 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 10:03:12 np0005541914.localdomain systemd[1]: tmp-crun.naN9Ii.mount: Deactivated successfully.
Dec 02 10:03:12 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 5 addresses
Dec 02 10:03:12 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:03:12 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:03:12 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:12.401 262347 INFO neutron.agent.dhcp.agent [None req-e4c035ea-f24e-4c10-9fcb-762f81b0aa77 - - - - - -] DHCP configuration for ports {'03bfc4ec-c7c2-4fb4-8f6a-cb567b21dd97'} is completed
Dec 02 10:03:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:03:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v101: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 351 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Dec 02 10:03:14 np0005541914.localdomain ceph-mon[301710]: pgmap v101: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 351 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Dec 02 10:03:14 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2025890447' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v102: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 351 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Dec 02 10:03:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:03:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:03:15 np0005541914.localdomain podman[307333]: 2025-12-02 10:03:15.076940509 +0000 UTC m=+0.084886015 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:03:15 np0005541914.localdomain podman[307333]: 2025-12-02 10:03:15.085728965 +0000 UTC m=+0.093674521 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:03:15 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:03:15 np0005541914.localdomain systemd[1]: tmp-crun.BqIG1R.mount: Deactivated successfully.
Dec 02 10:03:15 np0005541914.localdomain podman[307334]: 2025-12-02 10:03:15.189013087 +0000 UTC m=+0.194129687 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, version=9.6, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, release=1755695350, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Dec 02 10:03:15 np0005541914.localdomain podman[307334]: 2025-12-02 10:03:15.203160586 +0000 UTC m=+0.208277236 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 10:03:15 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:03:16 np0005541914.localdomain ceph-mon[301710]: pgmap v102: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 351 KiB/s rd, 2.6 MiB/s wr, 71 op/s
Dec 02 10:03:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v103: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 314 KiB/s rd, 2.3 MiB/s wr, 64 op/s
Dec 02 10:03:17 np0005541914.localdomain sudo[307377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:03:17 np0005541914.localdomain sudo[307377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:03:17 np0005541914.localdomain sudo[307377]: pam_unix(sudo:session): session closed for user root
Dec 02 10:03:17 np0005541914.localdomain sudo[307395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:03:17 np0005541914.localdomain sudo[307395]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:03:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3290176110' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:03:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/4171051508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:03:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:03:17 np0005541914.localdomain sudo[307395]: pam_unix(sudo:session): session closed for user root
Dec 02 10:03:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:03:17 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:03:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:03:17 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:03:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:03:17 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev b2c4c858-4826-44f1-b8ec-0a71ce93fc1e (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:03:17 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev b2c4c858-4826-44f1-b8ec-0a71ce93fc1e (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:03:17 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event b2c4c858-4826-44f1-b8ec-0a71ce93fc1e (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:03:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:03:17 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:03:18 np0005541914.localdomain sudo[307446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:03:18 np0005541914.localdomain sudo[307446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:03:18 np0005541914.localdomain sudo[307446]: pam_unix(sudo:session): session closed for user root
Dec 02 10:03:18 np0005541914.localdomain ceph-mon[301710]: pgmap v103: 177 pgs: 177 active+clean; 225 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 314 KiB/s rd, 2.3 MiB/s wr, 64 op/s
Dec 02 10:03:18 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:03:18 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:03:18 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:03:18 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:03:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v104: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 94 op/s
Dec 02 10:03:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:20.089 281049 DEBUG nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 02 10:03:20 np0005541914.localdomain ceph-mon[301710]: pgmap v104: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 94 op/s
Dec 02 10:03:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v105: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 93 op/s
Dec 02 10:03:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:03:21 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:21.297 262347 INFO neutron.agent.linux.ip_lib [None req-5acd532c-08ca-4a9d-a064-c6abe7e794ca - - - - - -] Device tap955afcdf-dd cannot be used as it has no MAC address
Dec 02 10:03:21 np0005541914.localdomain kernel: device tap955afcdf-dd entered promiscuous mode
Dec 02 10:03:21 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669801.3236] manager: (tap955afcdf-dd): new Generic device (/org/freedesktop/NetworkManager/Devices/15)
Dec 02 10:03:21 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:21Z|00041|binding|INFO|Claiming lport 955afcdf-dd99-4cb5-939f-5919590f8e3b for this chassis.
Dec 02 10:03:21 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:21Z|00042|binding|INFO|955afcdf-dd99-4cb5-939f-5919590f8e3b: Claiming unknown
Dec 02 10:03:21 np0005541914.localdomain systemd-udevd[307488]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:03:21 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:21.337 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-5376d097-2da8-4019-8e01-8b89ed4f41cf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5376d097-2da8-4019-8e01-8b89ed4f41cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'edfb5cc295894fc9a8dc307891edb831', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a37dcb8f-9361-4075-bf0e-f19264ce897a, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=955afcdf-dd99-4cb5-939f-5919590f8e3b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:03:21 np0005541914.localdomain podman[307467]: 2025-12-02 10:03:21.337911861 +0000 UTC m=+0.094702843 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 10:03:21 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:21.339 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 955afcdf-dd99-4cb5-939f-5919590f8e3b in datapath 5376d097-2da8-4019-8e01-8b89ed4f41cf bound to our chassis
Dec 02 10:03:21 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:21.344 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5376d097-2da8-4019-8e01-8b89ed4f41cf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:03:21 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:21.345 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9bee3e-7826-4a67-9162-eb61b55bff38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:21 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:21Z|00043|binding|INFO|Setting lport 955afcdf-dd99-4cb5-939f-5919590f8e3b ovn-installed in OVS
Dec 02 10:03:21 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:21Z|00044|binding|INFO|Setting lport 955afcdf-dd99-4cb5-939f-5919590f8e3b up in Southbound
Dec 02 10:03:21 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap955afcdf-dd: No such device
Dec 02 10:03:21 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap955afcdf-dd: No such device
Dec 02 10:03:21 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap955afcdf-dd: No such device
Dec 02 10:03:21 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap955afcdf-dd: No such device
Dec 02 10:03:21 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap955afcdf-dd: No such device
Dec 02 10:03:21 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap955afcdf-dd: No such device
Dec 02 10:03:21 np0005541914.localdomain podman[307467]: 2025-12-02 10:03:21.384100602 +0000 UTC m=+0.140891654 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:03:21 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap955afcdf-dd: No such device
Dec 02 10:03:21 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap955afcdf-dd: No such device
Dec 02 10:03:21 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:03:21 np0005541914.localdomain ceph-mon[301710]: pgmap v105: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 93 op/s
Dec 02 10:03:21 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:03:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:03:22 np0005541914.localdomain podman[307564]: 2025-12-02 10:03:22.308016624 +0000 UTC m=+0.120399922 container create c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 02 10:03:22 np0005541914.localdomain podman[307564]: 2025-12-02 10:03:22.233184706 +0000 UTC m=+0.045568014 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:03:22 np0005541914.localdomain systemd[1]: Started libpod-conmon-c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367.scope.
Dec 02 10:03:22 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:03:22 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1e695aeb824d8fe3d96595a317c2bff704005f07a946fa3818e91a4a7fa6e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:03:22 np0005541914.localdomain podman[307564]: 2025-12-02 10:03:22.37876608 +0000 UTC m=+0.191149338 container init c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:03:22 np0005541914.localdomain systemd[1]: tmp-crun.1FTKeb.mount: Deactivated successfully.
Dec 02 10:03:22 np0005541914.localdomain dnsmasq[307582]: started, version 2.85 cachesize 150
Dec 02 10:03:22 np0005541914.localdomain dnsmasq[307582]: DNS service limited to local subnets
Dec 02 10:03:22 np0005541914.localdomain dnsmasq[307582]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:03:22 np0005541914.localdomain dnsmasq[307582]: warning: no upstream servers configured
Dec 02 10:03:22 np0005541914.localdomain dnsmasq-dhcp[307582]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 02 10:03:22 np0005541914.localdomain dnsmasq[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/addn_hosts - 0 addresses
Dec 02 10:03:22 np0005541914.localdomain dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/host
Dec 02 10:03:22 np0005541914.localdomain dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/opts
Dec 02 10:03:22 np0005541914.localdomain podman[307564]: 2025-12-02 10:03:22.402838469 +0000 UTC m=+0.215221767 container start c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:03:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:03:22 np0005541914.localdomain systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec 02 10:03:22 np0005541914.localdomain systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Consumed 13.513s CPU time.
Dec 02 10:03:22 np0005541914.localdomain systemd-machined[202765]: Machine qemu-1-instance-00000006 terminated.
Dec 02 10:03:22 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:22.586 262347 INFO neutron.agent.dhcp.agent [None req-04c883c2-f92e-43dc-8ef1-30e0a2586132 - - - - - -] DHCP configuration for ports {'49323d14-7592-4a54-9b77-1ecf72f22e67'} is completed
Dec 02 10:03:22 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:22.891 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:22Z, description=, device_id=5ca2e9db-941e-4fab-a091-25cb4779ba29, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c31820>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c31430>], id=8f3263d0-7d7a-4de5-9c22-9ff5f0990009, ip_allocation=immediate, mac_address=fa:16:3e:b7:c0:89, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=443, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:03:22Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:03:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v106: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 121 op/s
Dec 02 10:03:23 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:03:23 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses
Dec 02 10:03:23 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:03:23 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:03:23 np0005541914.localdomain podman[307605]: 2025-12-02 10:03:23.101049568 +0000 UTC m=+0.060060991 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:03:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:23.106 281049 INFO nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance shutdown successfully after 24 seconds.
Dec 02 10:03:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:23.113 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance destroyed successfully.
Dec 02 10:03:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:23.113 281049 DEBUG nova.objects.instance [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lazy-loading 'numa_topology' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:03:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:23.182 281049 INFO nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Beginning cold snapshot process
Dec 02 10:03:23 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:23.619 262347 INFO neutron.agent.dhcp.agent [None req-62eddd7d-7df6-421a-b8f7-a131f5868c24 - - - - - -] DHCP configuration for ports {'8f3263d0-7d7a-4de5-9c22-9ff5f0990009'} is completed
Dec 02 10:03:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:23.787 281049 DEBUG nova.virt.libvirt.imagebackend [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] No parent info for d85e840d-fa56-497b-b5bd-b49584d3e97a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 02 10:03:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:23.819 281049 DEBUG nova.storage.rbd_utils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] creating snapshot(c66f7cd910e54661adc476e3131c14ea) on rbd image(268e09a3-7abe-4037-a14a-068e7b8a78fb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 02 10:03:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e97 e97: 6 total, 6 up, 6 in
Dec 02 10:03:24 np0005541914.localdomain ceph-mon[301710]: pgmap v106: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 121 op/s
Dec 02 10:03:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:24.136 281049 DEBUG nova.storage.rbd_utils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] cloning vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk@c66f7cd910e54661adc476e3131c14ea to images/c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 02 10:03:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:24.514 281049 DEBUG nova.storage.rbd_utils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] flattening images/c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 02 10:03:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v108: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 74 op/s
Dec 02 10:03:25 np0005541914.localdomain ceph-mon[301710]: osdmap e97: 6 total, 6 up, 6 in
Dec 02 10:03:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:25.451 281049 DEBUG nova.storage.rbd_utils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] removing snapshot(c66f7cd910e54661adc476e3131c14ea) on rbd image(268e09a3-7abe-4037-a14a-068e7b8a78fb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 02 10:03:26 np0005541914.localdomain ceph-mon[301710]: pgmap v108: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 74 op/s
Dec 02 10:03:26 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e98 e98: 6 total, 6 up, 6 in
Dec 02 10:03:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:26.245 281049 DEBUG nova.storage.rbd_utils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] creating snapshot(snap) on rbd image(c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 02 10:03:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v110: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 640 KiB/s rd, 42 KiB/s wr, 41 op/s
Dec 02 10:03:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e99 e99: 6 total, 6 up, 6 in
Dec 02 10:03:27 np0005541914.localdomain ceph-mon[301710]: osdmap e98: 6 total, 6 up, 6 in
Dec 02 10:03:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:03:28 np0005541914.localdomain ceph-mon[301710]: pgmap v110: 177 pgs: 177 active+clean; 271 MiB data, 932 MiB used, 41 GiB / 42 GiB avail; 640 KiB/s rd, 42 KiB/s wr, 41 op/s
Dec 02 10:03:28 np0005541914.localdomain ceph-mon[301710]: osdmap e99: 6 total, 6 up, 6 in
Dec 02 10:03:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v112: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 251 op/s
Dec 02 10:03:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:29.627 281049 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Creating tmpfile /var/lib/nova/instances/tmp6m2ihysk to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041
Dec 02 10:03:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:29.653 281049 DEBUG nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=12288,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6m2ihysk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476
Dec 02 10:03:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:29.704 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:03:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:29.704 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:03:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:29.735 281049 INFO nova.compute.rpcapi [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Dec 02 10:03:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:29.736 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:03:29 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:29.791 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:25Z, description=, device_id=5ca2e9db-941e-4fab-a091-25cb4779ba29, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cccfd0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f403558bc70>], id=667c36b3-db20-46dc-9ff7-d5dee0a9356b, ip_allocation=immediate, mac_address=fa:16:3e:ff:f8:bc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:14Z, description=, dns_domain=, id=5376d097-2da8-4019-8e01-8b89ed4f41cf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1756550547-network, port_security_enabled=True, project_id=edfb5cc295894fc9a8dc307891edb831, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5622, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=417, status=ACTIVE, subnets=['d68ba9ba-0fb5-4191-b45f-f1c149653c02'], tags=[], tenant_id=edfb5cc295894fc9a8dc307891edb831, updated_at=2025-12-02T10:03:19Z, vlan_transparent=None, network_id=5376d097-2da8-4019-8e01-8b89ed4f41cf, port_security_enabled=False, project_id=edfb5cc295894fc9a8dc307891edb831, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=445, status=DOWN, tags=[], tenant_id=edfb5cc295894fc9a8dc307891edb831, updated_at=2025-12-02T10:03:25Z on network 5376d097-2da8-4019-8e01-8b89ed4f41cf
Dec 02 10:03:29 np0005541914.localdomain podman[307785]: 2025-12-02 10:03:29.99619321 +0000 UTC m=+0.054867044 container kill c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 10:03:29 np0005541914.localdomain dnsmasq[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/addn_hosts - 1 addresses
Dec 02 10:03:29 np0005541914.localdomain dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/host
Dec 02 10:03:29 np0005541914.localdomain dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/opts
Dec 02 10:03:30 np0005541914.localdomain ceph-mon[301710]: pgmap v112: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 251 op/s
Dec 02 10:03:30 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e100 e100: 6 total, 6 up, 6 in
Dec 02 10:03:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v114: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 251 op/s
Dec 02 10:03:31 np0005541914.localdomain ceph-mon[301710]: osdmap e100: 6 total, 6 up, 6 in
Dec 02 10:03:31 np0005541914.localdomain ceph-mon[301710]: pgmap v114: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 251 op/s
Dec 02 10:03:32 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:32.335 262347 INFO neutron.agent.dhcp.agent [None req-f75195db-2c41-4bc9-bd43-698d7089f316 - - - - - -] DHCP configuration for ports {'667c36b3-db20-46dc-9ff7-d5dee0a9356b'} is completed
Dec 02 10:03:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:03:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:32.429 281049 INFO nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Snapshot image upload complete
Dec 02 10:03:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:32.430 281049 DEBUG nova.compute.manager [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:03:32 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:32.434 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:29Z, description=, device_id=2f998fb5-566b-4272-a579-f71fea3296d4, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034d3fdf0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c63610>], id=3fa961f6-eb13-451b-a216-8747851567ad, ip_allocation=immediate, mac_address=fa:16:3e:db:c7:ae, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=446, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:03:29Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:03:32 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:32.665 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:03:32 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:32.667 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:03:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:32.685 281049 INFO nova.compute.manager [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Shelve offloading
Dec 02 10:03:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:32.695 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance destroyed successfully.
Dec 02 10:03:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:32.696 281049 DEBUG nova.compute.manager [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:03:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:32.699 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:03:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:32.700 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquired lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:03:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:32.700 281049 DEBUG nova.network.neutron [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 02 10:03:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:32.767 281049 DEBUG nova.network.neutron [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 02 10:03:32 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses
Dec 02 10:03:32 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:03:32 np0005541914.localdomain systemd[1]: tmp-crun.XPpFpd.mount: Deactivated successfully.
Dec 02 10:03:32 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:03:32 np0005541914.localdomain podman[307823]: 2025-12-02 10:03:32.833573609 +0000 UTC m=+0.046275805 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:03:32 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v115: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 9.6 MiB/s rd, 6.9 MiB/s wr, 240 op/s
Dec 02 10:03:32 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:32.997 262347 INFO neutron.agent.dhcp.agent [None req-db9e299b-a10d-40d7-a058-32d8e31bdd97 - - - - - -] DHCP configuration for ports {'3fa961f6-eb13-451b-a216-8747851567ad'} is completed
Dec 02 10:03:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:33.004 281049 DEBUG nova.network.neutron [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:03:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:33.020 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Releasing lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:03:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:33.028 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance destroyed successfully.
Dec 02 10:03:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:33.029 281049 DEBUG nova.objects.instance [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lazy-loading 'resources' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:03:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:03:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:03:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:33.637 281049 INFO nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deleting instance files /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb_del
Dec 02 10:03:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:33.638 281049 INFO nova.virt.libvirt.driver [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deletion of /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb_del complete
Dec 02 10:03:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:03:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158570 "" "Go-http-client/1.1"
Dec 02 10:03:33 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:33.669 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:03:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:03:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19679 "" "Go-http-client/1.1"
Dec 02 10:03:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:33.784 281049 DEBUG nova.virt.libvirt.host [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 02 10:03:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:33.785 281049 INFO nova.virt.libvirt.host [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] UEFI support detected
Dec 02 10:03:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:33.922 281049 INFO nova.scheduler.client.report [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Deleted allocations for instance 268e09a3-7abe-4037-a14a-068e7b8a78fb
Dec 02 10:03:34 np0005541914.localdomain ceph-mon[301710]: pgmap v115: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 9.6 MiB/s rd, 6.9 MiB/s wr, 240 op/s
Dec 02 10:03:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:34.202 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:34.202 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:34.252 281049 DEBUG oslo_concurrency.processutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:03:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:03:34 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1135731250' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:34.715 281049 DEBUG oslo_concurrency.processutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:03:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:34.721 281049 DEBUG nova.compute.provider_tree [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:03:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:34.792 281049 DEBUG nova.scheduler.client.report [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:03:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:34.843 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v116: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 8.1 MiB/s rd, 5.8 MiB/s wr, 202 op/s
Dec 02 10:03:35 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1135731250' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:35.095 281049 DEBUG oslo_concurrency.lockutils [None req-6e187907-d676-4b56-a217-2fccf411986a 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 36.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:35.267 281049 DEBUG nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=12288,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6m2ihysk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='63092ab0-9432-4c74-933e-e9d5428e6162',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604
Dec 02 10:03:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:35.732 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquiring lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:03:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:35.732 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquired lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:03:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:35.733 281049 DEBUG nova.network.neutron [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 02 10:03:35 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:35.787 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:25Z, description=, device_id=5ca2e9db-941e-4fab-a091-25cb4779ba29, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cc2700>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cc20d0>], id=667c36b3-db20-46dc-9ff7-d5dee0a9356b, ip_allocation=immediate, mac_address=fa:16:3e:ff:f8:bc, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:14Z, description=, dns_domain=, id=5376d097-2da8-4019-8e01-8b89ed4f41cf, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ImagesNegativeTestJSON-1756550547-network, port_security_enabled=True, project_id=edfb5cc295894fc9a8dc307891edb831, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5622, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=417, status=ACTIVE, subnets=['d68ba9ba-0fb5-4191-b45f-f1c149653c02'], tags=[], tenant_id=edfb5cc295894fc9a8dc307891edb831, updated_at=2025-12-02T10:03:19Z, vlan_transparent=None, network_id=5376d097-2da8-4019-8e01-8b89ed4f41cf, port_security_enabled=False, project_id=edfb5cc295894fc9a8dc307891edb831, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=445, status=DOWN, tags=[], tenant_id=edfb5cc295894fc9a8dc307891edb831, updated_at=2025-12-02T10:03:25Z on network 5376d097-2da8-4019-8e01-8b89ed4f41cf
Dec 02 10:03:36 np0005541914.localdomain dnsmasq[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/addn_hosts - 1 addresses
Dec 02 10:03:36 np0005541914.localdomain podman[307901]: 2025-12-02 10:03:36.00416787 +0000 UTC m=+0.041625112 container kill c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 02 10:03:36 np0005541914.localdomain dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/host
Dec 02 10:03:36 np0005541914.localdomain dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/opts
Dec 02 10:03:36 np0005541914.localdomain ceph-mon[301710]: pgmap v116: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 8.1 MiB/s rd, 5.8 MiB/s wr, 202 op/s
Dec 02 10:03:36 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:36.298 262347 INFO neutron.agent.dhcp.agent [None req-9afe9aec-48f5-4778-90cb-3730bedb50bd - - - - - -] DHCP configuration for ports {'667c36b3-db20-46dc-9ff7-d5dee0a9356b'} is completed
Dec 02 10:03:36 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v117: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 6.7 MiB/s rd, 4.8 MiB/s wr, 167 op/s
Dec 02 10:03:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:03:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:03:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:03:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:03:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:03:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:03:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:03:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:03:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:03:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:03:37 np0005541914.localdomain systemd[1]: tmp-crun.atr2Ud.mount: Deactivated successfully.
Dec 02 10:03:37 np0005541914.localdomain podman[307921]: 2025-12-02 10:03:37.128924932 +0000 UTC m=+0.128903829 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:03:37 np0005541914.localdomain podman[307922]: 2025-12-02 10:03:37.082639059 +0000 UTC m=+0.081791001 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Dec 02 10:03:37 np0005541914.localdomain podman[307921]: 2025-12-02 10:03:37.140782352 +0000 UTC m=+0.140761229 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:03:37 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:03:37 np0005541914.localdomain podman[307920]: 2025-12-02 10:03:37.192936753 +0000 UTC m=+0.193788236 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 02 10:03:37 np0005541914.localdomain podman[307920]: 2025-12-02 10:03:37.197514022 +0000 UTC m=+0.198365495 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:03:37 np0005541914.localdomain podman[307923]: 2025-12-02 10:03:37.106934285 +0000 UTC m=+0.097258470 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:03:37 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:03:37 np0005541914.localdomain podman[307922]: 2025-12-02 10:03:37.21459258 +0000 UTC m=+0.213744492 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:03:37 np0005541914.localdomain podman[307923]: 2025-12-02 10:03:37.236550165 +0000 UTC m=+0.226874290 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:03:37 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:03:37 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.253 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.253 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.254 281049 INFO nova.compute.manager [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Unshelving
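The oslo_concurrency.lockutils DEBUG pair above shows nova serializing the unshelve under a per-instance lock before the "Unshelving" INFO line. A minimal sketch of the same oslo.concurrency locking pattern (the lock name is copied from the log; the function body is illustrative, not nova's actual code):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "268e09a3-7abe-4037-a14a-068e7b8a78fb"

    @lockutils.synchronized(INSTANCE_UUID)
    def do_unshelve_instance():
        # Everything here runs while the per-instance lock is held;
        # lockutils emits the "Acquiring lock"/"acquired"/"released"
        # DEBUG messages seen in the journal automatically.
        print("unshelving", INSTANCE_UUID)

    do_unshelve_instance()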
Dec 02 10:03:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:03:37 np0005541914.localdomain podman[308021]: 2025-12-02 10:03:37.440600612 +0000 UTC m=+0.046450919 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:03:37 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses
Dec 02 10:03:37 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:03:37 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.463 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.464 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.467 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'pci_requests' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.626 281049 DEBUG nova.network.neutron [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Updating instance_info_cache with network_info: [{"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.745 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'numa_topology' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.748 281049 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764669802.7416174, 268e09a3-7abe-4037-a14a-068e7b8a78fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.749 281049 INFO nova.compute.manager [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Stopped (Lifecycle Event)
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.919 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Releasing lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.921 281049 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=12288,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6m2ihysk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='63092ab0-9432-4c74-933e-e9d5428e6162',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.922 281049 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Creating instance directory: /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.923 281049 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Ensure instance console log exists: /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.924 281049 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.925 281049 DEBUG nova.virt.libvirt.vif [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T10:03:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-861747463',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005541913.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-861747463',id=7,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T10:03:21Z,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='np0005541913.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='cccbafb2e3c343b2aab51714734bddce',ramdisk_id='',reservation_id='r-sf2jj0i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-5814605',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-5814605-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T10:03:21Z,user_data=None,user_id='60f523e6d03743daa3ff6f5bc7122d00',uuid=63092ab0-9432-4c74-933e-e9d5428e6162,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.926 281049 DEBUG nova.network.os_vif_util [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Converting VIF {"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.928 281049 DEBUG nova.network.os_vif_util [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.929 281049 DEBUG os_vif [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
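The "Plugging vif VIFOpenVSwitch(...)" line is os-vif being handed the VIF object that nova.network.os_vif_util just converted. A hedged sketch of that call path using the os_vif public API; field values are copied from the log where possible, but the exact object construction is illustrative:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    net = network.Network(id="62df5f27-c8d9-4d79-9ad6-2f32e63bf47f",
                          bridge="br-int")
    port = vif.VIFOpenVSwitch(
        id="31de197b-ef56-4d2a-9fa2-293715a60004",
        address="fa:16:3e:8f:bb:bd",
        network=net,
        vif_name="tap31de197b-ef",
        bridge_name="br-int",
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="31de197b-ef56-4d2a-9fa2-293715a60004"),
    )
    info = instance_info.InstanceInfo(
        uuid="63092ab0-9432-4c74-933e-e9d5428e6162",
        name="tempest-LiveAutoBlockMigrationV225Test-server-861747463")

    # Delegates to the 'ovs' plugin, which issues the ovsdb transaction
    # visible further down in this log.
    os_vif.plug(port, info)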
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.969 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.969 281049 INFO nova.compute.claims [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Claim successful on node np0005541914.localdomain
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.982 281049 DEBUG ovsdbapp.backend.ovs_idl [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.983 281049 DEBUG ovsdbapp.backend.ovs_idl [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.983 281049 DEBUG ovsdbapp.backend.ovs_idl [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.984 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.984 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.985 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.985 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.987 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:37.989 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.008 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.008 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.009 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.010 281049 INFO oslo.privsep.daemon [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpim6erqeg/privsep.sock']
Dec 02 10:03:38 np0005541914.localdomain ceph-mon[301710]: pgmap v117: 177 pgs: 177 active+clean; 350 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 6.7 MiB/s rd, 4.8 MiB/s wr, 167 op/s
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.142 281049 DEBUG nova.compute.manager [None req-0f1d3d96-c07f-48e6-9675-f183a17c95f9 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.412 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.616 281049 INFO oslo.privsep.daemon [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Spawned new privsep daemon via rootwrap
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.522 308046 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.524 308046 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.526 308046 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.526 308046 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308046
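The privsep lines above show nova-compute launching a privileged helper through sudo + nova-rootwrap and then dropping to the small capability set CAP_DAC_OVERRIDE|CAP_NET_ADMIN. A sketch of how such a context is typically declared with oslo.privsep; the names below are illustrative, the real context lives in vif_plug_ovs.privsep:

    from oslo_privsep import capabilities, priv_context

    vif_plug = priv_context.PrivContext(
        "vif_plug_ovs",
        cfg_section="vif_plug_ovs_privileged",
        pypath=__name__ + ".vif_plug",
        capabilities=[capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_DAC_OVERRIDE],
    )

    @vif_plug.entrypoint
    def create_tap_device(name):
        # Runs inside the privsep daemon that the journal shows being
        # spawned via rootwrap, holding only the capabilities listed above.
        ...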
Dec 02 10:03:38 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:03:38 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/84271393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.876 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
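The "Running cmd (subprocess): ceph df ..." / "returned: 0 in 0.464s" pair is oslo.concurrency's processutils wrapper around the ceph CLI. A roughly equivalent call, assuming the same ceph binary, keyring and conf are present on the host:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
        check_exit_code=True)  # raises ProcessExecutionError on non-zero rc
    print(out)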
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.883 281049 DEBUG nova.compute.provider_tree [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.889 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.890 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31de197b-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.892 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31de197b-ef, col_values=(('external_ids', {'iface-id': '31de197b-ef56-4d2a-9fa2-293715a60004', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8f:bb:bd', 'vm-uuid': '63092ab0-9432-4c74-933e-e9d5428e6162'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
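The two ovsdbapp transaction lines above (AddPortCommand followed by DbSetCommand in the same txn) correspond to something like the following ovsdbapp usage. The connection wiring is an assumption for the sketch; the bridge, port and external_ids values are taken from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        "iface-id": "31de197b-ef56-4d2a-9fa2-293715a60004",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:8f:bb:bd",
        "vm-uuid": "63092ab0-9432-4c74-933e-e9d5428e6162",
    }
    # Both commands commit in one ovsdb transaction, mirroring the
    # "Running txn n=1 command(idx=0)/(idx=1)" lines above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap31de197b-ef", may_exist=True))
        txn.add(api.db_set("Interface", "tap31de197b-ef",
                           ("external_ids", external_ids)))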
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.930 281049 DEBUG nova.scheduler.client.report [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.937 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.940 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.941 281049 INFO os_vif [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef')
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.942 281049 DEBUG nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.943 281049 DEBUG nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6m2ihysk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='63092ab0-9432-4c74-933e-e9d5428e6162',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Dec 02 10:03:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v118: 177 pgs: 177 active+clean; 304 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 518 KiB/s rd, 2.6 MiB/s wr, 122 op/s
Dec 02 10:03:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:38.967 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.077 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.077 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquired lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.078 281049 DEBUG nova.network.neutron [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 02 10:03:39 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/84271393' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.153 281049 DEBUG nova.network.neutron [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.568 281049 DEBUG nova.network.neutron [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.585 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Releasing lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.587 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.588 281049 INFO nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating image(s)
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.628 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.633 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.684 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.725 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.730 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "5233e139d3cbedb726dc33eeee1a17df7ea669b9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.731 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "5233e139d3cbedb726dc33eeee1a17df7ea669b9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.796 281049 DEBUG nova.virt.libvirt.imagebackend [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Image locations are: [{'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.876 281049 DEBUG nova.virt.libvirt.imagebackend [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Selected location: {'url': 'rbd://c7c8e171-a193-56fb-95fa-8879fcfa7074/images/c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 02 10:03:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:39.877 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] cloning images/c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0@snap to None/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 02 10:03:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:40.081 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "5233e139d3cbedb726dc33eeee1a17df7ea669b9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:40 np0005541914.localdomain ceph-mon[301710]: pgmap v118: 177 pgs: 177 active+clean; 304 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 518 KiB/s rd, 2.6 MiB/s wr, 122 op/s
Dec 02 10:03:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:40.305 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'migration_context' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:03:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:40.397 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] flattening vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
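The rbd_utils clone/flatten pair above ("cloning images/...@snap to None/..._disk", then "flattening vms/..._disk") maps onto the python rados/rbd bindings roughly as follows. Pool and image names are from the log; connection handling is simplified and error handling omitted:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    src = cluster.open_ioctx("images")   # Glance pool
    dst = cluster.open_ioctx("vms")      # Nova ephemeral pool

    # Clone the (protected) image snapshot into the vms pool ...
    rbd.RBD().clone(src, "c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0", "snap",
                    dst, "268e09a3-7abe-4037-a14a-068e7b8a78fb_disk")

    # ... then flatten the clone so the unshelved disk no longer depends on
    # the image snapshot, which is what the later "flattened successfully
    # while unshelving instance" line reports.
    with rbd.Image(dst, "268e09a3-7abe-4037-a14a-068e7b8a78fb_disk") as image:
        image.flatten()

    src.close()
    dst.close()
    cluster.shutdown()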
Dec 02 10:03:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v119: 177 pgs: 177 active+clean; 304 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 509 KiB/s rd, 2.5 MiB/s wr, 120 op/s
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.286 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.410 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Image rbd:vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.411 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.412 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Ensure instance console log exists: /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.412 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.413 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.413 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.416 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-02T10:02:58Z,direct_url=<?>,disk_format='raw',id=c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-2084001492-shelved',owner='09cae3217c5e430b8dbe17828669a978',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-02T10:03:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'image_id': 'd85e840d-fa56-497b-b5bd-b49584d3e97a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.420 281049 WARNING nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.422 281049 DEBUG nova.virt.libvirt.host [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.423 281049 DEBUG nova.virt.libvirt.host [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.424 281049 DEBUG nova.virt.libvirt.host [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.425 281049 DEBUG nova.virt.libvirt.host [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.425 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.426 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T10:01:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='82beb986-6d20-42dc-b738-1cef87dee30f',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-02T10:02:58Z,direct_url=<?>,disk_format='raw',id=c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-2084001492-shelved',owner='09cae3217c5e430b8dbe17828669a978',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-02T10:03:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.427 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.428 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.428 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.429 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.429 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.430 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.430 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.431 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.431 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.431 281049 DEBUG nova.virt.hardware [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
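The nova.virt.hardware DEBUG lines above enumerate sockets/cores/threads combinations that exactly consume the flavor's vCPUs within the 65536 per-dimension limits. A simplified stand-in for that search (not nova's actual implementation), which reproduces the single 1:1:1 result for the 1-vCPU flavor in the log:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate every (sockets, cores, threads) triple whose product
        # equals the vCPU count and stays within the given maxima.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        topos.append((s, c, t))
        return topos

    # Matches "Got 1 possible topologies" and
    # "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".
    print(possible_topologies(1))  # [(1, 1, 1)]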
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.432 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.460 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:03:41 np0005541914.localdomain ceph-mon[301710]: pgmap v119: 177 pgs: 177 active+clean; 304 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 509 KiB/s rd, 2.5 MiB/s wr, 120 op/s
Dec 02 10:03:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:03:41 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2234913475' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:03:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:41.963 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.003 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.008 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.033 281049 DEBUG nova.network.neutron [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Port 31de197b-ef56-4d2a-9fa2-293715a60004 updated with migration profile {'migrating_to': 'np0005541914.localdomain'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.036 281049 DEBUG nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp6m2ihysk',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='63092ab0-9432-4c74-933e-e9d5428e6162',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Dec 02 10:03:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:03:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:03:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:03:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:03:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:03:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:03:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:03:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:03:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:03:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
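The exporter errors above all stem from one condition: it cannot find the OVS/OVN control sockets it probes on this node (ovn-northd does not run here, and no datapath answers the dpif-netdev calls). A quick manual check is to look for the `.ctl` files the daemons create; the directories below are assumed defaults, not paths taken from this log.

```python
# Hedged sketch: look for the unix control sockets ovs-appctl/ovn-appctl would
# use. Directory locations are assumed defaults, not read from this log.
import glob

CANDIDATE_DIRS = ["/var/run/openvswitch", "/var/run/ovn"]  # assumed defaults

def find_control_sockets():
    hits = []
    for d in CANDIDATE_DIRS:
        hits.extend(glob.glob(f"{d}/*.ctl"))
    return hits

if __name__ == "__main__":
    socks = find_control_sockets()
    if not socks:
        print("no control socket files found")  # matches the exporter's complaint
    for s in socks:
        print(s)
```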
Dec 02 10:03:42 np0005541914.localdomain sshd[308328]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:03:42 np0005541914.localdomain sshd[308328]: Accepted publickey for nova from 172.17.0.107 port 35950 ssh2: ECDSA SHA256:F9h/iD/7DLBkMy7oU5JeQ80dnSC7auKWKHT/OdSB0Bo
Dec 02 10:03:42 np0005541914.localdomain systemd[1]: Created slice User Slice of UID 42436.
Dec 02 10:03:42 np0005541914.localdomain systemd[1]: Starting User Runtime Directory /run/user/42436...
Dec 02 10:03:42 np0005541914.localdomain systemd-logind[760]: New session 72 of user nova.
Dec 02 10:03:42 np0005541914.localdomain systemd[1]: Finished User Runtime Directory /run/user/42436.
Dec 02 10:03:42 np0005541914.localdomain systemd[1]: Starting User Manager for UID 42436...
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: pam_unix(systemd-user:session): session opened for user nova(uid=42436) by (uid=0)
Dec 02 10:03:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:03:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:03:42 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/91978558' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.469 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.472 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'pci_devices' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.492 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] End _get_guest_xml xml=<domain type="kvm">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   <uuid>268e09a3-7abe-4037-a14a-068e7b8a78fb</uuid>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   <name>instance-00000006</name>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   <memory>131072</memory>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   <vcpu>1</vcpu>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   <metadata>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-2084001492</nova:name>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <nova:creationTime>2025-12-02 10:03:41</nova:creationTime>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <nova:flavor name="m1.nano">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <nova:memory>128</nova:memory>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <nova:disk>1</nova:disk>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <nova:swap>0</nova:swap>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <nova:ephemeral>0</nova:ephemeral>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <nova:vcpus>1</nova:vcpus>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       </nova:flavor>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <nova:owner>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <nova:user uuid="96d084f3c3184bf4ac7b9635139dd4aa">tempest-UnshelveToHostMultiNodesTest-557689334-project-member</nova:user>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <nova:project uuid="09cae3217c5e430b8dbe17828669a978">tempest-UnshelveToHostMultiNodesTest-557689334</nova:project>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       </nova:owner>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <nova:root type="image" uuid="c6f7f1b0-6018-4e6f-a628-8d5a24dbbfd0"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <nova:ports/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     </nova:instance>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   </metadata>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   <sysinfo type="smbios">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <system>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <entry name="manufacturer">RDO</entry>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <entry name="product">OpenStack Compute</entry>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <entry name="serial">268e09a3-7abe-4037-a14a-068e7b8a78fb</entry>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <entry name="uuid">268e09a3-7abe-4037-a14a-068e7b8a78fb</entry>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <entry name="family">Virtual Machine</entry>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     </system>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   </sysinfo>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   <os>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <boot dev="hd"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <smbios mode="sysinfo"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   </os>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   <features>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <acpi/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <apic/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <vmcoreinfo/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   </features>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   <clock offset="utc">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <timer name="pit" tickpolicy="delay"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <timer name="hpet" present="no"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   </clock>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   <cpu mode="host-model" match="exact">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <topology sockets="1" cores="1" threads="1"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   </cpu>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   <devices>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <disk type="network" device="disk">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <driver type="raw" cache="none"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <source protocol="rbd" name="vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.103" port="6789"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.104" port="6789"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.105" port="6789"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       </source>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <auth username="openstack">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <secret type="ceph" uuid="c7c8e171-a193-56fb-95fa-8879fcfa7074"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       </auth>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <target dev="vda" bus="virtio"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <disk type="network" device="cdrom">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <driver type="raw" cache="none"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <source protocol="rbd" name="vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.103" port="6789"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.104" port="6789"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.105" port="6789"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       </source>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <auth username="openstack">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:         <secret type="ceph" uuid="c7c8e171-a193-56fb-95fa-8879fcfa7074"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       </auth>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <target dev="sda" bus="sata"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <serial type="pty">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <log file="/var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/console.log" append="off"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     </serial>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <video>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <model type="virtio"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     </video>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <input type="tablet" bus="usb"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <input type="keyboard" bus="usb"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <rng model="virtio">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <backend model="random">/dev/urandom</backend>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     </rng>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <controller type="usb" index="0"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     <memballoon model="virtio">
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:       <stats period="10"/>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:     </memballoon>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:   </devices>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: </domain>
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
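The XML dump that ends here is the guest definition nova hands to libvirt while unshelving instance-00000006. A hedged sketch of how such a document can be launched through the libvirt Python bindings; nova's actual spawn path does much more (volumes, vifs, event waiting), and the file path below is hypothetical.

```python
# Hedged sketch: boot a transient guest from a prebuilt domain XML string via
# python-libvirt. Only the final createXML step is shown; nova does far more.
import libvirt

def launch(xml_path="/tmp/instance-00000006.xml"):  # hypothetical path
    with open(xml_path) as f:
        xml = f.read()
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.createXML(xml, 0)  # transient domain, started immediately
        print("started", dom.name(), "uuid", dom.UUIDString())
    finally:
        conn.close()

if __name__ == "__main__":
    launch()
```

The systemd lines a few entries later ("Started Virtual Machine qemu-3-instance-00000006") are the machined-side confirmation of exactly this step.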
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Queued start job for default target Main User Target.
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Created slice User Application Slice.
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Started Daily Cleanup of User's Temporary Directories.
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Reached target Paths.
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Reached target Timers.
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Starting D-Bus User Message Bus Socket...
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Starting Create User's Volatile Files and Directories...
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Listening on D-Bus User Message Bus Socket.
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Finished Create User's Volatile Files and Directories.
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Reached target Sockets.
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Reached target Basic System.
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Reached target Main User Target.
Dec 02 10:03:42 np0005541914.localdomain systemd[308350]: Startup finished in 174ms.
Dec 02 10:03:42 np0005541914.localdomain systemd[1]: Started User Manager for UID 42436.
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.554 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.555 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.556 281049 INFO nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Using config drive
Dec 02 10:03:42 np0005541914.localdomain systemd[1]: Started Session 72 of User nova.
Dec 02 10:03:42 np0005541914.localdomain sshd[308328]: pam_unix(sshd:session): session opened for user nova(uid=42436) by (uid=0)
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.592 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.626 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.710 281049 DEBUG nova.objects.instance [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lazy-loading 'keypairs' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:03:42 np0005541914.localdomain kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 02 10:03:42 np0005541914.localdomain kernel: device tap31de197b-ef entered promiscuous mode
Dec 02 10:03:42 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669822.7789] manager: (tap31de197b-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/16)
Dec 02 10:03:42 np0005541914.localdomain systemd-udevd[308403]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.798 281049 INFO nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Creating config drive at /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.803 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxu8ptwds execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:03:42 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:42Z|00045|binding|INFO|Claiming lport 31de197b-ef56-4d2a-9fa2-293715a60004 for this additional chassis.
Dec 02 10:03:42 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:42Z|00046|binding|INFO|31de197b-ef56-4d2a-9fa2-293715a60004: Claiming fa:16:3e:8f:bb:bd 10.100.0.4
Dec 02 10:03:42 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:42Z|00047|binding|INFO|Claiming lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 for this additional chassis.
Dec 02 10:03:42 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:42Z|00048|binding|INFO|40590dd1-9250-4409-a2d0-cd4f4774bfc8: Claiming fa:16:3e:51:01:78 19.80.0.123
Dec 02 10:03:42 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2234913475' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:03:42 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/91978558' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:03:42 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669822.8423] device (tap31de197b-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 02 10:03:42 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669822.8430] device (tap31de197b-ef): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.849 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:42 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:42Z|00049|binding|INFO|Setting lport 31de197b-ef56-4d2a-9fa2-293715a60004 ovn-installed in OVS
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.855 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.861 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:42 np0005541914.localdomain systemd-machined[202765]: New machine qemu-2-instance-00000007.
Dec 02 10:03:42 np0005541914.localdomain systemd[1]: Started Virtual Machine qemu-2-instance-00000007.
Dec 02 10:03:42 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v120: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 4.3 MiB/s rd, 6.0 MiB/s wr, 178 op/s
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.959 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxu8ptwds" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:03:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:42.995 281049 DEBUG nova.storage.rbd_utils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] rbd image 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.000 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.169 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764669823.169287, 63092ab0-9432-4c74-933e-e9d5428e6162 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.171 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] VM Started (Lifecycle Event)
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.199 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.210 281049 DEBUG oslo_concurrency.processutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config 268e09a3-7abe-4037-a14a-068e7b8a78fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.211 281049 INFO nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deleting local config drive /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb/disk.config because it was imported into RBD.
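The three steps logged just above form one unit: build the config-drive ISO with mkisofs, import it into the RBD "vms" pool, then delete the local copy. A sketch reproducing them with the exact commands from the log; only the staging directory argument is hypothetical (the log used a throwaway /tmp path).

```python
# Hedged sketch reproducing the logged sequence: mkisofs -> rbd import -> remove
# local ISO. Paths, pool and image names are copied verbatim from the log lines.
import os
import subprocess

INSTANCE = "268e09a3-7abe-4037-a14a-068e7b8a78fb"
ISO = f"/var/lib/nova/instances/{INSTANCE}/disk.config"

def build_and_import_config_drive(staging_dir):
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", ISO, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", staging_dir],
        check=True)
    subprocess.run(
        ["rbd", "import", "--pool", "vms", ISO, f"{INSTANCE}_disk.config",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.remove(ISO)  # mirrors "Deleting local config drive ... imported into RBD"
```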
Dec 02 10:03:43 np0005541914.localdomain systemd-machined[202765]: New machine qemu-3-instance-00000006.
Dec 02 10:03:43 np0005541914.localdomain systemd[1]: Started Virtual Machine qemu-3-instance-00000006.
Dec 02 10:03:43 np0005541914.localdomain dnsmasq[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/addn_hosts - 0 addresses
Dec 02 10:03:43 np0005541914.localdomain dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/host
Dec 02 10:03:43 np0005541914.localdomain podman[308549]: 2025-12-02 10:03:43.496554525 +0000 UTC m=+0.054981468 container kill c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:03:43 np0005541914.localdomain dnsmasq-dhcp[307582]: read /var/lib/neutron/dhcp/5376d097-2da8-4019-8e01-8b89ed4f41cf/opts
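The dnsmasq re-read above is triggered by the DHCP agent signalling the container (the `podman ... container kill` event two lines up), after which dnsmasq re-reads its addn_hosts/host/opts files. Doing the same by hand is a one-liner; using HUP as the reload signal is the conventional dnsmasq behaviour and an assumption here, not something stated in the log.

```python
# Hedged sketch: ask the dnsmasq container to re-read its host/opts files, the
# way the log's "container kill" event does. HUP as the reload signal is assumed.
import subprocess

subprocess.run(
    ["podman", "kill", "--signal", "HUP",
     "neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf"],
    check=True)
```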
Dec 02 10:03:43 np0005541914.localdomain systemd[1]: tmp-crun.yJlvHA.mount: Deactivated successfully.
Dec 02 10:03:43 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:43.574 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:03:43Z, description=, device_id=88a5a4f4-0c8e-40f7-81a0-9e11da229be3, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ca0b20>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ca0ee0>], id=093d20f1-e161-409e-b4af-0b70203841d0, ip_allocation=immediate, mac_address=fa:16:3e:83:bd:6f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=479, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:03:43Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.588 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764669823.5880399, 268e09a3-7abe-4037-a14a-068e7b8a78fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.588 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Resumed (Lifecycle Event)
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.590 281049 DEBUG nova.compute.manager [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.590 281049 DEBUG nova.virt.libvirt.driver [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.593 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance spawned successfully.
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.622 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.624 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.649 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] During sync_power_state the instance has a pending task (spawning). Skip.
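The integer states in the sync message above ("DB power_state: 4, VM power_state: 1") are nova power_state constants: the database still records the shelved-offloaded instance as SHUTDOWN while the hypervisor already reports it RUNNING, so the sync is skipped because a spawning task is pending. The mapping below is listed from the commonly documented nova.compute.power_state values and should be treated as an assumption rather than something read from this host.

```python
# Hedged sketch: decode the integer power states seen in the log. Values are the
# commonly documented nova.compute.power_state constants (assumed).
POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
               4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

def describe(db_state, vm_state):
    return f"DB says {POWER_STATE[db_state]}, hypervisor says {POWER_STATE[vm_state]}"

print(describe(4, 1))  # the combination logged while the unshelve is still spawning
```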
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.649 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764669823.5901845, 268e09a3-7abe-4037-a14a-068e7b8a78fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.650 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Started (Lifecycle Event)
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.656 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:43 np0005541914.localdomain kernel: device tap955afcdf-dd left promiscuous mode
Dec 02 10:03:43 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:43Z|00050|binding|INFO|Releasing lport 955afcdf-dd99-4cb5-939f-5919590f8e3b from this chassis (sb_readonly=0)
Dec 02 10:03:43 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:43Z|00051|binding|INFO|Setting lport 955afcdf-dd99-4cb5-939f-5919590f8e3b down in Southbound
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.667 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:03:43 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:43.667 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-5376d097-2da8-4019-8e01-8b89ed4f41cf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5376d097-2da8-4019-8e01-8b89ed4f41cf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'edfb5cc295894fc9a8dc307891edb831', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a37dcb8f-9361-4075-bf0e-f19264ce897a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=955afcdf-dd99-4cb5-939f-5919590f8e3b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:03:43 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:43.670 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 955afcdf-dd99-4cb5-939f-5919590f8e3b in datapath 5376d097-2da8-4019-8e01-8b89ed4f41cf unbound from our chassis
Dec 02 10:03:43 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:43.673 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5376d097-2da8-4019-8e01-8b89ed4f41cf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:03:43 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:43.674 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4836133d-c7ce-47e1-b8e6-82e3122f44fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.680 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.681 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.728 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 02 10:03:43 np0005541914.localdomain systemd[1]: tmp-crun.oSKOcD.mount: Deactivated successfully.
Dec 02 10:03:43 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses
Dec 02 10:03:43 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:03:43 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:03:43 np0005541914.localdomain podman[308612]: 2025-12-02 10:03:43.815113434 +0000 UTC m=+0.064361122 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:03:43 np0005541914.localdomain ceph-mon[301710]: pgmap v120: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 4.3 MiB/s rd, 6.0 MiB/s wr, 178 op/s
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.862 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764669823.8617115, 63092ab0-9432-4c74-933e-e9d5428e6162 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.863 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] VM Resumed (Lifecycle Event)
Dec 02 10:03:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:43.973 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:44.014 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:03:44 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:44.094 262347 INFO neutron.agent.dhcp.agent [None req-b596b8e6-a457-42e1-a1fe-aed70f07f94a - - - - - -] DHCP configuration for ports {'093d20f1-e161-409e-b4af-0b70203841d0'} is completed
Dec 02 10:03:44 np0005541914.localdomain sshd[308382]: Received disconnect from 172.17.0.107 port 35950:11: disconnected by user
Dec 02 10:03:44 np0005541914.localdomain sshd[308382]: Disconnected from user nova 172.17.0.107 port 35950
Dec 02 10:03:44 np0005541914.localdomain sshd[308328]: pam_unix(sshd:session): session closed for user nova
Dec 02 10:03:44 np0005541914.localdomain systemd[1]: session-72.scope: Deactivated successfully.
Dec 02 10:03:44 np0005541914.localdomain systemd-logind[760]: Session 72 logged out. Waiting for processes to exit.
Dec 02 10:03:44 np0005541914.localdomain systemd-logind[760]: Removed session 72.
Dec 02 10:03:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:44.164 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 02 10:03:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:44.184 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] During the sync_power process the instance has moved from host np0005541913.localdomain to host np0005541914.localdomain
Dec 02 10:03:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e101 e101: 6 total, 6 up, 6 in
Dec 02 10:03:44 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:44Z|00052|binding|INFO|Claiming lport 31de197b-ef56-4d2a-9fa2-293715a60004 for this chassis.
Dec 02 10:03:44 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:44Z|00053|binding|INFO|31de197b-ef56-4d2a-9fa2-293715a60004: Claiming fa:16:3e:8f:bb:bd 10.100.0.4
Dec 02 10:03:44 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:44Z|00054|binding|INFO|Claiming lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 for this chassis.
Dec 02 10:03:44 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:44Z|00055|binding|INFO|40590dd1-9250-4409-a2d0-cd4f4774bfc8: Claiming fa:16:3e:51:01:78 19.80.0.123
Dec 02 10:03:44 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:44Z|00056|binding|INFO|Setting lport 31de197b-ef56-4d2a-9fa2-293715a60004 up in Southbound
Dec 02 10:03:44 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:44Z|00057|binding|INFO|Setting lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 up in Southbound
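The ovn-controller messages above show the two logical ports moving from "additional chassis" claims (made during pre-migration at 10:03:42) to full claims on this chassis, after which they are set up in the Southbound database. The resulting bindings can be inspected with ovn-sbctl; the command is standard, while running it against the right Southbound DB endpoint is left to the operator.

```python
# Hedged sketch: dump the Southbound Port_Binding rows for the lports named in
# the log, using the standard "ovn-sbctl find" database command.
import subprocess

LPORTS = ["31de197b-ef56-4d2a-9fa2-293715a60004",
          "40590dd1-9250-4409-a2d0-cd4f4774bfc8"]

for lp in LPORTS:
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={lp}"],
        capture_output=True, text=True)
    print(out.stdout or out.stderr)
```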
Dec 02 10:03:44 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:44.912 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:bb:bd 10.100.0.4'], port_security=['fa:16:3e:8f:bb:bd 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-17247491', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '63092ab0-9432-4c74-933e-e9d5428e6162', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-17247491', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '9', 'neutron:security_group_ids': '5c93e274-85ac-42d3-b949-bdb62e6b8c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541913.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c5273a4-e474-4c2c-a95a-a522e1a174bd, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=31de197b-ef56-4d2a-9fa2-293715a60004) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:03:44 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:44.914 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:01:78 19.80.0.123'], port_security=['fa:16:3e:51:01:78 19.80.0.123'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['31de197b-ef56-4d2a-9fa2-293715a60004'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1284966936', 'neutron:cidrs': '19.80.0.123/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3673812c-f461-4e86-831f-b7a7821f4bda', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1284966936', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '3', 'neutron:security_group_ids': '5c93e274-85ac-42d3-b949-bdb62e6b8c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=23ebc33b-05e4-4907-9bc1-7e563b7692f1, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=40590dd1-9250-4409-a2d0-cd4f4774bfc8) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:03:44 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:44.915 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 31de197b-ef56-4d2a-9fa2-293715a60004 in datapath 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f bound to our chassis
Dec 02 10:03:44 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:44.916 159483 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f
Dec 02 10:03:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v122: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 7.2 MiB/s wr, 203 op/s
Dec 02 10:03:44 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:03:44.954 2 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-a1f7258b-8365-4eb2-997c-eb7bece0a428 req-b7b5c7b5-3f24-45f3-b756-f295fdd89115 4ea94a3d730c499a8a661131692645ce 497073c2347a4b2dbbf501873318fbd3 - - default default] This port is not SRIOV, skip binding for port 31de197b-ef56-4d2a-9fa2-293715a60004.
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.056 281049 INFO nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Post operation of migration started
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.203 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquiring lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.203 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquired lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.203 281049 DEBUG nova.network.neutron [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 02 10:03:45 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:45.432 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[53cb8ace-c4fc-4e76-b403-3bba4dee83e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:45 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:45.433 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap62df5f27-c1 in ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 02 10:03:45 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:45.436 262550 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap62df5f27-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 02 10:03:45 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:45.436 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[6b643cb7-7925-4781-a209-8f0229697d93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:45 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:45.438 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d6498375-9df2-4119-a29a-11d28b221e8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:45 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:45.465 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[bb913cc3-414d-418c-9240-b8b1cf44c97f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:45 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:45.481 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[fcf7e578-a6c8-423d-b315-bf1187aafe5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:03:45 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:45.486 159483 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpxpfixk9t/privsep.sock']
Dec 02 10:03:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:03:45 np0005541914.localdomain podman[308640]: 2025-12-02 10:03:45.581281633 +0000 UTC m=+0.082417729 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container)
Dec 02 10:03:45 np0005541914.localdomain podman[308640]: 2025-12-02 10:03:45.592919436 +0000 UTC m=+0.094055532 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.592 281049 DEBUG nova.compute.manager [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:03:45 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.609 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:45 np0005541914.localdomain systemd[1]: tmp-crun.dsaF3o.mount: Deactivated successfully.
Dec 02 10:03:45 np0005541914.localdomain podman[308639]: 2025-12-02 10:03:45.646323935 +0000 UTC m=+0.150351789 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:03:45 np0005541914.localdomain podman[308639]: 2025-12-02 10:03:45.666858778 +0000 UTC m=+0.170886582 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:03:45 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.680 281049 DEBUG oslo_concurrency.lockutils [None req-9f157956-8c3a-43fd-9a59-5e7984b47953 1cb5f3cd655948d69eadad12de0d4055 2d58bf4832b74708b28917a57e00803f - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 8.426s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.846 281049 DEBUG nova.network.neutron [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Updating instance_info_cache with network_info: [{"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.869 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Releasing lock "refresh_cache-63092ab0-9432-4c74-933e-e9d5428e6162" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.888 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.890 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.890 281049 DEBUG oslo_concurrency.lockutils [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.896 281049 INFO nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Sending announce-self command to QEMU monitor. Attempt 1 of 3
Dec 02 10:03:45 np0005541914.localdomain virtqemud[228953]: Domain id=2 name='instance-00000007' uuid=63092ab0-9432-4c74-933e-e9d5428e6162 is tainted: custom-monitor
Dec 02 10:03:45 np0005541914.localdomain ceph-mon[301710]: osdmap e101: 6 total, 6 up, 6 in
Dec 02 10:03:45 np0005541914.localdomain ceph-mon[301710]: pgmap v122: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 7.2 MiB/s wr, 203 op/s
Dec 02 10:03:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:45.936 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:46.216 159483 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 02 10:03:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:46.218 159483 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpxpfixk9t/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 02 10:03:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:46.085 308685 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 02 10:03:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:46.090 308685 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 02 10:03:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:46.094 308685 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 02 10:03:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:46.094 308685 INFO oslo.privsep.daemon [-] privsep daemon running as pid 308685
Dec 02 10:03:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:46.221 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[f4ac4a98-fc0a-4833-b943-87502f26bebd]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:46.332 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:46.507 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:46.508 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:46.508 281049 INFO nova.compute.manager [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Shelving
Dec 02 10:03:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:46.535 281049 DEBUG nova.virt.libvirt.driver [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Dec 02 10:03:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:46.699 308685 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:46.699 308685 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:46.699 308685 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:46.905 281049 INFO nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Sending announce-self command to QEMU monitor. Attempt 2 of 3
Dec 02 10:03:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v123: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 7.2 MiB/s wr, 203 op/s
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.205 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[3e558ab3-0241-4907-90db-57fc04284a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.226 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[54bbfa4e-f5ad-469d-9d22-e510872699a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:47 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669827.2279] manager: (tap62df5f27-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/17)
Dec 02 10:03:47 np0005541914.localdomain systemd-udevd[308695]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.265 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[8373eb8c-b678-41e0-9b5c-f01fb67db209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.269 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[f622a358-6fcc-491b-8432-94725d07c0cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:47 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669827.2923] device (tap62df5f27-c0): carrier: link connected
Dec 02 10:03:47 np0005541914.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap62df5f27-c1: link becomes ready
Dec 02 10:03:47 np0005541914.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap62df5f27-c0: link becomes ready
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.300 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[1995fddd-0162-4bcf-ae7f-712cf6327251]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.325 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[afd2fbe2-e716-4e0c-b461-2240adaf445d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62df5f27-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:73:df:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1199145, 'reachable_time': 40551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308715, 'error': None, 'target': 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.347 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac4a4d5-7efe-49f1-b050-8d5e8457204b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe73:df9c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1199145, 'tstamp': 1199145}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308716, 'error': None, 'target': 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.369 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4055d988-0cba-44a2-8af9-dc566865bf18]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap62df5f27-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:73:df:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1199145, 'reachable_time': 40551, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308717, 'error': None, 'target': 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.403 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[991849b5-b088-472a-892c-6daf5bf1513e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.464 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[47e410bf-f0f5-4349-9bc9-073a2796ff05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.467 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62df5f27-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.467 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.468 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62df5f27-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:03:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:47.523 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:47 np0005541914.localdomain kernel: device tap62df5f27-c0 entered promiscuous mode
Dec 02 10:03:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:47.527 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.528 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap62df5f27-c0, col_values=(('external_ids', {'iface-id': 'ea045be8-e121-4ff5-bb82-2a757b7ce736'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:03:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:47.529 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:47 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:47Z|00058|binding|INFO|Releasing lport ea045be8-e121-4ff5-bb82-2a757b7ce736 from this chassis (sb_readonly=0)
Dec 02 10:03:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:47.537 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:47.540 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.542 159483 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.543 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[12e9c2dc-58e7-4633-a92b-7798d1659aa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.545 159483 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: global
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     log         /dev/log local0 debug
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     log-tag     haproxy-metadata-proxy-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     user        root
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     group       root
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     maxconn     1024
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     pidfile     /var/lib/neutron/external/pids/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f.pid.haproxy
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     daemon
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: defaults
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     log global
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     mode http
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     option httplog
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     option dontlognull
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     option http-server-close
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     option forwardfor
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     retries                 3
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout http-request    30s
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout connect         30s
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout client          32s
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout server          32s
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout http-keep-alive 30s
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: listen listener
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     bind 169.254.169.254:80
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     server metadata /var/lib/neutron/metadata_proxy
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:     http-request add-header X-OVN-Network-ID 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 02 10:03:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:47.546 159483 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'env', 'PROCESS_TAG=haproxy-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/62df5f27-c8d9-4d79-9ad6-2f32e63bf47f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 02 10:03:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:47.912 281049 INFO nova.virt.libvirt.driver [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Sending announce-self command to QEMU monitor. Attempt 3 of 3
Dec 02 10:03:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:47.917 281049 DEBUG nova.compute.manager [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:03:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:47.936 281049 DEBUG nova.objects.instance [None req-a1f7258b-8365-4eb2-997c-eb7bece0a428 0f34e0319cfd4e2680d0e40bb8d8500f dfb2b4e8d0aa49b0b34376cadc0ea911 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Dec 02 10:03:47 np0005541914.localdomain podman[308750]: 
Dec 02 10:03:47 np0005541914.localdomain podman[308750]: 2025-12-02 10:03:47.977937089 +0000 UTC m=+0.086968477 container create aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:03:48 np0005541914.localdomain ceph-mon[301710]: pgmap v123: 177 pgs: 177 active+clean; 383 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 7.2 MiB/s wr, 203 op/s
Dec 02 10:03:48 np0005541914.localdomain podman[308750]: 2025-12-02 10:03:47.92552956 +0000 UTC m=+0.034560908 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 02 10:03:48 np0005541914.localdomain systemd[1]: Started libpod-conmon-aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485.scope.
Dec 02 10:03:48 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:03:48 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ae3f6c85df8190497f8b25c7653b5a855ddd3a7711f745ce3c2dd3a00d6dcd7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:03:48 np0005541914.localdomain podman[308750]: 2025-12-02 10:03:48.084340206 +0000 UTC m=+0.193371564 container init aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 02 10:03:48 np0005541914.localdomain podman[308750]: 2025-12-02 10:03:48.092163152 +0000 UTC m=+0.201194500 container start aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:03:48 np0005541914.localdomain neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[308764]: [NOTICE]   (308768) : New worker (308770) forked
Dec 02 10:03:48 np0005541914.localdomain neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[308764]: [NOTICE]   (308768) : Loading success.
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.147 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 40590dd1-9250-4409-a2d0-cd4f4774bfc8 in datapath 3673812c-f461-4e86-831f-b7a7821f4bda unbound from our chassis
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.152 159483 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3673812c-f461-4e86-831f-b7a7821f4bda
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.163 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[80c012f6-e04d-46c9-8da3-ccfb64dcab5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.164 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3673812c-f1 in ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.167 262550 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3673812c-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.168 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[04a72069-ce94-4d4d-8f0d-0d83c4a6acd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.169 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1648ef10-0358-4276-abd1-456e87b66116]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.189 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[3786c921-124b-4e03-b222-aab41f658bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.203 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[aa39951c-2043-45f9-b832-d0b348255b06]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.230 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[cd276ffc-37df-4d68-a5ac-86bc046736d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain systemd-udevd[308709]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:03:48 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669828.2418] manager: (tap3673812c-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/18)
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.243 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[683abe99-1a95-4a53-91d5-f19712101fbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.274 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c27e57-45e7-43c4-927f-043f720fda61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.280 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[14a225f8-08f9-4183-a5df-984138ca48c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap3673812c-f1: link becomes ready
Dec 02 10:03:48 np0005541914.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap3673812c-f0: link becomes ready
Dec 02 10:03:48 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669828.3003] device (tap3673812c-f0): carrier: link connected
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.303 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[955941e4-8cc7-4f18-819c-a26834d98366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.318 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e03b6054-4904-4a22-bf06-2cc827321138]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3673812c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e1:13:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1199246, 'reachable_time': 37960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308790, 'error': None, 'target': 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.332 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[3eca2dfc-02fb-4318-8eca-1c534ca076c3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:13c5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1199246, 'tstamp': 1199246}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308791, 'error': None, 'target': 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.350 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1f86e8fe-136d-45cd-87ea-a819a8d4a148]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3673812c-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:e1:13:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1199246, 'reachable_time': 37960, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308792, 'error': None, 'target': 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.380 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[edc2c947-879f-4652-b0ea-9b06eb79eabb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.433 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4502348a-f214-4042-9006-46302d753500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.435 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3673812c-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.436 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.437 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3673812c-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:03:48 np0005541914.localdomain kernel: device tap3673812c-f0 entered promiscuous mode
Dec 02 10:03:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:48.440 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:48.443 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.445 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3673812c-f0, col_values=(('external_ids', {'iface-id': 'ba8757f7-1076-4bc0-8968-1084ffa48766'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:03:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:48.447 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:48 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:48Z|00059|binding|INFO|Releasing lport ba8757f7-1076-4bc0-8968-1084ffa48766 from this chassis (sb_readonly=0)
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.450 159483 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3673812c-f461-4e86-831f-b7a7821f4bda.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3673812c-f461-4e86-831f-b7a7821f4bda.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.452 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d0032f69-4d11-4e7c-ab11-0808b4f0a79b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.453 159483 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: global
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     log         /dev/log local0 debug
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     log-tag     haproxy-metadata-proxy-3673812c-f461-4e86-831f-b7a7821f4bda
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     user        root
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     group       root
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     maxconn     1024
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     pidfile     /var/lib/neutron/external/pids/3673812c-f461-4e86-831f-b7a7821f4bda.pid.haproxy
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     daemon
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: defaults
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     log global
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     mode http
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     option httplog
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     option dontlognull
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     option http-server-close
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     option forwardfor
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     retries                 3
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout http-request    30s
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout connect         30s
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout client          32s
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout server          32s
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout http-keep-alive 30s
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: listen listener
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     bind 169.254.169.254:80
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     server metadata /var/lib/neutron/metadata_proxy
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:     http-request add-header X-OVN-Network-ID 3673812c-f461-4e86-831f-b7a7821f4bda
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 02 10:03:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:48.454 159483 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'env', 'PROCESS_TAG=haproxy-3673812c-f461-4e86-831f-b7a7821f4bda', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3673812c-f461-4e86-831f-b7a7821f4bda.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
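The create_process call above boils down to launching haproxy inside the ovnmeta- namespace with the freshly rendered config. A simplified sketch is shown below; it requires root, and the real agent goes through neutron-rootwrap/privsep rather than invoking the binary directly.

    import subprocess

    NETNS = 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda'
    CFG = '/var/lib/neutron/ovn-metadata-proxy/3673812c-f461-4e86-831f-b7a7821f4bda.conf'

    subprocess.run(
        ['ip', 'netns', 'exec', NETNS,
         'env', 'PROCESS_TAG=haproxy-3673812c-f461-4e86-831f-b7a7821f4bda',
         'haproxy', '-f', CFG],
        check=True)  # haproxy backgrounds itself ('daemon' is set in the config)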
Dec 02 10:03:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:48.457 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:03:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:03:48 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:48Z|00060|binding|INFO|Releasing lport ea045be8-e121-4ff5-bb82-2a757b7ce736 from this chassis (sb_readonly=0)
Dec 02 10:03:48 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:48Z|00061|binding|INFO|Releasing lport ba8757f7-1076-4bc0-8968-1084ffa48766 from this chassis (sb_readonly=0)
Dec 02 10:03:48 np0005541914.localdomain podman[308834]: 2025-12-02 10:03:48.866974006 +0000 UTC m=+0.096186137 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:03:48 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses
Dec 02 10:03:48 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:03:48 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
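The "container kill" event followed by dnsmasq re-reading addn_hosts/host/opts is consistent with the DHCP agent signalling dnsmasq to reload its allocations. The signal itself is not shown in the log, but dnsmasq reloads on SIGHUP and exits on SIGTERM (as it does a few lines further down), so HUP is the likely one here. An illustrative equivalent, assuming the container name from the event above:

    import subprocess

    subprocess.run(
        ['podman', 'kill', '--signal', 'HUP',
         'neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04'],
        check=True)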
Dec 02 10:03:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:48.912 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:48 np0005541914.localdomain podman[308849]: 2025-12-02 10:03:48.947426466 +0000 UTC m=+0.105723907 container create 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:03:48 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v124: 177 pgs: 177 active+clean; 304 MiB data, 1016 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 214 op/s
Dec 02 10:03:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:48.974 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:48 np0005541914.localdomain systemd[1]: Started libpod-conmon-45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6.scope.
Dec 02 10:03:49 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:03:49 np0005541914.localdomain podman[308849]: 2025-12-02 10:03:48.895032707 +0000 UTC m=+0.053330158 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 02 10:03:49 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2dd055fe24137a82f7d8c9b97ce73c529332056d65bf867e145475b505e1900/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:03:49 np0005541914.localdomain podman[308849]: 2025-12-02 10:03:49.027554644 +0000 UTC m=+0.185852085 container init 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:03:49 np0005541914.localdomain podman[308849]: 2025-12-02 10:03:49.039285591 +0000 UTC m=+0.197583032 container start 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 02 10:03:49 np0005541914.localdomain neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [NOTICE]   (308877) : New worker (308879) forked
Dec 02 10:03:49 np0005541914.localdomain neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [NOTICE]   (308877) : Loading success.
Dec 02 10:03:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:49.201 281049 DEBUG nova.compute.manager [req-f7f1eb25-16d3-40ff-88e0-d3be382ffcd4 req-89c58e69-ed85-4d9e-8c6a-b631e6caa9b2 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:03:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:49.202 281049 DEBUG oslo_concurrency.lockutils [req-f7f1eb25-16d3-40ff-88e0-d3be382ffcd4 req-89c58e69-ed85-4d9e-8c6a-b631e6caa9b2 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:49.202 281049 DEBUG oslo_concurrency.lockutils [req-f7f1eb25-16d3-40ff-88e0-d3be382ffcd4 req-89c58e69-ed85-4d9e-8c6a-b631e6caa9b2 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:49.202 281049 DEBUG oslo_concurrency.lockutils [req-f7f1eb25-16d3-40ff-88e0-d3be382ffcd4 req-89c58e69-ed85-4d9e-8c6a-b631e6caa9b2 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:49.202 281049 DEBUG nova.compute.manager [req-f7f1eb25-16d3-40ff-88e0-d3be382ffcd4 req-89c58e69-ed85-4d9e-8c6a-b631e6caa9b2 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:03:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:49.203 281049 WARNING nova.compute.manager [req-f7f1eb25-16d3-40ff-88e0-d3be382ffcd4 req-89c58e69-ed85-4d9e-8c6a-b631e6caa9b2 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received unexpected event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with vm_state active and task_state None.
Dec 02 10:03:49 np0005541914.localdomain dnsmasq[307582]: exiting on receipt of SIGTERM
Dec 02 10:03:49 np0005541914.localdomain podman[308905]: 2025-12-02 10:03:49.664639321 +0000 UTC m=+0.069934921 container kill c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:03:49 np0005541914.localdomain systemd[1]: libpod-c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367.scope: Deactivated successfully.
Dec 02 10:03:49 np0005541914.localdomain podman[308919]: 2025-12-02 10:03:49.742362547 +0000 UTC m=+0.060691161 container died c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 02 10:03:49 np0005541914.localdomain podman[308919]: 2025-12-02 10:03:49.772827321 +0000 UTC m=+0.091155875 container cleanup c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 02 10:03:49 np0005541914.localdomain systemd[1]: libpod-conmon-c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367.scope: Deactivated successfully.
Dec 02 10:03:49 np0005541914.localdomain podman[308920]: 2025-12-02 10:03:49.844479544 +0000 UTC m=+0.159081605 container remove c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5376d097-2da8-4019-8e01-8b89ed4f41cf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:03:49 np0005541914.localdomain systemd[1]: tmp-crun.27am72.mount: Deactivated successfully.
Dec 02 10:03:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-fb1e695aeb824d8fe3d96595a317c2bff704005f07a946fa3818e91a4a7fa6e0-merged.mount: Deactivated successfully.
Dec 02 10:03:49 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c3e40baa6efc7a222839910b4f686c83709ef08a70aec6810fe0e450c9165367-userdata-shm.mount: Deactivated successfully.
Dec 02 10:03:50 np0005541914.localdomain ceph-mon[301710]: pgmap v124: 177 pgs: 177 active+clean; 304 MiB data, 1016 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 214 op/s
Dec 02 10:03:50 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1033273436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:50 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3031907273' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:50 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2d5376d097\x2d2da8\x2d4019\x2d8e01\x2d8b89ed4f41cf.mount: Deactivated successfully.
Dec 02 10:03:50 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:50.121 262347 INFO neutron.agent.dhcp.agent [None req-ba36a1e8-81e8-4404-a121-231e9de97bd4 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.166 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.167 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.167 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.168 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.169 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.171 281049 INFO nova.compute.manager [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Terminating instance
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.172 281049 DEBUG nova.compute.manager [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 02 10:03:50 np0005541914.localdomain kernel: device tap31de197b-ef left promiscuous mode
Dec 02 10:03:50 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669830.2192] device (tap31de197b-ef): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.221 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:50 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:50Z|00062|binding|INFO|Releasing lport 31de197b-ef56-4d2a-9fa2-293715a60004 from this chassis (sb_readonly=0)
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.227 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:50 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:50Z|00063|binding|INFO|Setting lport 31de197b-ef56-4d2a-9fa2-293715a60004 down in Southbound
Dec 02 10:03:50 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:50Z|00064|binding|INFO|Releasing lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 from this chassis (sb_readonly=0)
Dec 02 10:03:50 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:50Z|00065|binding|INFO|Setting lport 40590dd1-9250-4409-a2d0-cd4f4774bfc8 down in Southbound
Dec 02 10:03:50 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:50Z|00066|binding|INFO|Removing iface tap31de197b-ef ovn-installed in OVS
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.232 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:50 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:50Z|00067|binding|INFO|Releasing lport ea045be8-e121-4ff5-bb82-2a757b7ce736 from this chassis (sb_readonly=0)
Dec 02 10:03:50 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:03:50Z|00068|binding|INFO|Releasing lport ba8757f7-1076-4bc0-8968-1084ffa48766 from this chassis (sb_readonly=0)
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.243 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8f:bb:bd 10.100.0.4'], port_security=['fa:16:3e:8f:bb:bd 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-17247491', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '63092ab0-9432-4c74-933e-e9d5428e6162', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-17247491', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '12', 'neutron:security_group_ids': '5c93e274-85ac-42d3-b949-bdb62e6b8c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c5273a4-e474-4c2c-a95a-a522e1a174bd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=31de197b-ef56-4d2a-9fa2-293715a60004) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.247 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:51:01:78 19.80.0.123'], port_security=['fa:16:3e:51:01:78 19.80.0.123'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['31de197b-ef56-4d2a-9fa2-293715a60004'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1284966936', 'neutron:cidrs': '19.80.0.123/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3673812c-f461-4e86-831f-b7a7821f4bda', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1284966936', 'neutron:project_id': 'cccbafb2e3c343b2aab51714734bddce', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5c93e274-85ac-42d3-b949-bdb62e6b8c39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=23ebc33b-05e4-4907-9bc1-7e563b7692f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=40590dd1-9250-4409-a2d0-cd4f4774bfc8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
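The two "Matched UPDATE: PortBindingUpdatedEvent" lines above are ovsdbapp row events firing on Port_Binding updates in the OVN southbound DB. A bare-bones sketch of such an event class follows; the class body is illustrative rather than neutron's actual implementation, and in practice it would be registered with the agent's IDL notify handler.

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # fire on 'update' events for any Port_Binding row
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # called once matches() succeeds, as in the log lines above
            print('Port_Binding %s changed' % row.logical_port)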
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.249 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 31de197b-ef56-4d2a-9fa2-293715a60004 in datapath 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f unbound from our chassis
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.257 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.259 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[2b63b0ef-0e75-40d7-8531-748adb2bc0c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.260 159483 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f namespace which is not needed anymore
Dec 02 10:03:50 np0005541914.localdomain systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.267 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:50 np0005541914.localdomain systemd-machined[202765]: Machine qemu-2-instance-00000007 terminated.
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.300 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:50 np0005541914.localdomain neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[308764]: [NOTICE]   (308768) : haproxy version is 2.8.14-c23fe91
Dec 02 10:03:50 np0005541914.localdomain neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[308764]: [NOTICE]   (308768) : path to executable is /usr/sbin/haproxy
Dec 02 10:03:50 np0005541914.localdomain neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[308764]: [ALERT]    (308768) : Current worker (308770) exited with code 143 (Terminated)
Dec 02 10:03:50 np0005541914.localdomain neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f[308764]: [WARNING]  (308768) : All workers exited. Exiting... (0)
Dec 02 10:03:50 np0005541914.localdomain systemd[1]: libpod-aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485.scope: Deactivated successfully.
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.412 281049 INFO nova.virt.libvirt.driver [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Instance destroyed successfully.
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.413 281049 DEBUG nova.objects.instance [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lazy-loading 'resources' on Instance uuid 63092ab0-9432-4c74-933e-e9d5428e6162 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:03:50 np0005541914.localdomain podman[308969]: 2025-12-02 10:03:50.417989333 +0000 UTC m=+0.057970479 container died aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.432 281049 DEBUG nova.virt.libvirt.vif [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-12-02T10:03:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-861747463',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-861747463',id=7,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T10:03:21Z,launched_on='np0005541913.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cccbafb2e3c343b2aab51714734bddce',ramdisk_id='',reservation_id='r-sf2jj0i0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-5814605',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-5814605-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T10:03:47Z,user_data=None,user_id='60f523e6d03743daa3ff6f5bc7122d00',uuid=63092ab0-9432-4c74-933e-e9d5428e6162,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.432 281049 DEBUG nova.network.os_vif_util [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Converting VIF {"id": "31de197b-ef56-4d2a-9fa2-293715a60004", "address": "fa:16:3e:8f:bb:bd", "network": {"id": "62df5f27-c8d9-4d79-9ad6-2f32e63bf47f", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-307256986-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "cccbafb2e3c343b2aab51714734bddce", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31de197b-ef", "ovs_interfaceid": "31de197b-ef56-4d2a-9fa2-293715a60004", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.433 281049 DEBUG nova.network.os_vif_util [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.433 281049 DEBUG os_vif [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.437 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.437 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31de197b-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.439 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.441 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.443 281049 INFO os_vif [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8f:bb:bd,bridge_name='br-int',has_traffic_filtering=True,id=31de197b-ef56-4d2a-9fa2-293715a60004,network=Network(62df5f27-c8d9-4d79-9ad6-2f32e63bf47f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31de197b-ef')
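Nova delegates the actual port removal to os-vif: the VIFOpenVSwitch object logged above is handed to os_vif.unplug(), which deletes tap31de197b-ef from br-int (the DelPortCommand a few lines up). A rough, untested sketch of that call path, with field values copied from the log; the exact set of required fields may differ between os-vif releases.

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load the os-vif plugins ('ovs' among them)

    inst = instance_info.InstanceInfo(
        uuid='63092ab0-9432-4c74-933e-e9d5428e6162',
        name='tempest-LiveAutoBlockMigrationV225Test-server-861747463')

    vif_obj = vif.VIFOpenVSwitch(
        id='31de197b-ef56-4d2a-9fa2-293715a60004',
        address='fa:16:3e:8f:bb:bd',
        vif_name='tap31de197b-ef',
        bridge_name='br-int',
        plugin='ovs',
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='31de197b-ef56-4d2a-9fa2-293715a60004'),
        network=network.Network(id='62df5f27-c8d9-4d79-9ad6-2f32e63bf47f',
                                bridge='br-int'))

    os_vif.unplug(vif_obj, inst)  # removes the tap from br-int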
Dec 02 10:03:50 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:50.454 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:03:50 np0005541914.localdomain podman[308969]: 2025-12-02 10:03:50.45585903 +0000 UTC m=+0.095840116 container cleanup aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 02 10:03:50 np0005541914.localdomain podman[308993]: 2025-12-02 10:03:50.486945593 +0000 UTC m=+0.061724813 container cleanup aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 10:03:50 np0005541914.localdomain systemd[1]: libpod-conmon-aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485.scope: Deactivated successfully.
Dec 02 10:03:50 np0005541914.localdomain podman[309021]: 2025-12-02 10:03:50.536901207 +0000 UTC m=+0.064377342 container remove aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.542 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[efa58c88-7def-41e3-baa6-20d6999b30f4]: (4, ('Tue Dec  2 10:03:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f (aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485)\naab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485\nTue Dec  2 10:03:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f (aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485)\naab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.544 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[172f4331-dd9f-4f9a-b0c8-cd3e226cbe09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.545 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62df5f27-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.548 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:50 np0005541914.localdomain kernel: device tap62df5f27-c0 left promiscuous mode
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.554 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.559 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[024ad253-86d0-45cf-a175-5b3cd975931a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.576 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e4190c2d-037a-4550-9bdc-b431a3ec302a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.578 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[85116c96-893e-4541-9389-59648ae05b6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.591 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[29d969e2-9d75-482c-94ed-10a224ddf94d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1199136, 'reachable_time': 42765, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 
'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309039, 'error': None, 'target': 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.600 159602 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.600 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[822ba53e-a82c-4cfa-bedc-b7323ce74e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
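remove_netns above is neutron's privileged wrapper around deleting the network namespace; underneath it is essentially the operation pyroute2 exposes directly. A minimal sketch, requiring root, with the namespace name taken from the log:

    from pyroute2 import netns

    NS = 'ovnmeta-62df5f27-c8d9-4d79-9ad6-2f32e63bf47f'
    if NS in netns.listnetns():
        netns.remove(NS)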
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.602 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 40590dd1-9250-4409-a2d0-cd4f4774bfc8 in datapath 3673812c-f461-4e86-831f-b7a7821f4bda unbound from our chassis
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.608 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3673812c-f461-4e86-831f-b7a7821f4bda, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.610 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[ed53ef49-54b9-4ec5-b669-daf69239b792]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.612 159483 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda namespace which is not needed anymore
Dec 02 10:03:50 np0005541914.localdomain neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [NOTICE]   (308877) : haproxy version is 2.8.14-c23fe91
Dec 02 10:03:50 np0005541914.localdomain neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [NOTICE]   (308877) : path to executable is /usr/sbin/haproxy
Dec 02 10:03:50 np0005541914.localdomain neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [WARNING]  (308877) : Exiting Master process...
Dec 02 10:03:50 np0005541914.localdomain neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [ALERT]    (308877) : Current worker (308879) exited with code 143 (Terminated)
Dec 02 10:03:50 np0005541914.localdomain neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda[308873]: [WARNING]  (308877) : All workers exited. Exiting... (0)
Dec 02 10:03:50 np0005541914.localdomain systemd[1]: libpod-45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6.scope: Deactivated successfully.
Dec 02 10:03:50 np0005541914.localdomain podman[309058]: 2025-12-02 10:03:50.757491366 +0000 UTC m=+0.059787223 container died 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:03:50 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e102 e102: 6 total, 6 up, 6 in
Dec 02 10:03:50 np0005541914.localdomain podman[309058]: 2025-12-02 10:03:50.791219999 +0000 UTC m=+0.093515796 container cleanup 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:03:50 np0005541914.localdomain podman[309072]: 2025-12-02 10:03:50.821836287 +0000 UTC m=+0.056546445 container cleanup 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 02 10:03:50 np0005541914.localdomain systemd[1]: libpod-conmon-45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6.scope: Deactivated successfully.
Dec 02 10:03:50 np0005541914.localdomain podman[309085]: 2025-12-02 10:03:50.878486334 +0000 UTC m=+0.069090286 container remove 45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.881 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[5f85fcc5-5599-4e7f-91f9-ac195309a115]: (4, ('Tue Dec  2 10:03:50 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda (45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6)\n45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6\nTue Dec  2 10:03:50 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda (45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6)\n45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.883 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a94c5baa-16f3-448b-a29b-ccaf4ff9aebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.884 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3673812c-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.886 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:50 np0005541914.localdomain kernel: device tap3673812c-f0 left promiscuous mode
Dec 02 10:03:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:50.895 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.898 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[106c8d8a-6795-434b-b747-127f5a4fa473]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.912 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[20650ce1-9078-4395-ad67-14f6ff78185c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.913 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1612d7-cf00-4479-a11a-8e736571b0f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.927 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d717fc77-74ca-4fb0-b35c-1c26a417baf6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1199239, 'reachable_time': 22157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309108, 'error': None, 'target': 'ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.929 159602 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 02 10:03:50 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:03:50.929 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[4880a318-4272-4105-a59d-9abc8f7b6be2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
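The privsep replies above close out the teardown of the ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda namespace: a boolean existence check, a link dump from inside the namespace, and finally the remove_netns call from neutron.privileged.agent.linux.ip_lib. As a rough illustration of what that last privileged call amounts to (neutron wraps pyroute2 for this; the error handling below is an assumption, not neutron's exact code):

```python
# Illustrative sketch only: remove a named network namespace roughly the way
# neutron.privileged.agent.linux.ip_lib.remove_netns() does it via pyroute2.
# Needs root, or an equivalent privsep context, to actually run.
import errno
from pyroute2 import netns

def remove_netns(name):
    """Delete the named namespace, treating an already-missing one as success."""
    try:
        netns.remove(name)
    except OSError as exc:
        if exc.errno != errno.ENOENT:   # assumption: only ignore "does not exist"
            raise

remove_netns('ovnmeta-3673812c-f461-4e86-831f-b7a7821f4bda')
```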
Dec 02 10:03:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v126: 177 pgs: 177 active+clean; 304 MiB data, 1016 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 20 KiB/s wr, 152 op/s
Dec 02 10:03:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c2dd055fe24137a82f7d8c9b97ce73c529332056d65bf867e145475b505e1900-merged.mount: Deactivated successfully.
Dec 02 10:03:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45a9aa51387896f86af7827af886be505d05fc4ed3e46069c523e94778226bc6-userdata-shm.mount: Deactivated successfully.
Dec 02 10:03:50 np0005541914.localdomain systemd[1]: run-netns-ovnmeta\x2d3673812c\x2df461\x2d4e86\x2d831f\x2db7a7821f4bda.mount: Deactivated successfully.
Dec 02 10:03:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-6ae3f6c85df8190497f8b25c7653b5a855ddd3a7711f745ce3c2dd3a00d6dcd7-merged.mount: Deactivated successfully.
Dec 02 10:03:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aab282695765cbe494c9ca1119c2bc6691861fe314f464e9ce84c6d9ee1f5485-userdata-shm.mount: Deactivated successfully.
Dec 02 10:03:50 np0005541914.localdomain systemd[1]: run-netns-ovnmeta\x2d62df5f27\x2dc8d9\x2d4d79\x2d9ad6\x2d2f32e63bf47f.mount: Deactivated successfully.
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.037 281049 INFO nova.virt.libvirt.driver [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Deleting instance files /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162_del
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.037 281049 INFO nova.virt.libvirt.driver [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Deletion of /var/lib/nova/instances/63092ab0-9432-4c74-933e-e9d5428e6162_del complete
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.103 281049 INFO nova.compute.manager [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Took 0.93 seconds to destroy the instance on the hypervisor.
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.103 281049 DEBUG oslo.service.loopingcall [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.103 281049 DEBUG nova.compute.manager [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.104 281049 DEBUG nova.network.neutron [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 02 10:03:51 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:03:51.172 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.268 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.269 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.270 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.270 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.270 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.271 281049 WARNING nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received unexpected event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with vm_state active and task_state deleting.
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.271 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.272 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.272 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.273 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.273 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.274 281049 WARNING nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received unexpected event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with vm_state active and task_state deleting.
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.274 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-unplugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.275 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.275 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.275 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.276 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-unplugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.276 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-unplugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.277 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.277 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.278 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.278 281049 DEBUG oslo_concurrency.lockutils [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.278 281049 DEBUG nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] No waiting events found dispatching network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.279 281049 WARNING nova.compute.manager [req-1f4a7214-9c71-498d-b37c-8bd1c3842d5c req-0dc829cf-c501-448d-ab86-803a2092c51b dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Received unexpected event network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004 for instance with vm_state active and task_state deleting.
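Each network-vif-plugged / network-vif-unplugged event above goes through the same Acquiring/acquired/released triplet on the instance's "-events" lock before nova decides nobody is waiting for it. Those triplets are emitted by oslo_concurrency; a minimal sketch of the locking pattern, with an illustrative lock name and waiter dictionary rather than nova's actual structures:

```python
# Sketch of the oslo_concurrency.lockutils pattern behind the
# Acquiring/acquired/released triplets above: a short critical section guarded
# by a named lock; the debug lines in the journal come from lockutils itself.
from oslo_concurrency import lockutils

def pop_instance_event(instance_uuid, event_name, waiting_events):
    # The "<uuid>-events" lock name mirrors the log; the dict is illustrative.
    @lockutils.synchronized('%s-events' % instance_uuid)
    def _pop_event():
        return waiting_events.pop(event_name, None)
    return _pop_event()

waiting = {}   # no one registered a waiter for this event
ev = pop_instance_event(
    '63092ab0-9432-4c74-933e-e9d5428e6162',
    'network-vif-plugged-31de197b-ef56-4d2a-9fa2-293715a60004',
    waiting)
assert ev is None   # which nova reports as "No waiting events found dispatching"
```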
Dec 02 10:03:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:51.382 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:51 np0005541914.localdomain ceph-mon[301710]: osdmap e102: 6 total, 6 up, 6 in
Dec 02 10:03:51 np0005541914.localdomain ceph-mon[301710]: pgmap v126: 177 pgs: 177 active+clean; 304 MiB data, 1016 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 20 KiB/s wr, 152 op/s
Dec 02 10:03:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:03:52 np0005541914.localdomain systemd[1]: tmp-crun.RfhleY.mount: Deactivated successfully.
Dec 02 10:03:52 np0005541914.localdomain podman[309109]: 2025-12-02 10:03:52.076293085 +0000 UTC m=+0.085585054 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Dec 02 10:03:52 np0005541914.localdomain podman[309109]: 2025-12-02 10:03:52.179533769 +0000 UTC m=+0.188825718 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:03:52 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:03:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:03:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v127: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 192 op/s
Dec 02 10:03:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:53.552 281049 DEBUG nova.network.neutron [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:03:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:53.571 281049 INFO nova.compute.manager [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Took 2.47 seconds to deallocate network for instance.
Dec 02 10:03:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:53.625 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:03:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:53.625 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:03:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:53.627 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:03:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:53.672 281049 INFO nova.scheduler.client.report [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Deleted allocations for instance 63092ab0-9432-4c74-933e-e9d5428e6162
Dec 02 10:03:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:53.763 281049 DEBUG oslo_concurrency.lockutils [None req-bd9f9399-8ee7-41b1-9196-1aff6d19bc34 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Lock "63092ab0-9432-4c74-933e-e9d5428e6162" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
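The "Deleted allocations for instance 63092ab0-9432-4c74-933e-e9d5428e6162" line is the scheduler report client clearing the instance's resource claims in the Placement service, after which the per-instance lock is released and do_terminate_instance finishes. At the HTTP level this boils down to one authenticated DELETE against the consumer's allocations; the endpoint URL, token handling and microversion below are assumptions made for the sketch:

```python
# Hedged sketch of what clearing an instance's placement allocations looks like
# over HTTP. The placement URL, token and microversion are assumed values.
import requests

PLACEMENT = "http://placement.example:8778"   # assumed endpoint
TOKEN = "<keystone-token>"                    # assumed Keystone token

def delete_allocations(consumer_uuid):
    resp = requests.delete(
        "%s/allocations/%s" % (PLACEMENT, consumer_uuid),
        headers={"X-Auth-Token": TOKEN,
                 "OpenStack-API-Version": "placement 1.28"})
    # 204: allocations removed; 404: nothing was allocated for this consumer.
    if resp.status_code not in (204, 404):
        resp.raise_for_status()

delete_allocations("63092ab0-9432-4c74-933e-e9d5428e6162")
```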
Dec 02 10:03:54 np0005541914.localdomain ceph-mon[301710]: pgmap v127: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 22 KiB/s wr, 192 op/s
Dec 02 10:03:54 np0005541914.localdomain systemd[1]: Stopping User Manager for UID 42436...
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Activating special unit Exit the Session...
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Stopped target Main User Target.
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Stopped target Basic System.
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Stopped target Paths.
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Stopped target Sockets.
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Stopped target Timers.
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Closed D-Bus User Message Bus Socket.
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Stopped Create User's Volatile Files and Directories.
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Removed slice User Application Slice.
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Reached target Shutdown.
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Finished Exit the Session.
Dec 02 10:03:54 np0005541914.localdomain systemd[308350]: Reached target Exit the Session.
Dec 02 10:03:54 np0005541914.localdomain systemd[1]: user@42436.service: Deactivated successfully.
Dec 02 10:03:54 np0005541914.localdomain systemd[1]: Stopped User Manager for UID 42436.
Dec 02 10:03:54 np0005541914.localdomain systemd[1]: Stopping User Runtime Directory /run/user/42436...
Dec 02 10:03:54 np0005541914.localdomain systemd[1]: run-user-42436.mount: Deactivated successfully.
Dec 02 10:03:54 np0005541914.localdomain systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Dec 02 10:03:54 np0005541914.localdomain systemd[1]: Stopped User Runtime Directory /run/user/42436.
Dec 02 10:03:54 np0005541914.localdomain systemd[1]: Removed slice User Slice of UID 42436.
Dec 02 10:03:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v128: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 155 op/s
Dec 02 10:03:55 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2587233418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:55.440 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:55 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:03:55.498 2 INFO neutron.agent.securitygroups_rpc [None req-e52c4e8f-c1be-4de8-b00c-43719449fd5b 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Security group member updated ['5c93e274-85ac-42d3-b949-bdb62e6b8c39']
Dec 02 10:03:56 np0005541914.localdomain ceph-mon[301710]: pgmap v128: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 155 op/s
Dec 02 10:03:56 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2649201936' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:03:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:56.384 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:56.538 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:03:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:56.594 281049 DEBUG nova.virt.libvirt.driver [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Dec 02 10:03:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v129: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 155 op/s
Dec 02 10:03:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:03:57 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 5 addresses
Dec 02 10:03:57 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:03:57 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:03:57 np0005541914.localdomain podman[309145]: 2025-12-02 10:03:57.750064297 +0000 UTC m=+0.056537549 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
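The dnsmasq re-reads of addn_hosts/host/opts and the podman "container kill" event belong together: after rewriting the files under /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04, the DHCP agent signals the containerized dnsmasq so it reloads them. dnsmasq is documented to re-read these files on SIGHUP; whether the agent delivers exactly this signal in exactly this way is an assumption:

```python
# Hedged sketch: reload a containerized dnsmasq by delivering SIGHUP through
# podman. The container name is the one from the log; the choice of signal and
# invocation is an assumption about what produced the "container kill" event.
import subprocess

subprocess.run(
    ['podman', 'kill', '--signal', 'HUP',
     'neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04'],
    check=True)
```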
Dec 02 10:03:58 np0005541914.localdomain ceph-mon[301710]: pgmap v129: 177 pgs: 177 active+clean; 225 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 155 op/s
Dec 02 10:03:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:58.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:03:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:58.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:03:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v130: 177 pgs: 177 active+clean; 226 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 659 KiB/s rd, 37 KiB/s wr, 89 op/s
Dec 02 10:03:58 np0005541914.localdomain systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec 02 10:03:58 np0005541914.localdomain systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 13.130s CPU time.
Dec 02 10:03:58 np0005541914.localdomain systemd-machined[202765]: Machine qemu-3-instance-00000006 terminated.
Dec 02 10:03:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:59.263 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:59.610 281049 INFO nova.virt.libvirt.driver [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance shutdown successfully after 13 seconds.
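"Instance in state 1 after 10 seconds - resending shutdown" followed by "Instance shutdown successfully after 13 seconds" is the libvirt driver's cooperative shutdown loop: poll the domain state and periodically repeat the ACPI shutdown request until the guest powers off or a timeout is reached. A minimal sketch of such a loop with the libvirt Python bindings; the domain name, timeout and retry interval are assumptions:

```python
# Sketch of a clean-shutdown loop like the one behind the two driver messages
# above. Not nova's code; timeouts, intervals and the domain name are assumed.
import time
import libvirt

def clean_shutdown(conn, name, timeout=60, retry_interval=10):
    dom = conn.lookupByName(name)
    dom.shutdown()                                   # ask the guest to power off
    for waited in range(1, timeout + 1):
        time.sleep(1)
        state = dom.info()[0]                        # 1 = RUNNING, 5 = SHUTOFF
        if state == libvirt.VIR_DOMAIN_SHUTOFF:
            return True                              # "shutdown successfully after N seconds"
        if waited % retry_interval == 0:
            dom.shutdown()                           # "... - resending shutdown"
    return False

conn = libvirt.open('qemu:///system')
clean_shutdown(conn, 'instance-00000007')            # hypothetical domain name
```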
Dec 02 10:03:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:59.617 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance destroyed successfully.
Dec 02 10:03:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:59.617 281049 DEBUG nova.objects.instance [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lazy-loading 'numa_topology' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:03:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:59.700 281049 INFO nova.virt.libvirt.driver [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Beginning cold snapshot process
Dec 02 10:03:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:59.715 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:03:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:59.879 281049 DEBUG nova.virt.libvirt.imagebackend [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] No parent info for d85e840d-fa56-497b-b5bd-b49584d3e97a; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 02 10:03:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:03:59.921 281049 DEBUG nova.storage.rbd_utils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] creating snapshot(564b67ddbec84407a44d7f8337429375) on rbd image(268e09a3-7abe-4037-a14a-068e7b8a78fb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 02 10:04:00 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e103 e103: 6 total, 6 up, 6 in
Dec 02 10:04:00 np0005541914.localdomain ceph-mon[301710]: pgmap v130: 177 pgs: 177 active+clean; 226 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 659 KiB/s rd, 37 KiB/s wr, 89 op/s
Dec 02 10:04:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:00.284 281049 DEBUG nova.storage.rbd_utils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] cloning vms/268e09a3-7abe-4037-a14a-068e7b8a78fb_disk@564b67ddbec84407a44d7f8337429375 to images/0e87d55f-56a4-4da8-9198-c633785685ee clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 02 10:04:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:00.443 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:00.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:04:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:00.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:04:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:00.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:04:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:00.707 281049 DEBUG nova.storage.rbd_utils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] flattening images/0e87d55f-56a4-4da8-9198-c633785685ee flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 02 10:04:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v132: 177 pgs: 177 active+clean; 226 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 659 KiB/s rd, 37 KiB/s wr, 89 op/s
Dec 02 10:04:01 np0005541914.localdomain ceph-mon[301710]: osdmap e103: 6 total, 6 up, 6 in
Dec 02 10:04:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:01.386 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:01.451 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:04:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:01.451 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquired lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:04:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:01.451 281049 DEBUG nova.network.neutron [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 02 10:04:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:01.451 281049 DEBUG nova.objects.instance [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:04:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:01.572 281049 DEBUG nova.network.neutron [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 02 10:04:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:01.675 281049 DEBUG nova.storage.rbd_utils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] removing snapshot(564b67ddbec84407a44d7f8337429375) on rbd image(268e09a3-7abe-4037-a14a-068e7b8a78fb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 02 10:04:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:01.809 281049 DEBUG nova.network.neutron [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:04:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:01.828 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Releasing lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:04:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:01.829 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 02 10:04:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:01.830 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:04:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:01.830 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:04:01 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:01.896 2 INFO neutron.agent.securitygroups_rpc [None req-d7c6b922-a31a-45e0-b3f4-c5bd99f50015 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Security group member updated ['576d6513-029b-4880-bb0b-58094b586b90']
Dec 02 10:04:02 np0005541914.localdomain ceph-mon[301710]: pgmap v132: 177 pgs: 177 active+clean; 226 MiB data, 869 MiB used, 41 GiB / 42 GiB avail; 659 KiB/s rd, 37 KiB/s wr, 89 op/s
Dec 02 10:04:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e104 e104: 6 total, 6 up, 6 in
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.298 281049 DEBUG nova.storage.rbd_utils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] creating snapshot(snap) on rbd image(0e87d55f-56a4-4da8-9198-c633785685ee) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
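The rbd_utils calls logged between 10:03:59 and 10:04:02 spell out the cold snapshot at the RBD layer: snapshot the instance disk in the vms pool, clone that snapshot into the images pool, flatten the clone so it no longer depends on its parent, drop the temporary snapshot, and finally snapshot the uploaded image itself. A hedged sketch of the same sequence with the rados/rbd Python bindings; the protect/unprotect steps are an assumption about what cloning requires on this cluster, and error handling is omitted:

```python
# Sketch of the RBD-level cold snapshot flow from the log; image, snapshot and
# pool names are the ones logged above, everything else is illustrative.
import rados
import rbd

SRC = '268e09a3-7abe-4037-a14a-068e7b8a78fb_disk'       # instance disk in pool "vms"
TMP_SNAP = '564b67ddbec84407a44d7f8337429375'           # temporary snapshot
DST = '0e87d55f-56a4-4da8-9198-c633785685ee'            # new image in pool "images"

with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
    vms = cluster.open_ioctx('vms')
    images = cluster.open_ioctx('images')

    with rbd.Image(vms, SRC) as src:
        src.create_snap(TMP_SNAP)                       # "creating snapshot(...) on rbd image"
        src.protect_snap(TMP_SNAP)                      # assumed prerequisite for cloning
    rbd.RBD().clone(vms, SRC, TMP_SNAP, images, DST)    # "cloning vms/... to images/..."
    with rbd.Image(images, DST) as dst:
        dst.flatten()                                   # "flattening images/..."
    with rbd.Image(vms, SRC) as src:
        src.unprotect_snap(TMP_SNAP)
        src.remove_snap(TMP_SNAP)                       # "removing snapshot(...)"
    with rbd.Image(images, DST) as dst:
        dst.create_snap('snap')                         # final "creating snapshot(snap)"
```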
Dec 02 10:04:02 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:02.301 262347 INFO neutron.agent.linux.ip_lib [None req-42b907e5-4841-49aa-ab03-f3e6a1a35935 - - - - - -] Device tap7466a138-c4 cannot be used as it has no MAC address
Dec 02 10:04:02 np0005541914.localdomain kernel: device tap7466a138-c4 entered promiscuous mode
Dec 02 10:04:02 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669842.3671] manager: (tap7466a138-c4): new Generic device (/org/freedesktop/NetworkManager/Devices/19)
Dec 02 10:04:02 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:02Z|00069|binding|INFO|Claiming lport 7466a138-c45f-458b-a865-8c5d3b978b39 for this chassis.
Dec 02 10:04:02 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:02Z|00070|binding|INFO|7466a138-c45f-458b-a865-8c5d3b978b39: Claiming unknown
Dec 02 10:04:02 np0005541914.localdomain systemd-udevd[309319]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:04:02 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:02.393 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-e59f1a37-9713-45f0-9ce4-adafcc25b854', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e59f1a37-9713-45f0-9ce4-adafcc25b854', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1db4f455ea047e3b37458f6d2c5e699', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=507aa10c-3500-464e-ac80-7fecb3c41257, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=7466a138-c45f-458b-a865-8c5d3b978b39) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:04:02 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:02.395 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 7466a138-c45f-458b-a865-8c5d3b978b39 in datapath e59f1a37-9713-45f0-9ce4-adafcc25b854 bound to our chassis
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.397 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:02 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:02.398 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port 854baa2d-45b1-472d-99b3-9d9f1dbe8c4b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 02 10:04:02 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:02.398 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e59f1a37-9713-45f0-9ce4-adafcc25b854, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:04:02 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:02.399 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1794f115-8951-4c44-84ef-31a973aa6359]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
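The metadata agent's decision above (port bound to our chassis, but no usable VIF addresses, so tear the namespace down if needed) is driven by ovsdbapp's event machinery: a PortBindingUpdatedEvent matches Port_Binding UPDATEs whose chassis column just changed and hands the row to the agent. A rough sketch of such an event class; the match condition and handler body are illustrative, not the agent's exact code:

```python
# Rough sketch of an ovsdbapp RowEvent like the PortBindingUpdatedEvent matched
# above; it fires on Port_Binding UPDATEs where the chassis column changed.
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    def __init__(self):
        # Mirrors the repr in the log: events=('update',), table='Port_Binding',
        # conditions=None.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
        self.event_name = 'PortBindingUpdatedEvent'

    def match_fn(self, event, row, old):
        # Only interesting when the chassis column changed and is now set.
        return hasattr(old, 'chassis') and row.chassis

    def run(self, event, row, old):
        print('Port %s bound to our chassis' % row.logical_port)
```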
Dec 02 10:04:02 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap7466a138-c4: No such device
Dec 02 10:04:02 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap7466a138-c4: No such device
Dec 02 10:04:02 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:02Z|00071|binding|INFO|Setting lport 7466a138-c45f-458b-a865-8c5d3b978b39 ovn-installed in OVS
Dec 02 10:04:02 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:02Z|00072|binding|INFO|Setting lport 7466a138-c45f-458b-a865-8c5d3b978b39 up in Southbound
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.407 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:02 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap7466a138-c4: No such device
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.411 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:04:02 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap7466a138-c4: No such device
Dec 02 10:04:02 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap7466a138-c4: No such device
Dec 02 10:04:02 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap7466a138-c4: No such device
Dec 02 10:04:02 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap7466a138-c4: No such device
Dec 02 10:04:02 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap7466a138-c4: No such device
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.438 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.459 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.554 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.554 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.579 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.580 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.580 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.581 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.581 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:04:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v134: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 6.6 MiB/s rd, 5.9 MiB/s wr, 181 op/s
Dec 02 10:04:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:04:02 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2142671951' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:02.994 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
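update_available_resource probes Ceph capacity by shelling out to ceph df in JSON form through oslo_concurrency.processutils, which is what produces the "Running cmd (subprocess)" and "returned: 0 in 0.413s" pair above. A small sketch of that call and of pulling one pool's usage out of the result; the exact JSON field names differ between Ceph releases, so treat them as assumptions:

```python
# Sketch of the Ceph capacity probe the resource tracker just ran: invoke
# "ceph df --format=json" via oslo_concurrency.processutils and read one pool's
# usage from the parsed JSON. The 'bytes_used' field name is an assumption.
import json
from oslo_concurrency import processutils

out, _err = processutils.execute(
    'ceph', 'df', '--format=json',
    '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
df = json.loads(out)
vms = next(p for p in df['pools'] if p['name'] == 'vms')
print('vms pool usage: %d bytes' % vms['stats']['bytes_used'])
```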
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.039 281049 DEBUG nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.039 281049 DEBUG nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 02 10:04:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:03.175 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:03.176 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:03.176 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.199 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.200 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11595MB free_disk=41.70033645629883GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.201 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.201 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:03 np0005541914.localdomain ceph-mon[301710]: osdmap e104: 6 total, 6 up, 6 in
Dec 02 10:04:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2479108286' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2142671951' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1023460823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e105 e105: 6 total, 6 up, 6 in
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.309 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Instance 268e09a3-7abe-4037-a14a-068e7b8a78fb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.309 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.310 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.358 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:04:03 np0005541914.localdomain podman[309415]: 2025-12-02 10:04:03.359278204 +0000 UTC m=+0.046522761 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:04:03 np0005541914.localdomain podman[309415]: 
Dec 02 10:04:03 np0005541914.localdomain podman[309415]: 2025-12-02 10:04:03.492392028 +0000 UTC m=+0.179636525 container create eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 02 10:04:03 np0005541914.localdomain systemd[1]: Started libpod-conmon-eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461.scope.
Dec 02 10:04:03 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:04:03 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0d093117dd79caf19187febcd7ccef397c254025e14d4a134627538b5ac62e5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:04:03 np0005541914.localdomain podman[309415]: 2025-12-02 10:04:03.573163622 +0000 UTC m=+0.260408129 container init eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:04:03 np0005541914.localdomain podman[309415]: 2025-12-02 10:04:03.581186258 +0000 UTC m=+0.268430765 container start eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 02 10:04:03 np0005541914.localdomain dnsmasq[309453]: started, version 2.85 cachesize 150
Dec 02 10:04:03 np0005541914.localdomain dnsmasq[309453]: DNS service limited to local subnets
Dec 02 10:04:03 np0005541914.localdomain dnsmasq[309453]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:04:03 np0005541914.localdomain dnsmasq[309453]: warning: no upstream servers configured
Dec 02 10:04:03 np0005541914.localdomain dnsmasq-dhcp[309453]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 02 10:04:03 np0005541914.localdomain dnsmasq[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/addn_hosts - 0 addresses
Dec 02 10:04:03 np0005541914.localdomain dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/host
Dec 02 10:04:03 np0005541914.localdomain dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/opts
Dec 02 10:04:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:04:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:04:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:04:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158570 "" "Go-http-client/1.1"
Dec 02 10:04:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:04:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19678 "" "Go-http-client/1.1"
Dec 02 10:04:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:03.813 262347 INFO neutron.agent.dhcp.agent [None req-e3374617-6fae-4cf4-83f9-f5993e6fd367 - - - - - -] DHCP configuration for ports {'09a8ae64-d204-4cfc-97ef-a2500c78fa1a'} is completed
Dec 02 10:04:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:04:03 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/682142553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.879 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.886 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.908 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.933 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:04:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:03.934 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:03.942 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:03Z, description=, device_id=c9ef5a4e-598e-409c-9cfd-7339d9fd74ab, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c3a4c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c3aca0>], id=9f97a067-96bf-4249-b193-b9cbcf841c2f, ip_allocation=immediate, mac_address=fa:16:3e:c2:f0:84, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=548, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:03Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:04:04 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:04.111 2 INFO neutron.agent.securitygroups_rpc [None req-477510e9-c030-4124-bb5e-ce2ad555248a 60f523e6d03743daa3ff6f5bc7122d00 cccbafb2e3c343b2aab51714734bddce - - default default] Security group member updated ['5c93e274-85ac-42d3-b949-bdb62e6b8c39']
Dec 02 10:04:04 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses
Dec 02 10:04:04 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:04 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:04:04 np0005541914.localdomain podman[309471]: 2025-12-02 10:04:04.161380761 +0000 UTC m=+0.062006498 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:04:04 np0005541914.localdomain ceph-mon[301710]: pgmap v134: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 6.6 MiB/s rd, 5.9 MiB/s wr, 181 op/s
Dec 02 10:04:04 np0005541914.localdomain ceph-mon[301710]: osdmap e105: 6 total, 6 up, 6 in
Dec 02 10:04:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/682142553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.322 281049 INFO nova.virt.libvirt.driver [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Snapshot image upload complete
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.322 281049 DEBUG nova.compute.manager [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.393 281049 INFO nova.compute.manager [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Shelve offloading
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.403 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance destroyed successfully.
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.403 281049 DEBUG nova.compute.manager [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.408 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.408 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquired lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.409 281049 DEBUG nova.network.neutron [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 02 10:04:04 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:04.435 262347 INFO neutron.agent.dhcp.agent [None req-c5b9814f-9350-44f2-8d86-e75292c7a42b - - - - - -] DHCP configuration for ports {'9f97a067-96bf-4249-b193-b9cbcf841c2f'} is completed
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.477 281049 DEBUG nova.network.neutron [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.711 281049 DEBUG nova.network.neutron [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.738 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Releasing lock "refresh_cache-268e09a3-7abe-4037-a14a-068e7b8a78fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.748 281049 INFO nova.virt.libvirt.driver [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Instance destroyed successfully.
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.749 281049 DEBUG nova.objects.instance [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lazy-loading 'resources' on Instance uuid 268e09a3-7abe-4037-a14a-068e7b8a78fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.908 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:04:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:04.908 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:04:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v136: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 146 op/s
Dec 02 10:04:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1748178721' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:04:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1748178721' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:04:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:05.405 281049 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764669830.4039314, 63092ab0-9432-4c74-933e-e9d5428e6162 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:04:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:05.406 281049 INFO nova.compute.manager [-] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] VM Stopped (Lifecycle Event)
Dec 02 10:04:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:05.441 281049 DEBUG nova.compute.manager [None req-9e4e8d66-15e5-44dd-a723-2513e261e1b8 - - - - - -] [instance: 63092ab0-9432-4c74-933e-e9d5428e6162] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:04:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:05.447 281049 INFO nova.virt.libvirt.driver [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deleting instance files /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb_del
Dec 02 10:04:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:05.447 281049 INFO nova.virt.libvirt.driver [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Deletion of /var/lib/nova/instances/268e09a3-7abe-4037-a14a-068e7b8a78fb_del complete
Dec 02 10:04:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:05.450 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:05.544 281049 INFO nova.scheduler.client.report [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Deleted allocations for instance 268e09a3-7abe-4037-a14a-068e7b8a78fb
Dec 02 10:04:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:05.583 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:05.584 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:05.608 281049 DEBUG oslo_concurrency.processutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:04:05 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:05.621 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:05Z, description=, device_id=3138b719-9e61-40de-8133-b237ae970e06, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c570d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ce47c0>], id=3420ea4d-964c-418c-b3a2-2a8fc6d1f09f, ip_allocation=immediate, mac_address=fa:16:3e:2e:dd:70, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=560, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:05Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:04:05 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e106 e106: 6 total, 6 up, 6 in
Dec 02 10:04:05 np0005541914.localdomain systemd[1]: tmp-crun.pDid0f.mount: Deactivated successfully.
Dec 02 10:04:05 np0005541914.localdomain podman[309549]: 2025-12-02 10:04:05.842982674 +0000 UTC m=+0.053585068 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:04:05 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses
Dec 02 10:04:05 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:05 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:04:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:04:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4051220607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:06.086 281049 DEBUG oslo_concurrency.processutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:04:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:06.092 281049 DEBUG nova.compute.provider_tree [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:04:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:06.125 281049 DEBUG nova.scheduler.client.report [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:04:06 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:06.127 262347 INFO neutron.agent.dhcp.agent [None req-a3156c44-5873-487a-a537-03a254fd7ee1 - - - - - -] DHCP configuration for ports {'3420ea4d-964c-418c-b3a2-2a8fc6d1f09f'} is completed
Dec 02 10:04:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:06.159 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:06.164 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:06.233 281049 DEBUG oslo_concurrency.lockutils [None req-8b84d287-4811-45fb-97d4-bb6d8ef1eeb7 96d084f3c3184bf4ac7b9635139dd4aa 09cae3217c5e430b8dbe17828669a978 - - default default] Lock "268e09a3-7abe-4037-a14a-068e7b8a78fb" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 19.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:06 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:06.358 262347 INFO neutron.agent.linux.ip_lib [None req-d9f24fe9-79c3-4a66-9e57-9eb192b8d7a2 - - - - - -] Device tapde515592-06 cannot be used as it has no MAC address
Dec 02 10:04:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:06.378 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:06 np0005541914.localdomain kernel: device tapde515592-06 entered promiscuous mode
Dec 02 10:04:06 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669846.3857] manager: (tapde515592-06): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec 02 10:04:06 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:06Z|00073|binding|INFO|Claiming lport de515592-061d-469f-83fb-52a8d86b335c for this chassis.
Dec 02 10:04:06 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:06Z|00074|binding|INFO|de515592-061d-469f-83fb-52a8d86b335c: Claiming unknown
Dec 02 10:04:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:06.386 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:06 np0005541914.localdomain systemd-udevd[309582]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:04:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapde515592-06: No such device
Dec 02 10:04:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:06.418 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:06 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:06Z|00075|binding|INFO|Setting lport de515592-061d-469f-83fb-52a8d86b335c ovn-installed in OVS
Dec 02 10:04:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapde515592-06: No such device
Dec 02 10:04:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapde515592-06: No such device
Dec 02 10:04:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapde515592-06: No such device
Dec 02 10:04:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapde515592-06: No such device
Dec 02 10:04:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapde515592-06: No such device
Dec 02 10:04:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapde515592-06: No such device
Dec 02 10:04:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapde515592-06: No such device
Dec 02 10:04:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:06.460 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:06.494 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:06 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:06Z|00076|binding|INFO|Setting lport de515592-061d-469f-83fb-52a8d86b335c up in Southbound
Dec 02 10:04:06 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:06.524 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=de515592-061d-469f-83fb-52a8d86b335c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:04:06 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:06.526 159483 INFO neutron.agent.ovn.metadata.agent [-] Port de515592-061d-469f-83fb-52a8d86b335c in datapath c40d86e4-7101-443b-abce-328f7d1ea40e bound to our chassis
Dec 02 10:04:06 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:06.528 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c40d86e4-7101-443b-abce-328f7d1ea40e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:04:06 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:06.530 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb458ec-3aeb-4bf1-a864-555e021b6fb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:06 np0005541914.localdomain ceph-mon[301710]: pgmap v136: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 146 op/s
Dec 02 10:04:06 np0005541914.localdomain ceph-mon[301710]: osdmap e106: 6 total, 6 up, 6 in
Dec 02 10:04:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/4051220607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:04:06
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'manila_data', '.mgr', 'images', 'manila_metadata']
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:04:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:06.873 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v138: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 146 op/s
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006580482708682301 of space, bias 1.0, pg target 1.31609654173646 quantized to 32 (current 32)
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.007545049466080402 of space, bias 1.0, pg target 1.50146484375 quantized to 32 (current 32)
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:04:06 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019400387353433835 quantized to 16 (current 16)
Dec 02 10:04:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:04:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:04:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:04:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:04:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:04:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:04:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:04:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:04:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:04:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:04:07 np0005541914.localdomain podman[309653]: 
Dec 02 10:04:07 np0005541914.localdomain podman[309653]: 2025-12-02 10:04:07.37541726 +0000 UTC m=+0.089644817 container create 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 02 10:04:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:04:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:04:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:04:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:04:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:04:07 np0005541914.localdomain podman[309653]: 2025-12-02 10:04:07.332267463 +0000 UTC m=+0.046495050 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:04:07 np0005541914.localdomain systemd[1]: Started libpod-conmon-8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464.scope.
Dec 02 10:04:07 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:04:07 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffda5ed93ce183a48e515ae2d9c5dd554b9888bdc7086e24a8673d10ea36adfd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:04:07 np0005541914.localdomain podman[309653]: 2025-12-02 10:04:07.47197901 +0000 UTC m=+0.186206567 container init 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:04:07 np0005541914.localdomain dnsmasq[309707]: started, version 2.85 cachesize 150
Dec 02 10:04:07 np0005541914.localdomain dnsmasq[309707]: DNS service limited to local subnets
Dec 02 10:04:07 np0005541914.localdomain dnsmasq[309707]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:04:07 np0005541914.localdomain dnsmasq[309707]: warning: no upstream servers configured
Dec 02 10:04:07 np0005541914.localdomain dnsmasq-dhcp[309707]: DHCP, static leases only on 19.80.0.0, lease time 1d
Dec 02 10:04:07 np0005541914.localdomain dnsmasq[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/addn_hosts - 0 addresses
Dec 02 10:04:07 np0005541914.localdomain dnsmasq-dhcp[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/host
Dec 02 10:04:07 np0005541914.localdomain dnsmasq-dhcp[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/opts
Dec 02 10:04:07 np0005541914.localdomain podman[309667]: 2025-12-02 10:04:07.518694326 +0000 UTC m=+0.095579030 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 02 10:04:07 np0005541914.localdomain podman[309653]: 2025-12-02 10:04:07.531214751 +0000 UTC m=+0.245442268 container start 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 02 10:04:07 np0005541914.localdomain podman[309669]: 2025-12-02 10:04:07.577796204 +0000 UTC m=+0.138895862 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:04:07 np0005541914.localdomain podman[309669]: 2025-12-02 10:04:07.590217116 +0000 UTC m=+0.151316734 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:04:07 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:04:07 np0005541914.localdomain podman[309667]: 2025-12-02 10:04:07.6023697 +0000 UTC m=+0.179254464 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 10:04:07 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:04:07 np0005541914.localdomain ceph-mon[301710]: pgmap v138: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 146 op/s
Dec 02 10:04:07 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:07.642 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:07Z, description=, device_id=c9ef5a4e-598e-409c-9cfd-7339d9fd74ab, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c96b50>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c96a00>], id=995d0e5e-bc64-4b9b-ab8f-120b9c16f0c1, ip_allocation=immediate, mac_address=fa:16:3e:34:09:6c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:54Z, description=, dns_domain=, id=e59f1a37-9713-45f0-9ce4-adafcc25b854, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-774700713-network, port_security_enabled=True, project_id=b1db4f455ea047e3b37458f6d2c5e699, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=45410, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=521, status=ACTIVE, subnets=['63a6053d-0067-412b-8c97-ca7de4cc1f0d'], tags=[], tenant_id=b1db4f455ea047e3b37458f6d2c5e699, updated_at=2025-12-02T10:03:59Z, vlan_transparent=None, network_id=e59f1a37-9713-45f0-9ce4-adafcc25b854, port_security_enabled=False, project_id=b1db4f455ea047e3b37458f6d2c5e699, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=570, status=DOWN, tags=[], tenant_id=b1db4f455ea047e3b37458f6d2c5e699, updated_at=2025-12-02T10:04:07Z on network e59f1a37-9713-45f0-9ce4-adafcc25b854
Dec 02 10:04:07 np0005541914.localdomain podman[309675]: 2025-12-02 10:04:07.690980855 +0000 UTC m=+0.252758525 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 02 10:04:07 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:07.701 262347 INFO neutron.agent.dhcp.agent [None req-ccdf9dc0-3f52-470a-a720-dd15054f0053 - - - - - -] DHCP configuration for ports {'60398627-924e-4353-b9ee-b86c24b6fc87'} is completed
Dec 02 10:04:07 np0005541914.localdomain podman[309668]: 2025-12-02 10:04:07.765725323 +0000 UTC m=+0.336127798 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:04:07 np0005541914.localdomain podman[309675]: 2025-12-02 10:04:07.773890494 +0000 UTC m=+0.335668174 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 02 10:04:07 np0005541914.localdomain podman[309668]: 2025-12-02 10:04:07.782782277 +0000 UTC m=+0.353184692 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:04:07 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:04:07 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:04:07 np0005541914.localdomain dnsmasq[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/addn_hosts - 1 addresses
Dec 02 10:04:07 np0005541914.localdomain dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/host
Dec 02 10:04:07 np0005541914.localdomain podman[309768]: 2025-12-02 10:04:07.90836857 +0000 UTC m=+0.064586978 container kill eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:04:07 np0005541914.localdomain dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/opts
Dec 02 10:04:08 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:08.169 262347 INFO neutron.agent.dhcp.agent [None req-436baad9-cd96-470f-9d0f-2b02d58116f8 - - - - - -] DHCP configuration for ports {'995d0e5e-bc64-4b9b-ab8f-120b9c16f0c1'} is completed
Dec 02 10:04:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/476037169' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v139: 177 pgs: 177 active+clean; 226 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 6.9 MiB/s wr, 204 op/s
Dec 02 10:04:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:09.324 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:07Z, description=, device_id=c9ef5a4e-598e-409c-9cfd-7339d9fd74ab, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c37ee0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c371f0>], id=995d0e5e-bc64-4b9b-ab8f-120b9c16f0c1, ip_allocation=immediate, mac_address=fa:16:3e:34:09:6c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:03:54Z, description=, dns_domain=, id=e59f1a37-9713-45f0-9ce4-adafcc25b854, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupRulesNegativeTestJSON-774700713-network, port_security_enabled=True, project_id=b1db4f455ea047e3b37458f6d2c5e699, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=45410, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=521, status=ACTIVE, subnets=['63a6053d-0067-412b-8c97-ca7de4cc1f0d'], tags=[], tenant_id=b1db4f455ea047e3b37458f6d2c5e699, updated_at=2025-12-02T10:03:59Z, vlan_transparent=None, network_id=e59f1a37-9713-45f0-9ce4-adafcc25b854, port_security_enabled=False, project_id=b1db4f455ea047e3b37458f6d2c5e699, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=570, status=DOWN, tags=[], tenant_id=b1db4f455ea047e3b37458f6d2c5e699, updated_at=2025-12-02T10:04:07Z on network e59f1a37-9713-45f0-9ce4-adafcc25b854
Dec 02 10:04:09 np0005541914.localdomain dnsmasq[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/addn_hosts - 1 addresses
Dec 02 10:04:09 np0005541914.localdomain podman[309806]: 2025-12-02 10:04:09.575860509 +0000 UTC m=+0.061962016 container kill eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:04:09 np0005541914.localdomain dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/host
Dec 02 10:04:09 np0005541914.localdomain dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/opts
Dec 02 10:04:09 np0005541914.localdomain ceph-mon[301710]: pgmap v139: 177 pgs: 177 active+clean; 226 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 6.9 MiB/s wr, 204 op/s
Dec 02 10:04:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:09.893 262347 INFO neutron.agent.dhcp.agent [None req-8ba79ff6-fe6c-4423-94a1-6d28de18d525 - - - - - -] DHCP configuration for ports {'995d0e5e-bc64-4b9b-ab8f-120b9c16f0c1'} is completed
Dec 02 10:04:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:10.453 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:10 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:10.811 2 INFO neutron.agent.securitygroups_rpc [None req-4cc1fa1d-9a41-40fb-9e7e-ba331f6b18b7 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Security group member updated ['576d6513-029b-4880-bb0b-58094b586b90']
Dec 02 10:04:10 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:10.896 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:09Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cccdc0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ccc5b0>], id=ffcaba02-6808-4409-8458-941ca0af2e66, ip_allocation=immediate, mac_address=fa:16:3e:a7:75:fd, name=tempest-subport-1664568330, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:04:02Z, description=, dns_domain=, id=c40d86e4-7101-443b-abce-328f7d1ea40e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-subport_net-1016568838, port_security_enabled=True, project_id=d048f19ff5fc47dc88162ef5f9cebe8b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=31453, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=546, status=ACTIVE, subnets=['77a7f10b-646e-4333-96b4-7957dbd5d33c'], tags=[], tenant_id=d048f19ff5fc47dc88162ef5f9cebe8b, updated_at=2025-12-02T10:04:05Z, vlan_transparent=None, network_id=c40d86e4-7101-443b-abce-328f7d1ea40e, port_security_enabled=True, project_id=d048f19ff5fc47dc88162ef5f9cebe8b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['576d6513-029b-4880-bb0b-58094b586b90'], standard_attr_id=584, status=DOWN, tags=[], tenant_id=d048f19ff5fc47dc88162ef5f9cebe8b, updated_at=2025-12-02T10:04:10Z on network c40d86e4-7101-443b-abce-328f7d1ea40e
Dec 02 10:04:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v140: 177 pgs: 177 active+clean; 226 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 2.4 KiB/s wr, 61 op/s
Dec 02 10:04:11 np0005541914.localdomain dnsmasq[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/addn_hosts - 1 addresses
Dec 02 10:04:11 np0005541914.localdomain dnsmasq-dhcp[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/host
Dec 02 10:04:11 np0005541914.localdomain podman[309845]: 2025-12-02 10:04:11.112715702 +0000 UTC m=+0.062833663 container kill 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:04:11 np0005541914.localdomain dnsmasq-dhcp[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/opts
Dec 02 10:04:11 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:11.357 262347 INFO neutron.agent.dhcp.agent [None req-00d6282d-693f-4855-b5c3-b8421787d7d3 - - - - - -] DHCP configuration for ports {'ffcaba02-6808-4409-8458-941ca0af2e66'} is completed
Dec 02 10:04:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:11.425 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:11 np0005541914.localdomain ceph-mon[301710]: pgmap v140: 177 pgs: 177 active+clean; 226 MiB data, 876 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 2.4 KiB/s wr, 61 op/s
Dec 02 10:04:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/534215597' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:04:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:12.049 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:04:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:04:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:04:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:04:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:04:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:04:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:04:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:04:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:04:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:04:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:04:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1346093737' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:04:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v141: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 4.9 MiB/s rd, 4.8 MiB/s wr, 152 op/s
Dec 02 10:04:13 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses
Dec 02 10:04:13 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:13 np0005541914.localdomain podman[309884]: 2025-12-02 10:04:13.745535447 +0000 UTC m=+0.046865302 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:04:13 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:04:13 np0005541914.localdomain ceph-mon[301710]: pgmap v141: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 4.9 MiB/s rd, 4.8 MiB/s wr, 152 op/s
Dec 02 10:04:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e107 e107: 6 total, 6 up, 6 in
Dec 02 10:04:13 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:13.904 262347 INFO neutron.agent.dhcp.agent [None req-72a6fe85-35b8-438d-a5cc-737c0f2a4004 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:13Z, description=, device_id=021f5268-3dcb-4f99-bfbf-465820cbeab2, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c4aee0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c4a3a0>], id=8137ce5b-74f9-4747-998d-7d813193bde0, ip_allocation=immediate, mac_address=fa:16:3e:5f:02:b0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=590, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:13Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:04:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:14.159 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:14.162 281049 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764669839.1615121, 268e09a3-7abe-4037-a14a-068e7b8a78fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:04:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:14.163 281049 INFO nova.compute.manager [-] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] VM Stopped (Lifecycle Event)
Dec 02 10:04:14 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses
Dec 02 10:04:14 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:14 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:04:14 np0005541914.localdomain podman[309920]: 2025-12-02 10:04:14.21941362 +0000 UTC m=+0.070553320 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:04:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:14.701 281049 DEBUG nova.compute.manager [None req-fc31f5e1-6ae9-48c9-a927-c618e71720af - - - - - -] [instance: 268e09a3-7abe-4037-a14a-068e7b8a78fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:04:14 np0005541914.localdomain ceph-mon[301710]: osdmap e107: 6 total, 6 up, 6 in
Dec 02 10:04:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v143: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 5.1 MiB/s wr, 160 op/s
Dec 02 10:04:15 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:15.159 262347 INFO neutron.agent.dhcp.agent [None req-11537578-730e-445c-98e7-4f501f9f06bf - - - - - -] DHCP configuration for ports {'8137ce5b-74f9-4747-998d-7d813193bde0'} is completed
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:04:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:04:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:15.456 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:15 np0005541914.localdomain ceph-mon[301710]: pgmap v143: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 5.2 MiB/s rd, 5.1 MiB/s wr, 160 op/s
Dec 02 10:04:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:04:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:04:16 np0005541914.localdomain podman[309942]: 2025-12-02 10:04:16.137340291 +0000 UTC m=+0.138454840 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:04:16 np0005541914.localdomain podman[309942]: 2025-12-02 10:04:16.146654237 +0000 UTC m=+0.147768796 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:04:16 np0005541914.localdomain podman[309943]: 2025-12-02 10:04:16.105091248 +0000 UTC m=+0.105392302 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 10:04:16 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:04:16 np0005541914.localdomain podman[309943]: 2025-12-02 10:04:16.190264997 +0000 UTC m=+0.190566061 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Dec 02 10:04:16 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:04:16 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:16.388 2 INFO neutron.agent.securitygroups_rpc [req-bec2fcab-0b29-48c5-8c73-7c95715690aa req-3ce61e55-77a0-41a7-a01c-658bb353c505 5d2a1dd73fee440789897d09ac4f0afc b1db4f455ea047e3b37458f6d2c5e699 - - default default] Security group rule updated ['df5547d9-a152-449e-8fa5-5094da38cd68']
Dec 02 10:04:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:16.448 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:16.743 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v144: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 147 op/s
Dec 02 10:04:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:04:17 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:17.485 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:16Z, description=, device_id=804ee8e0-ce25-466a-ae8c-2cc899f8a9bc, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ca0e50>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ca0160>], id=f65c1a0f-6b44-40ae-872d-dbeedbe50a5f, ip_allocation=immediate, mac_address=fa:16:3e:fa:3f:67, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=598, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:16Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:04:17 np0005541914.localdomain podman[309999]: 2025-12-02 10:04:17.705996742 +0000 UTC m=+0.058494050 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:04:17 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 8 addresses
Dec 02 10:04:17 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:17 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:04:17 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:17.923 262347 INFO neutron.agent.dhcp.agent [None req-876ef987-9ad9-4826-92aa-06d490871d4a - - - - - -] DHCP configuration for ports {'f65c1a0f-6b44-40ae-872d-dbeedbe50a5f'} is completed
Dec 02 10:04:18 np0005541914.localdomain ceph-mon[301710]: pgmap v144: 177 pgs: 177 active+clean; 307 MiB data, 1008 MiB used, 41 GiB / 42 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 147 op/s
Dec 02 10:04:18 np0005541914.localdomain sudo[310020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:04:18 np0005541914.localdomain sudo[310020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:04:18 np0005541914.localdomain sudo[310020]: pam_unix(sudo:session): session closed for user root
Dec 02 10:04:18 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:18.296 2 INFO neutron.agent.securitygroups_rpc [req-3542c6d6-3e9a-4403-b3b7-62c55b0a2440 req-a1b9621e-b7b6-4f72-a92d-ded5fdb895c8 5d2a1dd73fee440789897d09ac4f0afc b1db4f455ea047e3b37458f6d2c5e699 - - default default] Security group rule updated ['df5547d9-a152-449e-8fa5-5094da38cd68']
Dec 02 10:04:18 np0005541914.localdomain sudo[310038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:04:18 np0005541914.localdomain sudo[310038]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:04:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:18.738 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:18.738 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:18.754 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 02 10:04:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:18.819 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:18.820 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:18.824 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 02 10:04:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:18.824 281049 INFO nova.compute.claims [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Claim successful on node np0005541914.localdomain
Dec 02 10:04:18 np0005541914.localdomain sudo[310038]: pam_unix(sudo:session): session closed for user root
Dec 02 10:04:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:18.947 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:04:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v145: 177 pgs: 177 active+clean; 226 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Dec 02 10:04:19 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:04:19 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:04:19 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:04:19 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:04:19 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:04:19 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev a2e9238b-7b7c-4105-87fe-d88907e6ba50 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:04:19 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev a2e9238b-7b7c-4105-87fe-d88907e6ba50 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:04:19 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event a2e9238b-7b7c-4105-87fe-d88907e6ba50 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:04:19 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:04:19 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:04:19 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:04:19 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/675123350' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.350 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.356 281049 DEBUG nova.compute.provider_tree [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:04:19 np0005541914.localdomain sudo[310109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:04:19 np0005541914.localdomain sudo[310109]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:04:19 np0005541914.localdomain sudo[310109]: pam_unix(sudo:session): session closed for user root
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.382 281049 DEBUG nova.scheduler.client.report [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.406 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.407 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.465 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.465 281049 DEBUG nova.network.neutron [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.484 281049 INFO nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.503 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.657 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.659 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.660 281049 INFO nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Creating image(s)
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.696 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.732 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.764 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.770 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.844 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.845 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.846 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.846 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.873 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.878 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc 82e23ec3-1d57-4166-9ba0-839ded943a78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.977 281049 WARNING oslo_policy.policy [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.978 281049 WARNING oslo_policy.policy [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 02 10:04:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:19.982 281049 DEBUG nova.policy [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec20a6cceee246d6b46878df263d30a4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 02 10:04:20 np0005541914.localdomain ceph-mon[301710]: pgmap v145: 177 pgs: 177 active+clean; 226 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Dec 02 10:04:20 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:04:20 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:04:20 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:04:20 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:04:20 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/675123350' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:20 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2552744948' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:20.458 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:20.572 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc 82e23ec3-1d57-4166-9ba0-839ded943a78_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:04:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:20.661 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] resizing rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 02 10:04:20 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 e108: 6 total, 6 up, 6 in
Dec 02 10:04:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:20.817 281049 DEBUG nova.objects.instance [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lazy-loading 'migration_context' on Instance uuid 82e23ec3-1d57-4166-9ba0-839ded943a78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:04:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:20.833 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 02 10:04:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:20.834 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Ensure instance console log exists: /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 02 10:04:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:20.835 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:20.835 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:20.835 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:20.928 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v147: 177 pgs: 177 active+clean; 226 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 1.7 KiB/s wr, 137 op/s
Dec 02 10:04:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:21.451 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:21 np0005541914.localdomain ceph-mon[301710]: osdmap e108: 6 total, 6 up, 6 in
Dec 02 10:04:21 np0005541914.localdomain ceph-mon[301710]: pgmap v147: 177 pgs: 177 active+clean; 226 MiB data, 873 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 1.7 KiB/s wr, 137 op/s
Dec 02 10:04:22 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:04:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:04:22 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:22.189 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:04:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v148: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 193 op/s
Dec 02 10:04:22 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:22.975 281049 DEBUG nova.network.neutron [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Successfully updated port: 54433c73-7e5c-481c-b64c-19e9cfd6e56f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 02 10:04:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:04:22 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:22.992 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:04:22 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:22.993 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquired lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:04:22 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:22.993 281049 DEBUG nova.network.neutron [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 02 10:04:23 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:04:23 np0005541914.localdomain systemd[1]: tmp-crun.JIeEY4.mount: Deactivated successfully.
Dec 02 10:04:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:23.094 281049 DEBUG nova.network.neutron [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 02 10:04:23 np0005541914.localdomain podman[310295]: 2025-12-02 10:04:23.094618455 +0000 UTC m=+0.101471001 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 10:04:23 np0005541914.localdomain podman[310295]: 2025-12-02 10:04:23.134028317 +0000 UTC m=+0.140880883 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Dec 02 10:04:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:23.135 281049 DEBUG nova.compute.manager [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-changed-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:04:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:23.136 281049 DEBUG nova.compute.manager [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Refreshing instance network info cache due to event network-changed-54433c73-7e5c-481c-b64c-19e9cfd6e56f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 02 10:04:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:23.136 281049 DEBUG oslo_concurrency.lockutils [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:04:23 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:04:23 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:23.966 2 INFO neutron.agent.securitygroups_rpc [req-65998e7d-c26a-45a5-8676-fd86a74e40b3 req-1863187d-62f6-4dd8-8a63-a2eeaa9837d3 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['dfa589a5-e6b3-419a-9bd7-e5b7ecfd8cd6']
Dec 02 10:04:24 np0005541914.localdomain podman[310330]: 2025-12-02 10:04:24.072339262 +0000 UTC m=+0.059511361 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:04:24 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses
Dec 02 10:04:24 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:24 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:04:24 np0005541914.localdomain ceph-mon[301710]: pgmap v148: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 193 op/s
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.331 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.353 281049 DEBUG nova.network.neutron [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updating instance_info_cache with network_info: [{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.382 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Releasing lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.383 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Instance network_info: |[{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.384 281049 DEBUG oslo_concurrency.lockutils [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.384 281049 DEBUG nova.network.neutron [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Refreshing network info cache for port 54433c73-7e5c-481c-b64c-19e9cfd6e56f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.389 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Start _get_guest_xml network_info=[{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=<?>,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T10:01:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'image_id': 'd85e840d-fa56-497b-b5bd-b49584d3e97a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.397 281049 WARNING nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.401 281049 DEBUG nova.virt.libvirt.host [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.402 281049 DEBUG nova.virt.libvirt.host [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.404 281049 DEBUG nova.virt.libvirt.host [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.405 281049 DEBUG nova.virt.libvirt.host [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.405 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.406 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T10:01:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='82beb986-6d20-42dc-b738-1cef87dee30f',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=<?>,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T10:01:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.407 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.407 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.408 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.408 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.408 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.409 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.409 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.410 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.410 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.411 281049 DEBUG nova.virt.hardware [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.416 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:04:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:04:24 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2801301008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.868 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.906 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:04:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:24.912 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:04:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v149: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Dec 02 10:04:25 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2801301008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:04:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:25.268 281049 DEBUG nova.network.neutron [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updated VIF entry in instance network info cache for port 54433c73-7e5c-481c-b64c-19e9cfd6e56f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 02 10:04:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:25.268 281049 DEBUG nova.network.neutron [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updating instance_info_cache with network_info: [{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:04:25 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:04:25 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2351640434' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:04:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:25.402 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:04:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:25.404 281049 DEBUG nova.virt.libvirt.vif [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-39688497',display_name='tempest-LiveMigrationTest-server-39688497',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-livemigrationtest-server-39688497',id=8,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d048f19ff5fc47dc88162ef5f9cebe8b',ramdisk_id='',reservation_id='r-lnn0by93',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1345186206',owner_user_name='tempest-LiveMigrationTest-1345186206-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T10:04:19Z,user_data=None,user_id='ec20a6cceee246d6b46878df263d30a4',uuid=82e23ec3-1d57-4166-9ba0-839ded943a78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 02 10:04:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:25.405 281049 DEBUG nova.network.os_vif_util [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Converting VIF {"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 02 10:04:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:25.406 281049 DEBUG nova.network.os_vif_util [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 02 10:04:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:25.409 281049 DEBUG nova.objects.instance [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lazy-loading 'pci_devices' on Instance uuid 82e23ec3-1d57-4166-9ba0-839ded943a78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:04:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:25.463 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:26 np0005541914.localdomain ceph-mon[301710]: pgmap v149: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Dec 02 10:04:26 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2351640434' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.470 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] End _get_guest_xml xml=<domain type="kvm">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   <uuid>82e23ec3-1d57-4166-9ba0-839ded943a78</uuid>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   <name>instance-00000008</name>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   <memory>131072</memory>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   <vcpu>1</vcpu>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   <metadata>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <nova:name>tempest-LiveMigrationTest-server-39688497</nova:name>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <nova:creationTime>2025-12-02 10:04:24</nova:creationTime>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <nova:flavor name="m1.nano">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <nova:memory>128</nova:memory>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <nova:disk>1</nova:disk>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <nova:swap>0</nova:swap>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <nova:ephemeral>0</nova:ephemeral>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <nova:vcpus>1</nova:vcpus>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       </nova:flavor>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <nova:owner>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <nova:user uuid="ec20a6cceee246d6b46878df263d30a4">tempest-LiveMigrationTest-1345186206-project-member</nova:user>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <nova:project uuid="d048f19ff5fc47dc88162ef5f9cebe8b">tempest-LiveMigrationTest-1345186206</nova:project>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       </nova:owner>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <nova:root type="image" uuid="d85e840d-fa56-497b-b5bd-b49584d3e97a"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <nova:ports>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <nova:port uuid="54433c73-7e5c-481c-b64c-19e9cfd6e56f">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         </nova:port>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       </nova:ports>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     </nova:instance>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   </metadata>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   <sysinfo type="smbios">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <system>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <entry name="manufacturer">RDO</entry>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <entry name="product">OpenStack Compute</entry>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <entry name="serial">82e23ec3-1d57-4166-9ba0-839ded943a78</entry>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <entry name="uuid">82e23ec3-1d57-4166-9ba0-839ded943a78</entry>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <entry name="family">Virtual Machine</entry>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     </system>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   </sysinfo>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   <os>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <boot dev="hd"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <smbios mode="sysinfo"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   </os>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   <features>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <acpi/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <apic/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <vmcoreinfo/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   </features>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   <clock offset="utc">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <timer name="pit" tickpolicy="delay"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <timer name="hpet" present="no"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   </clock>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   <cpu mode="host-model" match="exact">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <topology sockets="1" cores="1" threads="1"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   </cpu>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   <devices>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <disk type="network" device="disk">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <driver type="raw" cache="none"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <source protocol="rbd" name="vms/82e23ec3-1d57-4166-9ba0-839ded943a78_disk">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.103" port="6789"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.104" port="6789"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.105" port="6789"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       </source>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <auth username="openstack">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <secret type="ceph" uuid="c7c8e171-a193-56fb-95fa-8879fcfa7074"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       </auth>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <target dev="vda" bus="virtio"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <disk type="network" device="cdrom">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <driver type="raw" cache="none"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <source protocol="rbd" name="vms/82e23ec3-1d57-4166-9ba0-839ded943a78_disk.config">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.103" port="6789"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.104" port="6789"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.105" port="6789"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       </source>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <auth username="openstack">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:         <secret type="ceph" uuid="c7c8e171-a193-56fb-95fa-8879fcfa7074"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       </auth>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <target dev="sda" bus="sata"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <interface type="ethernet">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <mac address="fa:16:3e:bb:b6:1c"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <model type="virtio"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <driver name="vhost" rx_queue_size="512"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <mtu size="1442"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <target dev="tap54433c73-7e"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     </interface>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <serial type="pty">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <log file="/var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/console.log" append="off"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     </serial>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <video>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <model type="virtio"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     </video>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <input type="tablet" bus="usb"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <rng model="virtio">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <backend model="random">/dev/urandom</backend>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     </rng>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <controller type="usb" index="0"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     <memballoon model="virtio">
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:       <stats period="10"/>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:     </memballoon>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:   </devices>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: </domain>
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
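The domain XML nova logged above defines two RBD-backed devices (the root disk on vda and the config-drive CD-ROM on sda), each pointing at the same three Ceph monitors. A minimal sketch of pulling that back out with Python's standard library, assuming the dump has been saved to a file named domain.xml (the filename is illustrative, not something nova writes):

import xml.etree.ElementTree as ET

# Parse the guest definition and list every RBD-backed disk with its image
# name and the monitor endpoints libvirt will contact.
tree = ET.parse("domain.xml")
for disk in tree.findall("./devices/disk"):
    source = disk.find("source")
    target = disk.find("target")
    if source is None or source.get("protocol") != "rbd":
        continue
    mons = ["%s:%s" % (h.get("name"), h.get("port")) for h in source.findall("host")]
    print(target.get("dev"), source.get("name"), ",".join(mons))
# For the guest above this prints vda and sda with their vms/... images and
# the 172.18.0.103-105:6789 monitors.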
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.471 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Preparing to wait for external event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.472 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.472 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.472 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.474 281049 DEBUG nova.virt.libvirt.vif [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-39688497',display_name='tempest-LiveMigrationTest-server-39688497',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-livemigrationtest-server-39688497',id=8,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d048f19ff5fc47dc88162ef5f9cebe8b',ramdisk_id='',reservation_id='r-lnn0by93',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1345186206',owner_user_name='tempest-LiveMigrationTest-1345186206-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T10:04:19Z,user_data=None,user_id='ec20a6cceee246d6b46878df263d30a4',uuid=82e23ec3-1d57-4166-9ba0-839ded943a78,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.474 281049 DEBUG nova.network.os_vif_util [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Converting VIF {"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.475 281049 DEBUG nova.network.os_vif_util [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.476 281049 DEBUG os_vif [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.476 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.477 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.478 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.482 281049 DEBUG oslo_concurrency.lockutils [req-49db8f41-4b33-4754-9c0e-a2e78eb50402 req-71aa8440-988b-49c0-9105-4b74f064778d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.483 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.484 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54433c73-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.485 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54433c73-7e, col_values=(('external_ids', {'iface-id': '54433c73-7e5c-481c-b64c-19e9cfd6e56f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:b6:1c', 'vm-uuid': '82e23ec3-1d57-4166-9ba0-839ded943a78'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.496 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.504 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.506 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.508 281049 INFO os_vif [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e')
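The ovsdbapp transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are what os-vif drives directly against the local ovsdb. A rough CLI equivalent, shown only as a sketch of what the transaction amounts to and not the code path nova actually takes, with every value taken from the log:

import subprocess

port = "tap54433c73-7e"
external_ids = {
    "iface-id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f",
    "iface-status": "active",
    "attached-mac": "fa:16:3e:bb:b6:1c",
    "vm-uuid": "82e23ec3-1d57-4166-9ba0-839ded943a78",
}
# --may-exist mirrors may_exist=True in AddPortCommand; the trailing "set"
# clause mirrors DbSetCommand on the Interface row.
cmd = ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
       "--", "set", "Interface", port]
cmd += ["external_ids:%s=%s" % (k, v) for k, v in external_ids.items()]
subprocess.run(cmd, check=True)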
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.566 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.566 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.566 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] No VIF found with MAC fa:16:3e:bb:b6:1c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.567 281049 INFO nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Using config drive
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.598 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.729 281049 INFO nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Creating config drive at /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/disk.config
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.735 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq5sfi6fb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.865 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq5sfi6fb" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.907 281049 DEBUG nova.storage.rbd_utils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] rbd image 82e23ec3-1d57-4166-9ba0-839ded943a78_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:04:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:26.911 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/disk.config 82e23ec3-1d57-4166-9ba0-839ded943a78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:04:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v150: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Dec 02 10:04:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:27.117 281049 DEBUG oslo_concurrency.processutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/disk.config 82e23ec3-1d57-4166-9ba0-839ded943a78_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:04:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:27.119 281049 INFO nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Deleting local config drive /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78/disk.config because it was imported into RBD.
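Because the instance uses RBD-backed storage, the config drive is built locally with mkisofs, imported into the vms pool, and the local ISO is removed. The same three steps, replayed as a sketch (every path, flag and image name is taken from the log lines above; running it outside this context would overwrite real data, so treat it purely as an illustration):

import os
import subprocess

instance = "82e23ec3-1d57-4166-9ba0-839ded943a78"
local_iso = "/var/lib/nova/instances/%s/disk.config" % instance

# 1. Build the config-drive ISO from the staged metadata directory.
subprocess.run([
    "/usr/bin/mkisofs", "-o", local_iso,
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2",
    "/tmp/tmpq5sfi6fb",  # temporary metadata directory named in the log
], check=True)

# 2. Import the ISO into Ceph, then drop the local copy as nova did.
subprocess.run([
    "rbd", "import", "--pool", "vms", local_iso,
    "%s_disk.config" % instance,
    "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
], check=True)
os.remove(local_iso)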
Dec 02 10:04:27 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:27.148 2 INFO neutron.agent.securitygroups_rpc [req-6d0b23d6-658e-4a79-96cf-b8ca52a56a83 req-dc334455-9197-4ae2-b241-5b724098ced8 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['aadc9cbe-01f3-422d-afff-735004537d1d']
Dec 02 10:04:27 np0005541914.localdomain kernel: device tap54433c73-7e entered promiscuous mode
Dec 02 10:04:27 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669867.1729] manager: (tap54433c73-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/21)
Dec 02 10:04:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:27.173 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:27 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:27Z|00077|binding|INFO|Claiming lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f for this chassis.
Dec 02 10:04:27 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:27Z|00078|binding|INFO|54433c73-7e5c-481c-b64c-19e9cfd6e56f: Claiming fa:16:3e:bb:b6:1c 10.100.0.13
Dec 02 10:04:27 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:27Z|00079|binding|INFO|Claiming lport ffcaba02-6808-4409-8458-941ca0af2e66 for this chassis.
Dec 02 10:04:27 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:27Z|00080|binding|INFO|ffcaba02-6808-4409-8458-941ca0af2e66: Claiming fa:16:3e:a7:75:fd 19.80.0.43
Dec 02 10:04:27 np0005541914.localdomain systemd-udevd[310481]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:04:27 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:27Z|00081|binding|INFO|Setting lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f ovn-installed in OVS
Dec 02 10:04:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:27.190 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:27 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669867.1981] device (tap54433c73-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 02 10:04:27 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669867.1995] device (tap54433c73-7e): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Dec 02 10:04:27 np0005541914.localdomain systemd-machined[202765]: New machine qemu-4-instance-00000008.
Dec 02 10:04:27 np0005541914.localdomain systemd[1]: Started Virtual Machine qemu-4-instance-00000008.
Dec 02 10:04:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:04:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:27.549 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764669867.5477786, 82e23ec3-1d57-4166-9ba0-839ded943a78 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:04:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:27.550 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] VM Started (Lifecycle Event)
Dec 02 10:04:28 np0005541914.localdomain ceph-mon[301710]: pgmap v150: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 175 op/s
Dec 02 10:04:28 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:28Z|00082|binding|INFO|Setting lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f up in Southbound
Dec 02 10:04:28 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:28Z|00083|binding|INFO|Setting lport ffcaba02-6808-4409-8458-941ca0af2e66 up in Southbound
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.329 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:75:fd 19.80.0.43'], port_security=['fa:16:3e:a7:75:fd 19.80.0.43'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['54433c73-7e5c-481c-b64c-19e9cfd6e56f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1664568330', 'neutron:cidrs': '19.80.0.43/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1664568330', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=ffcaba02-6808-4409-8458-941ca0af2e66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.331 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:b6:1c 10.100.0.13'], port_security=['fa:16:3e:bb:b6:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-146896978', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e23ec3-1d57-4166-9ba0-839ded943a78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-146896978', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e42abf-8647-4013-9c62-778191c64ad0, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=54433c73-7e5c-481c-b64c-19e9cfd6e56f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.332 159483 INFO neutron.agent.ovn.metadata.agent [-] Port ffcaba02-6808-4409-8458-941ca0af2e66 in datapath c40d86e4-7101-443b-abce-328f7d1ea40e bound to our chassis
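ovn-controller has claimed both logical ports for this chassis and set them up in the southbound database, which is what the metadata agent reacts to here. If a binding ever needs to be verified by hand, the state lives in the Port_Binding table; a small sketch using the ovn-sbctl CLI (it assumes the command runs somewhere with access to the OVN southbound DB, which is not necessarily this compute node):

import subprocess

lport = "54433c73-7e5c-481c-b64c-19e9cfd6e56f"
# Show the Port_Binding row for the parent port, including its chassis and
# up/down state.
subprocess.run(["ovn-sbctl", "find", "Port_Binding",
                "logical_port=%s" % lport], check=False)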
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.335 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port d8d3f12d-b617-495e-ba6c-02c2da59133c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.335 159483 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c40d86e4-7101-443b-abce-328f7d1ea40e
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.353 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[34e43b8a-a301-468f-856c-b4dc67888086]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.354 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc40d86e4-71 in ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.356 262550 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc40d86e4-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.356 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[f99a6360-4c2d-4bee-9b4f-a85f8dac6e20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.357 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1535f9b5-72a7-4a6f-a550-72ab0844dcf0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.368 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[129385b2-0cbd-4c53-99b1-5f4ed45eaaae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:28.374 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:04:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:28.381 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764669867.5479777, 82e23ec3-1d57-4166-9ba0-839ded943a78 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:04:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:28.381 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] VM Paused (Lifecycle Event)
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.381 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[cc159051-91c0-4d79-8e5f-01fecb59eb5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:28.398 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:04:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:28.403 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.406 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[11859cec-4356-455e-ab03-32a2e6675113]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.413 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[db17ba87-e5b3-49f3-91da-4f2be962e445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669868.4140] manager: (tapc40d86e4-70): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Dec 02 10:04:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:28.419 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] During sync_power_state the instance has a pending task (spawning). Skip.
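The sync message above compares the database value (0) with what libvirt reports (3). Those are nova's numeric power states from nova.compute.power_state; in this flow the guest is created paused and only resumed once the network-vif-plugged event arrives, so PAUSED during spawn is expected rather than a problem. For reference:

# Numeric power states as used in the log line above (constants from
# nova.compute.power_state; the gaps are historical values no longer used).
POWER_STATES = {
    0x00: "NOSTATE",    # DB power_state 0: record not yet updated for this boot
    0x01: "RUNNING",
    0x03: "PAUSED",     # VM power_state 3: guest started paused during spawn
    0x04: "SHUTDOWN",
    0x06: "CRASHED",
    0x07: "SUSPENDED",
}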
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.439 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[2d71cbaf-07fa-4e6c-8f64-c279a66bfddc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.442 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7c7530-586a-47c1-84d9-cda5acd17c0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapc40d86e4-71: link becomes ready
Dec 02 10:04:28 np0005541914.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tapc40d86e4-70: link becomes ready
Dec 02 10:04:28 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669868.4602] device (tapc40d86e4-70): carrier: link connected
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.466 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[5802b722-bc38-4723-8d17-0f5a8480c462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.482 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d86255-fc0c-4ec4-8398-2c7d824b6354]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc40d86e4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:45:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1203262, 'reachable_time': 28919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310560, 'error': None, 'target': 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.497 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[55930abe-45c3-434c-a4a4-3d3efcf0fadb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:457f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1203262, 'tstamp': 1203262}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310561, 'error': None, 'target': 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.513 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[2d311e9c-6775-4eec-bbeb-d038b0f94b41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc40d86e4-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:45:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1203262, 'reachable_time': 28919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310562, 'error': None, 'target': 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.541 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[289b3590-6db4-4164-9945-6c36d02feb0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.594 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[b133aec6-362d-4b51-8c98-78584d210213]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.596 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc40d86e4-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.597 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.597 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc40d86e4-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:04:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:28.645 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:28 np0005541914.localdomain kernel: device tapc40d86e4-70 entered promiscuous mode
Dec 02 10:04:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:28.648 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.650 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc40d86e4-70, col_values=(('external_ids', {'iface-id': '60398627-924e-4353-b9ee-b86c24b6fc87'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:04:28 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:28Z|00084|binding|INFO|Releasing lport 60398627-924e-4353-b9ee-b86c24b6fc87 from this chassis (sb_readonly=0)
Dec 02 10:04:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:28.652 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:28.663 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.664 159483 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c40d86e4-7101-443b-abce-328f7d1ea40e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c40d86e4-7101-443b-abce-328f7d1ea40e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.665 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1b9d51e4-e42c-4c2c-998c-a7a75f12b091]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.666 159483 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: global
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     log         /dev/log local0 debug
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     log-tag     haproxy-metadata-proxy-c40d86e4-7101-443b-abce-328f7d1ea40e
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     user        root
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     group       root
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     maxconn     1024
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     pidfile     /var/lib/neutron/external/pids/c40d86e4-7101-443b-abce-328f7d1ea40e.pid.haproxy
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     daemon
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: defaults
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     log global
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     mode http
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     option httplog
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     option dontlognull
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     option http-server-close
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     option forwardfor
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     retries                 3
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout http-request    30s
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout connect         30s
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout client          32s
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout server          32s
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout http-keep-alive 30s
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: listen listener
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     bind 169.254.169.254:80
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     server metadata /var/lib/neutron/metadata_proxy
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:     http-request add-header X-OVN-Network-ID c40d86e4-7101-443b-abce-328f7d1ea40e
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 02 10:04:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:28.666 159483 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'env', 'PROCESS_TAG=haproxy-c40d86e4-7101-443b-abce-328f7d1ea40e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c40d86e4-7101-443b-abce-328f7d1ea40e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
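The rendered haproxy config binds 169.254.169.254:80 inside the ovnmeta namespace, tags each request with the X-OVN-Network-ID header and forwards it to the agent's unix socket at /var/lib/neutron/metadata_proxy. A purely illustrative way to confirm the listener answers once haproxy is up (it assumes curl is available on the host; the HTTP status returned depends on whether the source address maps to a known instance, so even a 400/404 proves the proxy is reachable):

import subprocess

ns = "ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e"  # namespace from the log
subprocess.run(
    ["ip", "netns", "exec", ns, "curl", "-si",
     "http://169.254.169.254/openstack/latest/meta_data.json"],
    check=False,
)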
Dec 02 10:04:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v151: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Dec 02 10:04:29 np0005541914.localdomain podman[310595]: 
Dec 02 10:04:29 np0005541914.localdomain podman[310595]: 2025-12-02 10:04:29.064002238 +0000 UTC m=+0.074864483 container create 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:04:29 np0005541914.localdomain systemd[1]: Started libpod-conmon-2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68.scope.
Dec 02 10:04:29 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:04:29 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba06f5d13e82d475898b43f8dfdffe45494dd6b8060a149bf598427f2a15c274/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:04:29 np0005541914.localdomain podman[310595]: 2025-12-02 10:04:29.028214958 +0000 UTC m=+0.039077163 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 02 10:04:29 np0005541914.localdomain podman[310595]: 2025-12-02 10:04:29.129305056 +0000 UTC m=+0.140167271 container init 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 02 10:04:29 np0005541914.localdomain podman[310595]: 2025-12-02 10:04:29.14112514 +0000 UTC m=+0.151987345 container start 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:04:29 np0005541914.localdomain neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [NOTICE]   (310613) : New worker (310615) forked
Dec 02 10:04:29 np0005541914.localdomain neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [NOTICE]   (310613) : Loading success.
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.212 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 54433c73-7e5c-481c-b64c-19e9cfd6e56f in datapath 13bbad22-ab61-4b1f-849e-c651aa8f3297 unbound from our chassis
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.216 159483 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 13bbad22-ab61-4b1f-849e-c651aa8f3297
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.227 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a4abf6ba-8307-4911-a2a2-d22559ef58f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.229 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap13bbad22-a1 in ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.231 262550 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap13bbad22-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.231 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[db4fd512-0df2-4511-b205-2ce764eb0b94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.232 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[251a3dd1-11a3-45f9-9d98-b25a7d0355f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.243 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[52ad341a-e7a1-483f-8d71-994cd2ded410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.258 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[0469064d-3016-48be-95b9-d1baf1653b86]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.289 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[69348485-606f-4752-8f11-933bd427b21f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669869.2981] manager: (tap13bbad22-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.297 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d695257d-6e64-4db8-b8f5-fb063d30639c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain systemd-udevd[310551]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.336 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[8edf0c21-e2b2-4f40-8153-86a87f97e399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.339 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[e57adc1d-e6cc-45ae-ab66-eec140807700]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap13bbad22-a0: link becomes ready
Dec 02 10:04:29 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669869.3678] device (tap13bbad22-a0): carrier: link connected
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.373 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[9ceb333a-854d-4c23-891e-3fcd31b05796]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.392 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[2905c7ba-8aea-4c8b-8834-2d057d976c72]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13bbad22-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:43:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1203353, 'reachable_time': 32975, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310634, 'error': None, 'target': 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.409 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[de33862f-e9c2-43ab-8e0b-0fcba5709ed4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:4317'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1203353, 'tstamp': 1203353}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310635, 'error': None, 'target': 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.425 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[80bff650-6974-4189-b03e-45806d0f70b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap13bbad22-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:0f:43:17'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1203353, 'reachable_time': 32975, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310636, 'error': None, 'target': 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.455 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[34e75e57-c6d8-4681-8dd7-a853b7ccfa3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.517 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1eac4cfe-b478-48c1-b2ea-49b03dcb6c99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.519 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13bbad22-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.521 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.522 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13bbad22-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:04:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:29.525 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:29 np0005541914.localdomain kernel: device tap13bbad22-a0 entered promiscuous mode
Dec 02 10:04:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:29.529 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.532 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap13bbad22-a0, col_values=(('external_ids', {'iface-id': '202be55f-4a2f-4e8a-884e-d4a72a4d525d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:04:29 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:29Z|00085|binding|INFO|Releasing lport 202be55f-4a2f-4e8a-884e-d4a72a4d525d from this chassis (sb_readonly=0)
Dec 02 10:04:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:29.534 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:29.549 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.551 159483 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/13bbad22-ab61-4b1f-849e-c651aa8f3297.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/13bbad22-ab61-4b1f-849e-c651aa8f3297.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.553 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[abb0cb8c-5fee-4fa6-a004-88ae43ec6095]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.553 159483 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: global
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     log         /dev/log local0 debug
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     log-tag     haproxy-metadata-proxy-13bbad22-ab61-4b1f-849e-c651aa8f3297
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     user        root
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     group       root
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     maxconn     1024
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     pidfile     /var/lib/neutron/external/pids/13bbad22-ab61-4b1f-849e-c651aa8f3297.pid.haproxy
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     daemon
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: defaults
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     log global
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     mode http
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     option httplog
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     option dontlognull
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     option http-server-close
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     option forwardfor
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     retries                 3
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout http-request    30s
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout connect         30s
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout client          32s
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout server          32s
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout http-keep-alive 30s
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: listen listener
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     bind 169.254.169.254:80
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     server metadata /var/lib/neutron/metadata_proxy
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:     http-request add-header X-OVN-Network-ID 13bbad22-ab61-4b1f-849e-c651aa8f3297
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 02 10:04:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:29.555 159483 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'env', 'PROCESS_TAG=haproxy-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/13bbad22-ab61-4b1f-849e-c651aa8f3297.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 02 10:04:29 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:29.591 2 INFO neutron.agent.securitygroups_rpc [req-1c594721-186d-4097-a94c-c620e0979c63 req-4b6914f0-ee8c-4772-ac7a-a3075974ee64 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['41f7c9c8-7668-4604-9cee-64c2ce6fa2c0']
Dec 02 10:04:29 np0005541914.localdomain podman[310668]: 
Dec 02 10:04:30 np0005541914.localdomain podman[310668]: 2025-12-02 10:04:30.009910798 +0000 UTC m=+0.101981978 container create ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:04:30 np0005541914.localdomain systemd[1]: Started libpod-conmon-ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe.scope.
Dec 02 10:04:30 np0005541914.localdomain podman[310668]: 2025-12-02 10:04:29.961156868 +0000 UTC m=+0.053228088 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 02 10:04:30 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:04:30 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2da036c8c12fc8adba381824ea5735259dd4f5bbe0a18fc50f819cf9bed9c07/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:04:30 np0005541914.localdomain podman[310668]: 2025-12-02 10:04:30.085960796 +0000 UTC m=+0.178031946 container init ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 02 10:04:30 np0005541914.localdomain podman[310668]: 2025-12-02 10:04:30.095478609 +0000 UTC m=+0.187549769 container start ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:04:30 np0005541914.localdomain neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [NOTICE]   (310686) : New worker (310688) forked
Dec 02 10:04:30 np0005541914.localdomain neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [NOTICE]   (310686) : Loading success.
Dec 02 10:04:30 np0005541914.localdomain ceph-mon[301710]: pgmap v151: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Dec 02 10:04:30 np0005541914.localdomain dnsmasq[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/addn_hosts - 0 addresses
Dec 02 10:04:30 np0005541914.localdomain dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/host
Dec 02 10:04:30 np0005541914.localdomain podman[310712]: 2025-12-02 10:04:30.310695527 +0000 UTC m=+0.060519342 container kill eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:04:30 np0005541914.localdomain dnsmasq-dhcp[309453]: read /var/lib/neutron/dhcp/e59f1a37-9713-45f0-9ce4-adafcc25b854/opts
Dec 02 10:04:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:30.493 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:30 np0005541914.localdomain kernel: device tap7466a138-c4 left promiscuous mode
Dec 02 10:04:30 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:30Z|00086|binding|INFO|Releasing lport 7466a138-c45f-458b-a865-8c5d3b978b39 from this chassis (sb_readonly=0)
Dec 02 10:04:30 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:30Z|00087|binding|INFO|Setting lport 7466a138-c45f-458b-a865-8c5d3b978b39 down in Southbound
Dec 02 10:04:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:30.502 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-e59f1a37-9713-45f0-9ce4-adafcc25b854', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e59f1a37-9713-45f0-9ce4-adafcc25b854', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1db4f455ea047e3b37458f6d2c5e699', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=507aa10c-3500-464e-ac80-7fecb3c41257, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=7466a138-c45f-458b-a865-8c5d3b978b39) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:04:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:30.504 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 7466a138-c45f-458b-a865-8c5d3b978b39 in datapath e59f1a37-9713-45f0-9ce4-adafcc25b854 unbound from our chassis
Dec 02 10:04:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:30.508 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e59f1a37-9713-45f0-9ce4-adafcc25b854, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:04:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:30.509 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[66b4f4ab-6a27-4b7d-9562-61b38d9e5c26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:30.521 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v152: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Dec 02 10:04:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:31.529 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:31.533 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:31 np0005541914.localdomain ceph-mon[301710]: pgmap v152: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Dec 02 10:04:32 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:32.190 2 INFO neutron.agent.securitygroups_rpc [req-10b28dbb-d460-47e0-a99a-7ab94b16b5dd req-5be6a150-24be-4b75-af16-d1e63344c43d 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['20cbc49d-f7c3-4e2e-87e6-586884a8dc4b']
Dec 02 10:04:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:04:32 np0005541914.localdomain podman[310752]: 2025-12-02 10:04:32.785043569 +0000 UTC m=+0.061665577 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:04:32 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses
Dec 02 10:04:32 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:32 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:04:32 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:32Z|00088|binding|INFO|Releasing lport 202be55f-4a2f-4e8a-884e-d4a72a4d525d from this chassis (sb_readonly=0)
Dec 02 10:04:32 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:32Z|00089|binding|INFO|Releasing lport 60398627-924e-4353-b9ee-b86c24b6fc87 from this chassis (sb_readonly=0)
Dec 02 10:04:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:32.898 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:32 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v153: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Dec 02 10:04:33 np0005541914.localdomain podman[310792]: 2025-12-02 10:04:33.315831473 +0000 UTC m=+0.067049973 container kill eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:04:33 np0005541914.localdomain dnsmasq[309453]: exiting on receipt of SIGTERM
Dec 02 10:04:33 np0005541914.localdomain systemd[1]: libpod-eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461.scope: Deactivated successfully.
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.329 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:33 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:33.329 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:04:33 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:33.332 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:04:33 np0005541914.localdomain podman[310804]: 2025-12-02 10:04:33.403326393 +0000 UTC m=+0.067053983 container died eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 10:04:33 np0005541914.localdomain systemd[1]: tmp-crun.SxcIUA.mount: Deactivated successfully.
Dec 02 10:04:33 np0005541914.localdomain podman[310804]: 2025-12-02 10:04:33.443279822 +0000 UTC m=+0.107007352 container cleanup eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:04:33 np0005541914.localdomain systemd[1]: libpod-conmon-eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461.scope: Deactivated successfully.
Dec 02 10:04:33 np0005541914.localdomain podman[310806]: 2025-12-02 10:04:33.488906535 +0000 UTC m=+0.143499454 container remove eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e59f1a37-9713-45f0-9ce4-adafcc25b854, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:04:33 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:33.516 262347 INFO neutron.agent.dhcp.agent [None req-917017cd-852b-4ae7-8660-9d6b1c3da2b8 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.526 281049 DEBUG nova.compute.manager [req-896f48c4-9b7f-4309-899f-671a9f2dc67b req-02d0c9cf-a674-4bc7-9006-095024c54e05 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.527 281049 DEBUG oslo_concurrency.lockutils [req-896f48c4-9b7f-4309-899f-671a9f2dc67b req-02d0c9cf-a674-4bc7-9006-095024c54e05 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.528 281049 DEBUG oslo_concurrency.lockutils [req-896f48c4-9b7f-4309-899f-671a9f2dc67b req-02d0c9cf-a674-4bc7-9006-095024c54e05 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.528 281049 DEBUG oslo_concurrency.lockutils [req-896f48c4-9b7f-4309-899f-671a9f2dc67b req-02d0c9cf-a674-4bc7-9006-095024c54e05 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.529 281049 DEBUG nova.compute.manager [req-896f48c4-9b7f-4309-899f-671a9f2dc67b req-02d0c9cf-a674-4bc7-9006-095024c54e05 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Processing event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.530 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.535 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764669873.5349603, 82e23ec3-1d57-4166-9ba0-839ded943a78 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.536 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] VM Resumed (Lifecycle Event)
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.540 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.547 281049 INFO nova.virt.libvirt.driver [-] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Instance spawned successfully.
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.548 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.556 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.567 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.579 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.580 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.580 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.581 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.582 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.583 281049 DEBUG nova.virt.libvirt.driver [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.592 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 02 10:04:33 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:33.608 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:04:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:04:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.638 281049 INFO nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Took 13.98 seconds to spawn the instance on the hypervisor.
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.640 281049 DEBUG nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:04:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:04:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 160942 "" "Go-http-client/1.1"
Dec 02 10:04:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:04:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20630 "" "Go-http-client/1.1"
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.725 281049 INFO nova.compute.manager [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Took 14.93 seconds to build instance.
Dec 02 10:04:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:33.743 281049 DEBUG oslo_concurrency.lockutils [None req-64c8b1de-be13-42cb-88f0-b3cbf22b5810 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-b0d093117dd79caf19187febcd7ccef397c254025e14d4a134627538b5ac62e5-merged.mount: Deactivated successfully.
Dec 02 10:04:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eb17c2a09156f0110f30ce386a62fe87266b4900d8ad79be27255fb75185e461-userdata-shm.mount: Deactivated successfully.
Dec 02 10:04:33 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2de59f1a37\x2d9713\x2d45f0\x2d9ce4\x2dadafcc25b854.mount: Deactivated successfully.
Dec 02 10:04:33 np0005541914.localdomain systemd[1]: tmp-crun.8QvC4R.mount: Deactivated successfully.
Dec 02 10:04:33 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 5 addresses
Dec 02 10:04:33 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:33 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:04:33 np0005541914.localdomain podman[310850]: 2025-12-02 10:04:33.830278943 +0000 UTC m=+0.074549793 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:04:34 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:34Z|00090|binding|INFO|Releasing lport 202be55f-4a2f-4e8a-884e-d4a72a4d525d from this chassis (sb_readonly=0)
Dec 02 10:04:34 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:34Z|00091|binding|INFO|Releasing lport 60398627-924e-4353-b9ee-b86c24b6fc87 from this chassis (sb_readonly=0)
Dec 02 10:04:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:34.099 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:34 np0005541914.localdomain ceph-mon[301710]: pgmap v153: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Dec 02 10:04:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v154: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Dec 02 10:04:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:35.684 281049 DEBUG nova.compute.manager [req-a694283e-793a-4e94-a08e-ffe052892600 req-f0840fdc-c66f-42c3-a31d-5f212cebf12d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:04:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:35.685 281049 DEBUG oslo_concurrency.lockutils [req-a694283e-793a-4e94-a08e-ffe052892600 req-f0840fdc-c66f-42c3-a31d-5f212cebf12d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:35.685 281049 DEBUG oslo_concurrency.lockutils [req-a694283e-793a-4e94-a08e-ffe052892600 req-f0840fdc-c66f-42c3-a31d-5f212cebf12d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:35.686 281049 DEBUG oslo_concurrency.lockutils [req-a694283e-793a-4e94-a08e-ffe052892600 req-f0840fdc-c66f-42c3-a31d-5f212cebf12d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:35.686 281049 DEBUG nova.compute.manager [req-a694283e-793a-4e94-a08e-ffe052892600 req-f0840fdc-c66f-42c3-a31d-5f212cebf12d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] No waiting events found dispatching network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:04:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:35.686 281049 WARNING nova.compute.manager [req-a694283e-793a-4e94-a08e-ffe052892600 req-f0840fdc-c66f-42c3-a31d-5f212cebf12d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received unexpected event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f for instance with vm_state active and task_state None.
Dec 02 10:04:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:35.903 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:36 np0005541914.localdomain ceph-mon[301710]: pgmap v154: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Dec 02 10:04:36 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:36.335 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:04:36 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:36.406 2 INFO neutron.agent.securitygroups_rpc [req-7ec4157a-3973-4fe9-90a5-6b7e95187ed9 req-9adda286-3e5c-4f67-99d9-e6d6658a3dd8 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['ec37aab1-8e3e-42dd-a42d-6454010a3bb1']
Dec 02 10:04:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:36.534 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:36 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:36.790 2 INFO neutron.agent.securitygroups_rpc [req-41dd90c2-f92d-4e4d-a9a2-5512726d06ed req-eda4abe0-dc4d-48d0-a211-5598e3a12357 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['ec37aab1-8e3e-42dd-a42d-6454010a3bb1']
Dec 02 10:04:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:04:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:04:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:04:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:04:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:04:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:04:36 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v155: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Dec 02 10:04:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:04:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:37.494 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Check if temp file /var/lib/nova/instances/tmpvcgqfy3k exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Dec 02 10:04:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:37.494 281049 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvcgqfy3k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='82e23ec3-1d57-4166-9ba0-839ded943a78',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Dec 02 10:04:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:04:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:04:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:04:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:04:38 np0005541914.localdomain ceph-mon[301710]: pgmap v155: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Dec 02 10:04:38 np0005541914.localdomain systemd[1]: tmp-crun.TDB7oZ.mount: Deactivated successfully.
Dec 02 10:04:38 np0005541914.localdomain podman[310871]: 2025-12-02 10:04:38.108318733 +0000 UTC m=+0.104962718 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 10:04:38 np0005541914.localdomain podman[310872]: 2025-12-02 10:04:38.117718913 +0000 UTC m=+0.110200591 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 10:04:38 np0005541914.localdomain podman[310871]: 2025-12-02 10:04:38.123938064 +0000 UTC m=+0.120582069 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:04:38 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:04:38 np0005541914.localdomain podman[310870]: 2025-12-02 10:04:38.1855884 +0000 UTC m=+0.185360441 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:04:38 np0005541914.localdomain podman[310872]: 2025-12-02 10:04:38.190901843 +0000 UTC m=+0.183383511 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 02 10:04:38 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:04:38 np0005541914.localdomain podman[310870]: 2025-12-02 10:04:38.200686044 +0000 UTC m=+0.200458085 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:04:38 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:04:38 np0005541914.localdomain podman[310869]: 2025-12-02 10:04:38.244031577 +0000 UTC m=+0.243350265 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 10:04:38 np0005541914.localdomain podman[310869]: 2025-12-02 10:04:38.252940271 +0000 UTC m=+0.252258949 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:04:38 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:04:38 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:38.318 2 INFO neutron.agent.securitygroups_rpc [req-8e540b7a-da71-4acd-ab56-fd3bce480c0a req-799fda44-ad0c-42d1-806d-41b3bc34424c 1583e961fefc48749f39fdf4f81945c8 a0475908295e475d873fdbfd8cc82cea - - default default] Security group rule updated ['ec37aab1-8e3e-42dd-a42d-6454010a3bb1']
Dec 02 10:04:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v156: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 02 10:04:40 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:40.004 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:39Z, description=, device_id=279e244d-14ba-4911-a425-d38d92768269, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c57b20>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c578b0>], id=2a5f6f54-623d-4412-84a8-0e113f2d185f, ip_allocation=immediate, mac_address=fa:16:3e:a8:b0:59, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=692, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:39Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:04:40 np0005541914.localdomain ceph-mon[301710]: pgmap v156: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 02 10:04:40 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses
Dec 02 10:04:40 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:40 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:04:40 np0005541914.localdomain podman[310966]: 2025-12-02 10:04:40.216088413 +0000 UTC m=+0.055119416 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:04:40 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:40.515 262347 INFO neutron.agent.dhcp.agent [None req-da977d56-85d3-4320-851d-66b5e47d3862 - - - - - -] DHCP configuration for ports {'2a5f6f54-623d-4412-84a8-0e113f2d185f'} is completed
Dec 02 10:04:40 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:40.706 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:40Z, description=, device_id=11e16c5e-46e1-4a00-8cde-eb7c634beb6e, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c7ab50>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c7afd0>], id=1f543bfe-6c57-4a47-ae94-6dbd02322d8e, ip_allocation=immediate, mac_address=fa:16:3e:50:3c:9e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=699, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:40Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:04:40 np0005541914.localdomain podman[311005]: 2025-12-02 10:04:40.931808273 +0000 UTC m=+0.060000976 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:04:40 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses
Dec 02 10:04:40 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:40 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:04:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v157: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 02 10:04:41 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3361271791' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:41 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:41.292 262347 INFO neutron.agent.dhcp.agent [None req-70668fb7-61a1-4972-b9d9-6eb4a52faef0 - - - - - -] DHCP configuration for ports {'1f543bfe-6c57-4a47-ae94-6dbd02322d8e'} is completed
Dec 02 10:04:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:41.536 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 02 10:04:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:41.537 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:41.537 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Dec 02 10:04:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:41.538 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 02 10:04:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:41.538 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 02 10:04:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:41.541 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:41.762 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:04:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:04:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:04:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:04:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:04:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:04:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:04:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:04:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:04:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:04:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:04:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:04:42 np0005541914.localdomain ceph-mon[301710]: pgmap v157: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 02 10:04:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:04:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:42.733 281049 DEBUG nova.compute.manager [req-e94686d1-8b87-43bc-bd34-00a93eed8d94 req-53063fa0-11e1-411e-aa93-01731abbaf9d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-unplugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:04:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:42.733 281049 DEBUG oslo_concurrency.lockutils [req-e94686d1-8b87-43bc-bd34-00a93eed8d94 req-53063fa0-11e1-411e-aa93-01731abbaf9d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:42.734 281049 DEBUG oslo_concurrency.lockutils [req-e94686d1-8b87-43bc-bd34-00a93eed8d94 req-53063fa0-11e1-411e-aa93-01731abbaf9d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:42.734 281049 DEBUG oslo_concurrency.lockutils [req-e94686d1-8b87-43bc-bd34-00a93eed8d94 req-53063fa0-11e1-411e-aa93-01731abbaf9d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:42.735 281049 DEBUG nova.compute.manager [req-e94686d1-8b87-43bc-bd34-00a93eed8d94 req-53063fa0-11e1-411e-aa93-01731abbaf9d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] No waiting events found dispatching network-vif-unplugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:04:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:42.735 281049 DEBUG nova.compute.manager [req-e94686d1-8b87-43bc-bd34-00a93eed8d94 req-53063fa0-11e1-411e-aa93-01731abbaf9d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-unplugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 02 10:04:42 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v158: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 02 10:04:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:43.482 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:44 np0005541914.localdomain ceph-mon[301710]: pgmap v158: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 02 10:04:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:44.910 281049 DEBUG nova.compute.manager [req-53a04abe-c550-4709-beac-434f3fe55ddf req-a319bd0c-c9d4-4135-8840-1a64d72841ea dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:04:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:44.911 281049 DEBUG oslo_concurrency.lockutils [req-53a04abe-c550-4709-beac-434f3fe55ddf req-a319bd0c-c9d4-4135-8840-1a64d72841ea dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:44.911 281049 DEBUG oslo_concurrency.lockutils [req-53a04abe-c550-4709-beac-434f3fe55ddf req-a319bd0c-c9d4-4135-8840-1a64d72841ea dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:44.912 281049 DEBUG oslo_concurrency.lockutils [req-53a04abe-c550-4709-beac-434f3fe55ddf req-a319bd0c-c9d4-4135-8840-1a64d72841ea dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:44.912 281049 DEBUG nova.compute.manager [req-53a04abe-c550-4709-beac-434f3fe55ddf req-a319bd0c-c9d4-4135-8840-1a64d72841ea dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] No waiting events found dispatching network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:04:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:44.913 281049 WARNING nova.compute.manager [req-53a04abe-c550-4709-beac-434f3fe55ddf req-a319bd0c-c9d4-4135-8840-1a64d72841ea dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received unexpected event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f for instance with vm_state active and task_state migrating.
Dec 02 10:04:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:44.932 281049 INFO nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Took 6.60 seconds for pre_live_migration on destination host np0005541913.localdomain.
Dec 02 10:04:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:44.932 281049 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 02 10:04:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:44.971 281049 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=12288,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvcgqfy3k',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='82e23ec3-1d57-4166-9ba0-839ded943a78',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(f83e1b81-4647-4642-b7c4-b4f369bef051),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Dec 02 10:04:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:44.975 281049 DEBUG nova.objects.instance [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lazy-loading 'migration_context' on Instance uuid 82e23ec3-1d57-4166-9ba0-839ded943a78 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:04:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:44.976 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Dec 02 10:04:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v159: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 02 10:04:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:44.979 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Dec 02 10:04:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:44.979 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:45.065 281049 DEBUG nova.virt.libvirt.vif [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T10:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-39688497',display_name='tempest-LiveMigrationTest-server-39688497',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-livemigrationtest-server-39688497',id=8,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T10:04:33Z,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d048f19ff5fc47dc88162ef5f9cebe8b',ramdisk_id='',reservation_id='r-lnn0by93',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1345186206',owner_user_name='tempest-LiveMigrationTest-1345186206-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T10:04:33Z,user_data=None,user_id='ec20a6cceee246d6b46878df263d30a4',uuid=82e23ec3-1d57-4166-9ba0-839ded943a78,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:45.066 281049 DEBUG nova.network.os_vif_util [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Converting VIF {"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:45.067 281049 DEBUG nova.network.os_vif_util [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:45.068 281049 DEBUG nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updating guest XML with vif config: <interface type="ethernet">
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]:   <mac address="fa:16:3e:bb:b6:1c"/>
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]:   <model type="virtio"/>
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]:   <driver name="vhost" rx_queue_size="512"/>
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]:   <mtu size="1442"/>
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]:   <target dev="tap54433c73-7e"/>
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]: </interface>
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:45.069 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:45.482 281049 DEBUG nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:45.483 281049 INFO nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Increasing downtime to 50 ms after 0 sec elapsed time
Dec 02 10:04:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:45.721 281049 INFO nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.226 281049 DEBUG nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.227 281049 DEBUG nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 02 10:04:46 np0005541914.localdomain ceph-mon[301710]: pgmap v159: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.540 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.730 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764669886.730069, 82e23ec3-1d57-4166-9ba0-839ded943a78 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.731 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] VM Paused (Lifecycle Event)
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.751 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.757 281049 DEBUG nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.758 281049 DEBUG nova.virt.libvirt.migration [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.758 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.778 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] During sync_power_state the instance has a pending task (migrating). Skip.
Dec 02 10:04:46 np0005541914.localdomain kernel: device tap54433c73-7e left promiscuous mode
Dec 02 10:04:46 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669886.9750] device (tap54433c73-7e): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Dec 02 10:04:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v160: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.983 281049 DEBUG nova.compute.manager [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-changed-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:04:46 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:46Z|00092|binding|INFO|Releasing lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f from this chassis (sb_readonly=0)
Dec 02 10:04:46 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:46Z|00093|binding|INFO|Setting lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f down in Southbound
Dec 02 10:04:46 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:46Z|00094|binding|INFO|Releasing lport ffcaba02-6808-4409-8458-941ca0af2e66 from this chassis (sb_readonly=0)
Dec 02 10:04:46 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:46Z|00095|binding|INFO|Setting lport ffcaba02-6808-4409-8458-941ca0af2e66 down in Southbound
Dec 02 10:04:46 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:46Z|00096|binding|INFO|Removing iface tap54433c73-7e ovn-installed in OVS
Dec 02 10:04:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.984 281049 DEBUG nova.compute.manager [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Refreshing instance network info cache due to event network-changed-54433c73-7e5c-481c-b64c-19e9cfd6e56f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.984 281049 DEBUG oslo_concurrency.lockutils [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.984 281049 DEBUG oslo_concurrency.lockutils [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.985 281049 DEBUG nova.network.neutron [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Refreshing network info cache for port 54433c73-7e5c-481c-b64c-19e9cfd6e56f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 02 10:04:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:46.986 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:04:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:46.992 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:75:fd 19.80.0.43'], port_security=['fa:16:3e:a7:75:fd 19.80.0.43'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['54433c73-7e5c-481c-b64c-19e9cfd6e56f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1664568330', 'neutron:cidrs': '19.80.0.43/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1664568330', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=ffcaba02-6808-4409-8458-941ca0af2e66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:04:46 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:46Z|00097|binding|INFO|Releasing lport 202be55f-4a2f-4e8a-884e-d4a72a4d525d from this chassis (sb_readonly=0)
Dec 02 10:04:46 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:46Z|00098|binding|INFO|Releasing lport 60398627-924e-4353-b9ee-b86c24b6fc87 from this chassis (sb_readonly=0)
Dec 02 10:04:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:46.994 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:b6:1c 10.100.0.13'], port_security=['fa:16:3e:bb:b6:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain,np0005541913.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-146896978', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e23ec3-1d57-4166-9ba0-839ded943a78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-146896978', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e42abf-8647-4013-9c62-778191c64ad0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=54433c73-7e5c-481c-b64c-19e9cfd6e56f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
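[editor note] The two "Matched UPDATE" entries above come from ovsdbapp's row-event machinery: the metadata agent registers an event class keyed on the Port_Binding table, and ovsdbapp invokes it whenever a matching row changes. A rough sketch of such an event class, assuming ovsdbapp is importable; the class body is illustrative only, not the agent's actual implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fires on updates to Port_Binding rows, as in the log above."""

        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # matching the repr printed by ovsdbapp in the log entries.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # The real agent inspects row.chassis/old.chassis to decide
            # whether the port was bound to or unbound from this chassis.
            print(event, row.logical_port)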
Dec 02 10:04:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:46.995 159483 INFO neutron.agent.ovn.metadata.agent [-] Port ffcaba02-6808-4409-8458-941ca0af2e66 in datapath c40d86e4-7101-443b-abce-328f7d1ea40e unbound from our chassis
Dec 02 10:04:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:46.998 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port d8d3f12d-b617-495e-ba6c-02c2da59133c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 02 10:04:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:46.998 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c40d86e4-7101-443b-abce-328f7d1ea40e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.004 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[035bc1f9-32f3-4900-a3bc-42982e454188]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.006 159483 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e namespace which is not needed anymore
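[editor note] The cleanup announced above ends with the privileged remove_netns call logged further below (neutron's privileged ip_lib uses pyroute2 for this). Roughly, it amounts to deleting the network namespace by name; a hedged stand-alone sketch, to be run as root, with the namespace name taken from the log:

    from pyroute2 import netns

    ns_name = "ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e"

    # Delete the metadata namespace by name if it still exists; in the log
    # this is performed on the agent's behalf by the oslo.privsep daemon.
    if ns_name in netns.listnetns():
        netns.remove(ns_name)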
Dec 02 10:04:47 np0005541914.localdomain systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec 02 10:04:47 np0005541914.localdomain systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000008.scope: Consumed 12.799s CPU time.
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.037 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:47 np0005541914.localdomain systemd-machined[202765]: Machine qemu-4-instance-00000008 terminated.
Dec 02 10:04:47 np0005541914.localdomain podman[311033]: 2025-12-02 10:04:47.077322013 +0000 UTC m=+0.074521283 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.077 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:47 np0005541914.localdomain podman[311033]: 2025-12-02 10:04:47.085332739 +0000 UTC m=+0.082532009 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:04:47 np0005541914.localdomain virtqemud[228953]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/82e23ec3-1d57-4166-9ba0-839ded943a78_disk: No such file or directory
Dec 02 10:04:47 np0005541914.localdomain virtqemud[228953]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/82e23ec3-1d57-4166-9ba0-839ded943a78_disk: No such file or directory
Dec 02 10:04:47 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:04:47 np0005541914.localdomain kernel: device tap54433c73-7e entered promiscuous mode
Dec 02 10:04:47 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669887.1065] manager: (tap54433c73-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Dec 02 10:04:47 np0005541914.localdomain kernel: device tap54433c73-7e left promiscuous mode
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.112 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:47 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:47Z|00099|binding|INFO|Claiming lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f for this chassis.
Dec 02 10:04:47 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:47Z|00100|binding|INFO|54433c73-7e5c-481c-b64c-19e9cfd6e56f: Claiming fa:16:3e:bb:b6:1c 10.100.0.13
Dec 02 10:04:47 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:47Z|00101|binding|INFO|Claiming lport ffcaba02-6808-4409-8458-941ca0af2e66 for this chassis.
Dec 02 10:04:47 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:47Z|00102|binding|INFO|ffcaba02-6808-4409-8458-941ca0af2e66: Claiming fa:16:3e:a7:75:fd 19.80.0.43
Dec 02 10:04:47 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:47Z|00103|binding|INFO|Releasing lport 54433c73-7e5c-481c-b64c-19e9cfd6e56f from this chassis (sb_readonly=0)
Dec 02 10:04:47 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:04:47Z|00104|binding|INFO|Releasing lport ffcaba02-6808-4409-8458-941ca0af2e66 from this chassis (sb_readonly=0)
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.131 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.134 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:75:fd 19.80.0.43'], port_security=['fa:16:3e:a7:75:fd 19.80.0.43'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['54433c73-7e5c-481c-b64c-19e9cfd6e56f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1664568330', 'neutron:cidrs': '19.80.0.43/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1664568330', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=ffcaba02-6808-4409-8458-941ca0af2e66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.136 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:b6:1c 10.100.0.13'], port_security=['fa:16:3e:bb:b6:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain,np0005541913.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-146896978', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e23ec3-1d57-4166-9ba0-839ded943a78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-146896978', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e42abf-8647-4013-9c62-778191c64ad0, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=54433c73-7e5c-481c-b64c-19e9cfd6e56f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.138 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.139 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.139 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.148 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:75:fd 19.80.0.43'], port_security=['fa:16:3e:a7:75:fd 19.80.0.43'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['54433c73-7e5c-481c-b64c-19e9cfd6e56f'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1664568330', 'neutron:cidrs': '19.80.0.43/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1664568330', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=ffcaba02-6808-4409-8458-941ca0af2e66) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.150 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:b6:1c 10.100.0.13'], port_security=['fa:16:3e:bb:b6:1c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain,np0005541913.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': 'cd2e60f3-a677-4ac1-88e4-9a23beb0fcdd'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-146896978', 'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '82e23ec3-1d57-4166-9ba0-839ded943a78', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-146896978', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '8', 'neutron:security_group_ids': '576d6513-029b-4880-bb0b-58094b586b90', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e42abf-8647-4013-9c62-778191c64ad0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=54433c73-7e5c-481c-b64c-19e9cfd6e56f) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:04:47 np0005541914.localdomain podman[311035]: 2025-12-02 10:04:47.144652073 +0000 UTC m=+0.141257454 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.6, release=1755695350, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9)
Dec 02 10:04:47 np0005541914.localdomain podman[311035]: 2025-12-02 10:04:47.234184037 +0000 UTC m=+0.230789438 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 10:04:47 np0005541914.localdomain neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [NOTICE]   (310613) : haproxy version is 2.8.14-c23fe91
Dec 02 10:04:47 np0005541914.localdomain neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [NOTICE]   (310613) : path to executable is /usr/sbin/haproxy
Dec 02 10:04:47 np0005541914.localdomain neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [WARNING]  (310613) : Exiting Master process...
Dec 02 10:04:47 np0005541914.localdomain neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [ALERT]    (310613) : Current worker (310615) exited with code 143 (Terminated)
Dec 02 10:04:47 np0005541914.localdomain neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e[310609]: [WARNING]  (310613) : All workers exited. Exiting... (0)
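[editor note] Exit code 143 in the haproxy ALERT above is the conventional 128 + signal-number encoding for SIGTERM, i.e. the worker was terminated cleanly while the metadata-proxy container was being stopped. A one-liner confirming the arithmetic:

    import signal

    # 128 + SIGTERM(15) == 143, the worker exit code reported by haproxy above.
    print(128 + signal.SIGTERM)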
Dec 02 10:04:47 np0005541914.localdomain systemd[1]: libpod-2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68.scope: Deactivated successfully.
Dec 02 10:04:47 np0005541914.localdomain podman[311103]: 2025-12-02 10:04:47.245299999 +0000 UTC m=+0.066058803 container died 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:04:47 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.260 281049 DEBUG nova.virt.libvirt.guest [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '82e23ec3-1d57-4166-9ba0-839ded943a78' (instance-00000008) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.261 281049 INFO nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migration operation has completed
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.262 281049 INFO nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] _post_live_migration() is started..
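[editor note] The get_job_info message at 10:04:47.260 and the "Migration operation has completed" line show how the source host detects the end of the live migration: the libvirt domain simply disappears from the local hypervisor. A hedged, stand-alone illustration of that check using the libvirt Python bindings; error handling is much simpler here than in nova.virt.libvirt.guest:

    import libvirt

    uuid = "82e23ec3-1d57-4166-9ba0-839ded943a78"

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByUUIDString(uuid)
        print(dom.jobInfo())   # migration job progress while the domain exists
    except libvirt.libvirtError:
        # "Domain not found" here is what nova logs as "shutdown/gone away".
        print("domain no longer defined on this host")
    finally:
        conn.close()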
Dec 02 10:04:47 np0005541914.localdomain podman[311103]: 2025-12-02 10:04:47.351257907 +0000 UTC m=+0.172016661 container cleanup 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 02 10:04:47 np0005541914.localdomain podman[311116]: 2025-12-02 10:04:47.365512686 +0000 UTC m=+0.119092404 container cleanup 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:04:47 np0005541914.localdomain systemd[1]: libpod-conmon-2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68.scope: Deactivated successfully.
Dec 02 10:04:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:04:47 np0005541914.localdomain podman[311131]: 2025-12-02 10:04:47.445204256 +0000 UTC m=+0.075317267 container remove 2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.449 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[cafdccee-a8f9-4f0b-919b-e6f1fc2ccb58]: (4, ('Tue Dec  2 10:04:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e (2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68)\n2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68\nTue Dec  2 10:04:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e (2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68)\n2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.451 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[886530bc-1d2d-4ab5-86f9-1c252820e804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.452 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc40d86e4-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
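[editor note] The DelPortCommand transaction above removes the metadata tap interface from the integration bridge via the local ovsdb-server. A sketch of issuing the equivalent command with ovsdbapp directly; the socket path and API wiring are assumptions based on a typical compute-node layout, not taken from this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server socket; adjust to the deployment.
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Equivalent of the logged DelPortCommand(port=tapc40d86e4-70, if_exists=True).
    api.del_port("tapc40d86e4-70", if_exists=True).execute(check_error=True)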
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.455 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:47 np0005541914.localdomain kernel: device tapc40d86e4-70 left promiscuous mode
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.469 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.472 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[7aca7714-d0fd-4123-810e-17f07fca4d98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.487 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[69c1a516-07ec-442e-b4a5-cceb63ccb022]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.491 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd1ff6c-1608-444d-b806-840c0bf270f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.510 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[ed936f86-4c46-4e3d-a0c9-f7e38f2a322c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1203256, 'reachable_time': 17453, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 
'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311152, 'error': None, 'target': 'ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.513 159602 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c40d86e4-7101-443b-abce-328f7d1ea40e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.513 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[e322d757-f502-48ac-a172-0cd7e4f54163]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.514 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 54433c73-7e5c-481c-b64c-19e9cfd6e56f in datapath 13bbad22-ab61-4b1f-849e-c651aa8f3297 unbound from our chassis
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.517 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13bbad22-ab61-4b1f-849e-c651aa8f3297, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.518 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[463cadb3-1d07-415a-933c-33f1fa9bbe92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.519 159483 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 namespace which is not needed anymore
Dec 02 10:04:47 np0005541914.localdomain neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [NOTICE]   (310686) : haproxy version is 2.8.14-c23fe91
Dec 02 10:04:47 np0005541914.localdomain neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [NOTICE]   (310686) : path to executable is /usr/sbin/haproxy
Dec 02 10:04:47 np0005541914.localdomain neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [WARNING]  (310686) : Exiting Master process...
Dec 02 10:04:47 np0005541914.localdomain neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [ALERT]    (310686) : Current worker (310688) exited with code 143 (Terminated)
Dec 02 10:04:47 np0005541914.localdomain neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297[310682]: [WARNING]  (310686) : All workers exited. Exiting... (0)
Dec 02 10:04:47 np0005541914.localdomain systemd[1]: libpod-ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe.scope: Deactivated successfully.
Dec 02 10:04:47 np0005541914.localdomain podman[311168]: 2025-12-02 10:04:47.713229189 +0000 UTC m=+0.078419963 container died ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:04:47 np0005541914.localdomain podman[311168]: 2025-12-02 10:04:47.76300811 +0000 UTC m=+0.128198844 container cleanup ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:04:47 np0005541914.localdomain podman[311180]: 2025-12-02 10:04:47.795691804 +0000 UTC m=+0.076112601 container cleanup ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:04:47 np0005541914.localdomain systemd[1]: libpod-conmon-ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe.scope: Deactivated successfully.
Dec 02 10:04:47 np0005541914.localdomain podman[311198]: 2025-12-02 10:04:47.87326911 +0000 UTC m=+0.085556061 container remove ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.880 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[f8afc288-f700-41d1-8fa1-c89db23447d1]: (4, ('Tue Dec  2 10:04:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 (ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe)\nace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe\nTue Dec  2 10:04:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 (ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe)\nace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.882 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[98c0f259-ab3e-4a6e-ae06-cfee06393ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.883 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13bbad22-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.911 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:47 np0005541914.localdomain kernel: device tap13bbad22-a0 left promiscuous mode
Dec 02 10:04:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:47.923 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.926 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[fa71a966-9fbe-4266-bd57-680b0d8e629b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.938 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[82a113e4-18fa-46ed-a9f5-c6428fdfb65c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.940 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd245de-c87c-41d2-86ec-8910f114a11f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.959 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e0c83e-7314-4537-a8de-13b8adc496b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1203345, 'reachable_time': 25885, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 
'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311221, 'error': None, 'target': 'ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.962 159602 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-13bbad22-ab61-4b1f-849e-c651aa8f3297 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.962 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[de576496-61f5-49a9-a292-566e1881259d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.963 159483 INFO neutron.agent.ovn.metadata.agent [-] Port ffcaba02-6808-4409-8458-941ca0af2e66 in datapath c40d86e4-7101-443b-abce-328f7d1ea40e unbound from our chassis
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.967 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port d8d3f12d-b617-495e-ba6c-02c2da59133c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.967 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c40d86e4-7101-443b-abce-328f7d1ea40e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.968 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ad162d-b4f0-4d6a-9fad-0ec8fd5f19a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.969 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 54433c73-7e5c-481c-b64c-19e9cfd6e56f in datapath 13bbad22-ab61-4b1f-849e-c651aa8f3297 unbound from our chassis
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.973 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13bbad22-ab61-4b1f-849e-c651aa8f3297, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.973 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[86eef710-bf89-4046-98c9-ced84a70c56c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.974 159483 INFO neutron.agent.ovn.metadata.agent [-] Port ffcaba02-6808-4409-8458-941ca0af2e66 in datapath c40d86e4-7101-443b-abce-328f7d1ea40e unbound from our chassis
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.977 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port d8d3f12d-b617-495e-ba6c-02c2da59133c IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.978 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c40d86e4-7101-443b-abce-328f7d1ea40e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.978 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[0015f13c-d332-4592-b6ce-f1695a9e4cba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.979 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 54433c73-7e5c-481c-b64c-19e9cfd6e56f in datapath 13bbad22-ab61-4b1f-849e-c651aa8f3297 unbound from our chassis
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.982 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 13bbad22-ab61-4b1f-849e-c651aa8f3297, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:04:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:04:47.983 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[f86195f7-f153-4ff6-81d4-68a969f48314]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:04:48 np0005541914.localdomain systemd[1]: tmp-crun.LFKO8B.mount: Deactivated successfully.
Dec 02 10:04:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-a2da036c8c12fc8adba381824ea5735259dd4f5bbe0a18fc50f819cf9bed9c07-merged.mount: Deactivated successfully.
Dec 02 10:04:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ace0705f7911dc8a9f0c9c950296f1f3829bcf23699368c75ed6ffd69d3d23fe-userdata-shm.mount: Deactivated successfully.
Dec 02 10:04:48 np0005541914.localdomain systemd[1]: run-netns-ovnmeta\x2d13bbad22\x2dab61\x2d4b1f\x2d849e\x2dc651aa8f3297.mount: Deactivated successfully.
Dec 02 10:04:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ba06f5d13e82d475898b43f8dfdffe45494dd6b8060a149bf598427f2a15c274-merged.mount: Deactivated successfully.
Dec 02 10:04:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cc5d349e43bb674d5121150df24056df04782ad376f6f5a22a4da7efb6a7e68-userdata-shm.mount: Deactivated successfully.
Dec 02 10:04:48 np0005541914.localdomain systemd[1]: run-netns-ovnmeta\x2dc40d86e4\x2d7101\x2d443b\x2dabce\x2d328f7d1ea40e.mount: Deactivated successfully.
Dec 02 10:04:48 np0005541914.localdomain ceph-mon[301710]: pgmap v160: 177 pgs: 177 active+clean; 192 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 02 10:04:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:04:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:04:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:04:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:04:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:04:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:04:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:48.722 281049 DEBUG nova.network.neutron [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updated VIF entry in instance network info cache for port 54433c73-7e5c-481c-b64c-19e9cfd6e56f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 02 10:04:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:48.724 281049 DEBUG nova.network.neutron [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Updating instance_info_cache with network_info: [{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005541913.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
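[editor note] The instance_info_cache payload above is the list-of-VIFs structure nova keeps per instance. Below is a trimmed excerpt of that same entry in plain Python, showing how the device name, bridge and fixed IP in the log line relate to each other; fields not needed for the example are dropped:

    # Trimmed from the cached VIF entry in the log line above.
    vif = {
        "id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f",
        "address": "fa:16:3e:bb:b6:1c",
        "devname": "tap54433c73-7e",
        "network": {
            "id": "13bbad22-ab61-4b1f-849e-c651aa8f3297",
            "bridge": "br-int",
            "subnets": [{"cidr": "10.100.0.0/28",
                         "ips": [{"address": "10.100.0.13"}]}],
        },
        "profile": {"migrating_to": "np0005541913.localdomain"},
    }

    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]]
    print(vif["devname"], vif["network"]["bridge"], fixed_ips)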
Dec 02 10:04:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:48.744 281049 DEBUG oslo_concurrency.lockutils [req-b912a26d-203e-47cf-b3b1-a11e64eda8be req-e71a2a54-72d8-433d-a783-05d6808955cd dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock "refresh_cache-82e23ec3-1d57-4166-9ba0-839ded943a78" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:04:48 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:48.812 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:04:47Z, description=, device_id=3c297297-876e-43ee-83e5-1e1ff7b8f51c, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4035542340>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4035542c70>], id=eb25b32f-4168-45b7-be29-c5d1e26399ec, ip_allocation=immediate, mac_address=fa:16:3e:e1:cd:7c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=723, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:04:48Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:04:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:48.827 281049 DEBUG nova.compute.manager [req-c99bae35-794e-46b5-92c1-e75ef9279250 req-1c2da56a-54ca-4f2b-a3c4-a1d5c5ec8942 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-unplugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:04:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:48.827 281049 DEBUG oslo_concurrency.lockutils [req-c99bae35-794e-46b5-92c1-e75ef9279250 req-1c2da56a-54ca-4f2b-a3c4-a1d5c5ec8942 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:48.828 281049 DEBUG oslo_concurrency.lockutils [req-c99bae35-794e-46b5-92c1-e75ef9279250 req-1c2da56a-54ca-4f2b-a3c4-a1d5c5ec8942 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:48.828 281049 DEBUG oslo_concurrency.lockutils [req-c99bae35-794e-46b5-92c1-e75ef9279250 req-1c2da56a-54ca-4f2b-a3c4-a1d5c5ec8942 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:48.829 281049 DEBUG nova.compute.manager [req-c99bae35-794e-46b5-92c1-e75ef9279250 req-1c2da56a-54ca-4f2b-a3c4-a1d5c5ec8942 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] No waiting events found dispatching network-vif-unplugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:04:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:48.829 281049 DEBUG nova.compute.manager [req-c99bae35-794e-46b5-92c1-e75ef9279250 req-1c2da56a-54ca-4f2b-a3c4-a1d5c5ec8942 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-unplugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
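
The Acquiring/acquired/released triplets above are oslo.concurrency's named locks guarding the per-instance event queue. A minimal sketch of that locking pattern, assuming only the public lockutils API (the lock name matches the log; the guarded function body is illustrative, not nova's code):

    # Minimal sketch of the named-lock pattern seen in the log above.
    # Assumes only the public oslo.concurrency API; the guarded function
    # body is illustrative, not copied from nova source.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = "82e23ec3-1d57-4166-9ba0-839ded943a78"

    @lockutils.synchronized(f"{INSTANCE_UUID}-events")
    def pop_instance_event(event_name):
        # Hypothetical body: look up and remove a waiter for this event.
        print(f"dispatching {event_name} under the per-instance events lock")

    # The same lock can also be taken as a context manager.
    with lockutils.lock(f"{INSTANCE_UUID}-events"):
        pass
    pop_instance_event("network-vif-unplugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f")
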
Dec 02 10:04:48 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v161: 177 pgs: 177 active+clean; 218 MiB data, 874 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Dec 02 10:04:49 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 8 addresses
Dec 02 10:04:49 np0005541914.localdomain podman[311238]: 2025-12-02 10:04:49.035386148 +0000 UTC m=+0.065227737 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:04:49 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:49 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:04:49 np0005541914.localdomain systemd[1]: tmp-crun.PkU9QX.mount: Deactivated successfully.
Dec 02 10:04:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:04:49.275 262347 INFO neutron.agent.dhcp.agent [None req-1e32bc6b-15d3-4dfe-b4fd-25ae583ac136 - - - - - -] DHCP configuration for ports {'eb25b32f-4168-45b7-be29-c5d1e26399ec'} is completed
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.256 281049 DEBUG nova.network.neutron [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Activated binding for port 54433c73-7e5c-481c-b64c-19e9cfd6e56f and host np0005541913.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.257 281049 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.258 281049 DEBUG nova.virt.libvirt.vif [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T10:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-39688497',display_name='tempest-LiveMigrationTest-server-39688497',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-livemigrationtest-server-39688497',id=8,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-02T10:04:33Z,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d048f19ff5fc47dc88162ef5f9cebe8b',ramdisk_id='',reservation_id='r-lnn0by93',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1345186206',owner_user_name='tempest-LiveMigrationTest-1345186206-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T10:04:36Z,user_data=None,user_id='ec20a6cceee246d6b46878df263d30a4',uuid=82e23ec3-1d57-4166-9ba0-839ded943a78,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.258 281049 DEBUG nova.network.os_vif_util [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Converting VIF {"id": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "address": "fa:16:3e:bb:b6:1c", "network": {"id": "13bbad22-ab61-4b1f-849e-c651aa8f3297", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1859087569-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "d048f19ff5fc47dc88162ef5f9cebe8b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54433c73-7e", "ovs_interfaceid": "54433c73-7e5c-481c-b64c-19e9cfd6e56f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.259 281049 DEBUG nova.network.os_vif_util [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.260 281049 DEBUG os_vif [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.261 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.262 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54433c73-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.299 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.303 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.306 281049 INFO os_vif [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:b6:1c,bridge_name='br-int',has_traffic_filtering=True,id=54433c73-7e5c-481c-b64c-19e9cfd6e56f,network=Network(13bbad22-ab61-4b1f-849e-c651aa8f3297),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap54433c73-7e')
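
The DelPortCommand at 10:04:50.262 is the ovsdbapp transaction os-vif runs to detach the tap device from br-int before the migration is finalised. A rough sketch of issuing the same command directly through ovsdbapp; the connection string and timeout are assumptions, while the port and bridge names come from the log lines above:

    # Rough sketch of the DelPortCommand seen in the log, issued directly
    # through ovsdbapp. The ovsdb socket path and timeout are assumptions;
    # the port and bridge names are the ones from the log.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/run/openvswitch/db.sock"  # assumed local ovsdb socket

    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent to DelPortCommand(port=tap54433c73-7e, bridge=br-int, if_exists=True)
    api.del_port("tap54433c73-7e", bridge="br-int", if_exists=True).execute(check_error=True)
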
Dec 02 10:04:50 np0005541914.localdomain ceph-mon[301710]: pgmap v161: 177 pgs: 177 active+clean; 218 MiB data, 874 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.307 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.307 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.308 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.308 281049 DEBUG nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.309 281049 INFO nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Deleting instance files /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78_del
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.309 281049 INFO nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Deletion of /var/lib/nova/instances/82e23ec3-1d57-4166-9ba0-839ded943a78_del complete
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.728 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.979 281049 DEBUG nova.compute.manager [req-8e388069-c41a-4a06-8d77-cc6d194efd73 req-c896b4b7-2f45-4358-94cc-7af150d45236 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.980 281049 DEBUG oslo_concurrency.lockutils [req-8e388069-c41a-4a06-8d77-cc6d194efd73 req-c896b4b7-2f45-4358-94cc-7af150d45236 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v162: 177 pgs: 177 active+clean; 218 MiB data, 874 MiB used, 41 GiB / 42 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.980 281049 DEBUG oslo_concurrency.lockutils [req-8e388069-c41a-4a06-8d77-cc6d194efd73 req-c896b4b7-2f45-4358-94cc-7af150d45236 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.981 281049 DEBUG oslo_concurrency.lockutils [req-8e388069-c41a-4a06-8d77-cc6d194efd73 req-c896b4b7-2f45-4358-94cc-7af150d45236 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.983 281049 DEBUG nova.compute.manager [req-8e388069-c41a-4a06-8d77-cc6d194efd73 req-c896b4b7-2f45-4358-94cc-7af150d45236 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] No waiting events found dispatching network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:04:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:50.983 281049 WARNING nova.compute.manager [req-8e388069-c41a-4a06-8d77-cc6d194efd73 req-c896b4b7-2f45-4358-94cc-7af150d45236 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received unexpected event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f for instance with vm_state active and task_state migrating.
Dec 02 10:04:51 np0005541914.localdomain systemd[1]: tmp-crun.BCxZd7.mount: Deactivated successfully.
Dec 02 10:04:51 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 7 addresses
Dec 02 10:04:51 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:51 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:04:51 np0005541914.localdomain podman[311273]: 2025-12-02 10:04:51.434940031 +0000 UTC m=+0.076125793 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:04:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:51.543 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:51 np0005541914.localdomain ceph-mon[301710]: pgmap v162: 177 pgs: 177 active+clean; 218 MiB data, 874 MiB used, 41 GiB / 42 GiB avail; 298 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Dec 02 10:04:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:51.942 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:04:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v163: 177 pgs: 177 active+clean; 225 MiB data, 877 MiB used, 41 GiB / 42 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 02 10:04:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:53.031 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:53 np0005541914.localdomain podman[311313]: 2025-12-02 10:04:53.183945357 +0000 UTC m=+0.064177524 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 02 10:04:53 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 6 addresses
Dec 02 10:04:53 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:53 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
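
Each "container kill" on the neutron-dnsmasq container followed by dnsmasq re-reading addn_hosts/host/opts is the DHCP agent telling dnsmasq to reload its host files as ports come and go during the migration. Assuming the agent is simply delivering SIGHUP to the containerised dnsmasq (consistent with these log lines, though not verified against the agent source), the equivalent manual action would be:

    # Sketch of manually triggering the same reload the DHCP agent performs:
    # deliver SIGHUP to the containerised dnsmasq so it re-reads its host and
    # option files. Container name taken from the log; the SIGHUP mechanism
    # is an assumption about how the agent drives the reload.
    import subprocess

    container = "neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04"
    subprocess.run(["podman", "kill", "--signal", "HUP", container], check=True)
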
Dec 02 10:04:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:04:53 np0005541914.localdomain podman[311328]: 2025-12-02 10:04:53.31054385 +0000 UTC m=+0.095593690 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:04:53 np0005541914.localdomain podman[311328]: 2025-12-02 10:04:53.32904745 +0000 UTC m=+0.114097300 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 10:04:53 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
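
The transient unit started at 10:04:53 is systemd periodically invoking podman's healthcheck for the multipathd container; the health_status=healthy event and the exec_died record are the outcome of that run. The same check can be invoked by hand (container name from the log; podman's convention is that exit code 0 means the configured healthcheck command succeeded):

    # Sketch: run the same container healthcheck by hand and report the result.
    # The container name comes from the log above.
    import subprocess

    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})")
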
Dec 02 10:04:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:53.368 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:53.519 281049 DEBUG nova.compute.manager [req-7d5afed1-9e05-4cbd-baf7-22087d56638a req-04e77fdc-ea5e-4235-9716-48b5f9f068d9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:04:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:53.520 281049 DEBUG oslo_concurrency.lockutils [req-7d5afed1-9e05-4cbd-baf7-22087d56638a req-04e77fdc-ea5e-4235-9716-48b5f9f068d9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:53.520 281049 DEBUG oslo_concurrency.lockutils [req-7d5afed1-9e05-4cbd-baf7-22087d56638a req-04e77fdc-ea5e-4235-9716-48b5f9f068d9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:53.520 281049 DEBUG oslo_concurrency.lockutils [req-7d5afed1-9e05-4cbd-baf7-22087d56638a req-04e77fdc-ea5e-4235-9716-48b5f9f068d9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:53.520 281049 DEBUG nova.compute.manager [req-7d5afed1-9e05-4cbd-baf7-22087d56638a req-04e77fdc-ea5e-4235-9716-48b5f9f068d9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] No waiting events found dispatching network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:04:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:53.521 281049 WARNING nova.compute.manager [req-7d5afed1-9e05-4cbd-baf7-22087d56638a req-04e77fdc-ea5e-4235-9716-48b5f9f068d9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Received unexpected event network-vif-plugged-54433c73-7e5c-481c-b64c-19e9cfd6e56f for instance with vm_state active and task_state migrating.
Dec 02 10:04:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:53.579 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:54 np0005541914.localdomain ceph-mon[301710]: pgmap v163: 177 pgs: 177 active+clean; 225 MiB data, 877 MiB used, 41 GiB / 42 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.228 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Acquiring lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.229 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.229 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "82e23ec3-1d57-4166-9ba0-839ded943a78-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.260 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.260 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.261 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.261 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.262 281049 DEBUG oslo_concurrency.processutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:04:54 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:54.636 2 INFO neutron.agent.securitygroups_rpc [None req-5252ab83-90b7-4c17-ab41-150a0f430946 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Security group rule updated ['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b']
Dec 02 10:04:54 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:04:54 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3831071445' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.698 281049 DEBUG oslo_concurrency.processutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
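
The resource tracker shells out to `ceph df --format=json` to size the RBD-backed disk pool; the free_disk figure reported a few lines below comes from that JSON. A small sketch of the same call and a plausible way to read the cluster totals (the exact JSON keys under "stats" are assumed from typical `ceph df --format=json` output, not taken from this system):

    # Sketch of the `ceph df` call the log shows nova running, plus a plausible
    # parse of its JSON output. The key names under "stats" are assumptions
    # based on typical `ceph df --format=json` output.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    )
    stats = json.loads(out).get("stats", {})
    total = stats.get("total_bytes", 0)
    avail = stats.get("total_avail_bytes", 0)
    print(f"cluster: {avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
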
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.908 281049 WARNING nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.910 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11558MB free_disk=41.70097732543945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.910 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.910 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:54 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:04:54.924 2 INFO neutron.agent.securitygroups_rpc [None req-a8a8282d-6793-4a84-80fc-24e3966f9a17 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Security group rule updated ['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b']
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.951 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Migration for instance 82e23ec3-1d57-4166-9ba0-839ded943a78 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Dec 02 10:04:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:54.974 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Dec 02 10:04:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v164: 177 pgs: 177 active+clean; 225 MiB data, 877 MiB used, 41 GiB / 42 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 02 10:04:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:55.025 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Migration f83e1b81-4647-4642-b7c4-b4f369bef051 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Dec 02 10:04:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:55.025 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:04:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:55.026 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:04:55 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3831071445' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:55 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/4293966053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:55.070 281049 DEBUG oslo_concurrency.processutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:04:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:55.300 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:55 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:04:55 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4098882359' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:55.538 281049 DEBUG oslo_concurrency.processutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:04:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:55.546 281049 DEBUG nova.compute.provider_tree [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:04:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:55.570 281049 DEBUG nova.scheduler.client.report [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
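
"Inventory has not changed for provider ..." is the scheduler report client comparing the freshly computed inventory against what placement already holds and skipping the update when they match. A toy illustration with the same shape of data; the dict literal is copied from the log line above, while the comparison logic is only an illustration, not nova's implementation:

    # Toy illustration of the "inventory has not changed" check: compare the
    # locally computed inventory against placement's view and only update on
    # a difference. Data shape copied from the log line; logic is illustrative.
    current = {
        "MEMORY_MB": {"total": 15738, "reserved": 512, "min_unit": 1,
                      "max_unit": 15738, "step_size": 1, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "min_unit": 1,
                 "max_unit": 8, "step_size": 1, "allocation_ratio": 16.0},
        "DISK_GB": {"total": 41, "reserved": 1, "min_unit": 1,
                    "max_unit": 41, "step_size": 1, "allocation_ratio": 1.0},
    }

    def maybe_update(provider_uuid, new_inventory, placement_view):
        if new_inventory == placement_view:
            print(f"Inventory has not changed for provider {provider_uuid}")
            return False
        print(f"Updating inventory for provider {provider_uuid}")
        return True

    maybe_update("9ec09c1a-d246-41d7-94f4-b482f646a9f1", current, dict(current))
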
Dec 02 10:04:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:55.598 281049 DEBUG nova.compute.resource_tracker [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:04:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:55.599 281049 DEBUG oslo_concurrency.lockutils [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:55.607 281049 INFO nova.compute.manager [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Migrating instance to np0005541913.localdomain finished successfully.
Dec 02 10:04:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:55.694 281049 INFO nova.scheduler.client.report [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] Deleted allocation for migration f83e1b81-4647-4642-b7c4-b4f369bef051
Dec 02 10:04:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:55.695 281049 DEBUG nova.virt.libvirt.driver [None req-cf58f353-04b9-463a-832f-2ee6517a222b 128dc0e572734d9083e5bf6378255d58 dc1edab5ae5d43f08b967b5bf594f8b5 - - default default] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Dec 02 10:04:56 np0005541914.localdomain ceph-mon[301710]: pgmap v164: 177 pgs: 177 active+clean; 225 MiB data, 877 MiB used, 41 GiB / 42 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 02 10:04:56 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/4098882359' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:56 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1384798435' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:56.589 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v165: 177 pgs: 177 active+clean; 225 MiB data, 877 MiB used, 41 GiB / 42 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 02 10:04:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:04:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:57.529 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:04:58 np0005541914.localdomain ceph-mon[301710]: pgmap v165: 177 pgs: 177 active+clean; 225 MiB data, 877 MiB used, 41 GiB / 42 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 02 10:04:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:58.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:04:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v166: 177 pgs: 177 active+clean; 145 MiB data, 738 MiB used, 41 GiB / 42 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.158 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.159 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.192 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.270 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.270 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.273 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.274 281049 INFO nova.compute.claims [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Claim successful on node np0005541914.localdomain
Dec 02 10:04:59 np0005541914.localdomain podman[311414]: 2025-12-02 10:04:59.328985612 +0000 UTC m=+0.067355612 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 10:04:59 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 5 addresses
Dec 02 10:04:59 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:04:59 np0005541914.localdomain systemd[1]: tmp-crun.yCKiyw.mount: Deactivated successfully.
Dec 02 10:04:59 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.408 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.483 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:04:59 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:04:59 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1410163043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.858 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.864 281049 DEBUG nova.compute.provider_tree [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.894 281049 DEBUG nova.scheduler.client.report [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.917 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.919 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 02 10:04:59 np0005541914.localdomain ceph-mon[301710]: pgmap v166: 177 pgs: 177 active+clean; 145 MiB data, 738 MiB used, 41 GiB / 42 GiB avail; 349 KiB/s rd, 2.1 MiB/s wr, 96 op/s
Dec 02 10:04:59 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1410163043' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.967 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.968 281049 DEBUG nova.network.neutron [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 02 10:04:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:04:59.982 281049 INFO nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.012 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.024 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.071 281049 DEBUG nova.policy [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '955214da09cd44dba70e1a06eabc9023', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '50df25ee29424615807a458690cdf8d7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.119 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.122 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.122 281049 INFO nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Creating image(s)
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.155 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.182 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.208 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.211 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.281 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.282 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.284 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.284 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.319 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.323 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc abf8d33c-4e24-4d26-af41-b01c828c67e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.342 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:00 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:05:00.408 2 INFO neutron.agent.securitygroups_rpc [req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 req-4740c003-3af7-4933-8b00-851aa84e7e55 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Security group member updated ['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b']
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.633 281049 DEBUG nova.network.neutron [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Successfully created port: a0a73e76-685f-4ba0-87b5-5dd27b54fab4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.889 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc abf8d33c-4e24-4d26-af41-b01c828c67e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:05:00 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/557978648' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:05:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v167: 177 pgs: 177 active+clean; 145 MiB data, 738 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 24 KiB/s wr, 36 op/s
Dec 02 10:05:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:00.991 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] resizing rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.152 281049 DEBUG nova.objects.instance [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lazy-loading 'migration_context' on Instance uuid abf8d33c-4e24-4d26-af41-b01c828c67e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.171 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.171 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Ensure instance console log exists: /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.172 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.172 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.173 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.529 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.530 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:05:01 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:05:01.532 2 INFO neutron.agent.securitygroups_rpc [None req-9eec1e00-2947-423e-88e8-b2e4c78afea0 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Security group member updated ['576d6513-029b-4880-bb0b-58094b586b90']
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.557 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.557 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.636 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.691 281049 DEBUG nova.network.neutron [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Successfully updated port: a0a73e76-685f-4ba0-87b5-5dd27b54fab4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.709 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.709 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquired lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.709 281049 DEBUG nova.network.neutron [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.774 281049 DEBUG nova.compute.manager [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-changed-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.775 281049 DEBUG nova.compute.manager [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Refreshing instance network info cache due to event network-changed-a0a73e76-685f-4ba0-87b5-5dd27b54fab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 02 10:05:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:01.775 281049 DEBUG oslo_concurrency.lockutils [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:05:01 np0005541914.localdomain dnsmasq[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/addn_hosts - 0 addresses
Dec 02 10:05:01 np0005541914.localdomain dnsmasq-dhcp[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/host
Dec 02 10:05:01 np0005541914.localdomain dnsmasq-dhcp[309707]: read /var/lib/neutron/dhcp/c40d86e4-7101-443b-abce-328f7d1ea40e/opts
Dec 02 10:05:01 np0005541914.localdomain podman[311639]: 2025-12-02 10:05:01.802906822 +0000 UTC m=+0.057980284 container kill 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:05:01 np0005541914.localdomain systemd[1]: tmp-crun.waA8Jh.mount: Deactivated successfully.
Dec 02 10:05:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e109 e109: 6 total, 6 up, 6 in
Dec 02 10:05:01 np0005541914.localdomain ceph-mon[301710]: pgmap v167: 177 pgs: 177 active+clean; 145 MiB data, 738 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 24 KiB/s wr, 36 op/s
Dec 02 10:05:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:02.109 281049 DEBUG nova.network.neutron [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 02 10:05:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:02.125 281049 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764669887.1237745, 82e23ec3-1d57-4166-9ba0-839ded943a78 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:05:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:02.125 281049 INFO nova.compute.manager [-] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] VM Stopped (Lifecycle Event)
Dec 02 10:05:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:02.176 281049 DEBUG nova.compute.manager [None req-7c1fcdaa-7955-47f3-abcb-d07c22c61577 - - - - - -] [instance: 82e23ec3-1d57-4166-9ba0-839ded943a78] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:05:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:05:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:02.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:05:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:02.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:05:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:02.876 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:05:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:02.877 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:05:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:02.877 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:02.878 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:05:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:02.878 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:05:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v169: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Dec 02 10:05:03 np0005541914.localdomain ceph-mon[301710]: osdmap e109: 6 total, 6 up, 6 in
Dec 02 10:05:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/4275910269' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.111 281049 DEBUG nova.network.neutron [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Updating instance_info_cache with network_info: [{"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.128 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Releasing lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.129 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Instance network_info: |[{"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.130 281049 DEBUG oslo_concurrency.lockutils [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.131 281049 DEBUG nova.network.neutron [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Refreshing network info cache for port a0a73e76-685f-4ba0-87b5-5dd27b54fab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.137 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Start _get_guest_xml network_info=[{"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=<?>,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T10:01:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'image_id': 'd85e840d-fa56-497b-b5bd-b49584d3e97a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.142 281049 WARNING nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.151 281049 DEBUG nova.virt.libvirt.host [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.152 281049 DEBUG nova.virt.libvirt.host [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.154 281049 DEBUG nova.virt.libvirt.host [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.154 281049 DEBUG nova.virt.libvirt.host [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.155 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.156 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T10:01:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='82beb986-6d20-42dc-b738-1cef87dee30f',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=<?>,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T10:01:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.156 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.157 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.157 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.158 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.158 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.158 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.159 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.159 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.160 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.160 281049 DEBUG nova.virt.hardware [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.165 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:05:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:03.176 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:05:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:03.177 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:05:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:03.177 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:03 np0005541914.localdomain systemd[1]: tmp-crun.7uO6rC.mount: Deactivated successfully.
Dec 02 10:05:03 np0005541914.localdomain dnsmasq[309707]: exiting on receipt of SIGTERM
Dec 02 10:05:03 np0005541914.localdomain podman[311698]: 2025-12-02 10:05:03.247774725 +0000 UTC m=+0.065071162 container kill 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:05:03 np0005541914.localdomain systemd[1]: libpod-8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464.scope: Deactivated successfully.
Dec 02 10:05:03 np0005541914.localdomain podman[311711]: 2025-12-02 10:05:03.315772457 +0000 UTC m=+0.053901550 container died 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:05:03 np0005541914.localdomain podman[311711]: 2025-12-02 10:05:03.347222603 +0000 UTC m=+0.085351616 container cleanup 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:05:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:05:03 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2077633090' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:05:03 np0005541914.localdomain systemd[1]: libpod-conmon-8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464.scope: Deactivated successfully.
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.366 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:05:03 np0005541914.localdomain podman[311713]: 2025-12-02 10:05:03.402514514 +0000 UTC m=+0.135313763 container remove 8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c40d86e4-7101-443b-abce-328f7d1ea40e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 02 10:05:03 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:03Z|00105|binding|INFO|Releasing lport de515592-061d-469f-83fb-52a8d86b335c from this chassis (sb_readonly=0)
Dec 02 10:05:03 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:03Z|00106|binding|INFO|Setting lport de515592-061d-469f-83fb-52a8d86b335c down in Southbound
Dec 02 10:05:03 np0005541914.localdomain kernel: device tapde515592-06 left promiscuous mode
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.420 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:03.432 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c40d86e4-7101-443b-abce-328f7d1ea40e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd048f19ff5fc47dc88162ef5f9cebe8b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1e893da-07af-44e3-945f-c862571583e8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=de515592-061d-469f-83fb-52a8d86b335c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:05:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:03.434 159483 INFO neutron.agent.ovn.metadata.agent [-] Port de515592-061d-469f-83fb-52a8d86b335c in datapath c40d86e4-7101-443b-abce-328f7d1ea40e unbound from our chassis
Dec 02 10:05:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:03.437 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c40d86e4-7101-443b-abce-328f7d1ea40e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:05:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:03.438 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5d1c8e-7413-4fba-a469-3471e6e923ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.442 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.624 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.626 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11516MB free_disk=41.774757385253906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.626 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.628 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:05:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:05:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:05:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:05:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:05:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:05:03 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/402438692' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:05:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:05:03.686 262347 INFO neutron.agent.dhcp.agent [None req-7315e879-b571-440a-8acf-f1bb6471da4b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.687 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:05:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:05:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19203 "" "Go-http-client/1.1"
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.716 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.720 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.743 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Instance abf8d33c-4e24-4d26-af41-b01c828c67e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.743 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.743 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=640MB phys_disk=41GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:05:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:03.787 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:05:04 np0005541914.localdomain ceph-mon[301710]: pgmap v169: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Dec 02 10:05:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2077633090' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:05:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/920620619' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:05:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/402438692' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:05:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:05:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3823900154' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.174 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.178 281049 DEBUG nova.virt.libvirt.vif [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:04:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=10,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP/dbfwF7RFRTDuwB6jwzuSQ/IcUc/koBGae2h16UX9iSnGmmWafAjmR0zhsoi8E87Oi2Cm1JEv8wzMjtBlM1hsGOt9Lg/6ZEqGVxh82xbfu37aVfdDp2kn2MPZvfs8d3A==',key_name='tempest-keypair-1080862001',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='50df25ee29424615807a458690cdf8d7',ramdisk_id='',reservation_id='r-5yub6qye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-2112874438',owner_user_name='tempest-ServersV294TestFqdnHostnames-2112874438-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T10:05:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='955214da09cd44dba70e1a06eabc9023',uuid=abf8d33c-4e24-4d26-af41-b01c828c67e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", 
"qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.179 281049 DEBUG nova.network.os_vif_util [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Converting VIF {"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.180 281049 DEBUG nova.network.os_vif_util [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 02 10:05:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:05:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4000286707' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.182 281049 DEBUG nova.objects.instance [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lazy-loading 'pci_devices' on Instance uuid abf8d33c-4e24-4d26-af41-b01c828c67e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.199 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.205 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:05:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-ffda5ed93ce183a48e515ae2d9c5dd554b9888bdc7086e24a8673d10ea36adfd-merged.mount: Deactivated successfully.
Dec 02 10:05:04 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8a85197bd70814f58cb15afaa29c7b2cca6e3e23fc2d2480aab6c6637289b464-userdata-shm.mount: Deactivated successfully.
Dec 02 10:05:04 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2dc40d86e4\x2d7101\x2d443b\x2dabce\x2d328f7d1ea40e.mount: Deactivated successfully.
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.282 281049 DEBUG nova.network.neutron [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Updated VIF entry in instance network info cache for port a0a73e76-685f-4ba0-87b5-5dd27b54fab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.283 281049 DEBUG nova.network.neutron [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Updating instance_info_cache with network_info: [{"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:05:04 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:05:04.544 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.633 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] End _get_guest_xml xml=<domain type="kvm">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   <uuid>abf8d33c-4e24-4d26-af41-b01c828c67e0</uuid>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   <name>instance-0000000a</name>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   <memory>131072</memory>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   <vcpu>1</vcpu>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   <metadata>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <nova:name>guest-instance-1</nova:name>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <nova:creationTime>2025-12-02 10:05:03</nova:creationTime>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <nova:flavor name="m1.nano">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <nova:memory>128</nova:memory>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <nova:disk>1</nova:disk>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <nova:swap>0</nova:swap>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <nova:ephemeral>0</nova:ephemeral>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <nova:vcpus>1</nova:vcpus>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       </nova:flavor>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <nova:owner>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <nova:user uuid="955214da09cd44dba70e1a06eabc9023">tempest-ServersV294TestFqdnHostnames-2112874438-project-member</nova:user>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <nova:project uuid="50df25ee29424615807a458690cdf8d7">tempest-ServersV294TestFqdnHostnames-2112874438</nova:project>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       </nova:owner>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <nova:root type="image" uuid="d85e840d-fa56-497b-b5bd-b49584d3e97a"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <nova:ports>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <nova:port uuid="a0a73e76-685f-4ba0-87b5-5dd27b54fab4">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         </nova:port>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       </nova:ports>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     </nova:instance>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   </metadata>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   <sysinfo type="smbios">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <system>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <entry name="manufacturer">RDO</entry>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <entry name="product">OpenStack Compute</entry>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <entry name="serial">abf8d33c-4e24-4d26-af41-b01c828c67e0</entry>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <entry name="uuid">abf8d33c-4e24-4d26-af41-b01c828c67e0</entry>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <entry name="family">Virtual Machine</entry>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     </system>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   </sysinfo>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   <os>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <boot dev="hd"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <smbios mode="sysinfo"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   </os>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   <features>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <acpi/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <apic/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <vmcoreinfo/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   </features>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   <clock offset="utc">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <timer name="pit" tickpolicy="delay"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <timer name="hpet" present="no"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   </clock>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   <cpu mode="host-model" match="exact">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <topology sockets="1" cores="1" threads="1"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   </cpu>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   <devices>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <disk type="network" device="disk">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <driver type="raw" cache="none"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <source protocol="rbd" name="vms/abf8d33c-4e24-4d26-af41-b01c828c67e0_disk">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.103" port="6789"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.104" port="6789"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.105" port="6789"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       </source>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <auth username="openstack">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <secret type="ceph" uuid="c7c8e171-a193-56fb-95fa-8879fcfa7074"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       </auth>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <target dev="vda" bus="virtio"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <disk type="network" device="cdrom">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <driver type="raw" cache="none"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <source protocol="rbd" name="vms/abf8d33c-4e24-4d26-af41-b01c828c67e0_disk.config">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.103" port="6789"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.104" port="6789"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.105" port="6789"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       </source>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <auth username="openstack">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:         <secret type="ceph" uuid="c7c8e171-a193-56fb-95fa-8879fcfa7074"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       </auth>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <target dev="sda" bus="sata"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <interface type="ethernet">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <mac address="fa:16:3e:16:9d:c1"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <model type="virtio"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <driver name="vhost" rx_queue_size="512"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <mtu size="1442"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <target dev="tapa0a73e76-68"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     </interface>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <serial type="pty">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <log file="/var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/console.log" append="off"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     </serial>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <video>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <model type="virtio"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     </video>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <input type="tablet" bus="usb"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <rng model="virtio">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <backend model="random">/dev/urandom</backend>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     </rng>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <controller type="usb" index="0"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     <memballoon model="virtio">
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:       <stats period="10"/>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:     </memballoon>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:   </devices>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: </domain>
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.635 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Preparing to wait for external event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.635 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.636 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.637 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.638 281049 DEBUG nova.virt.libvirt.vif [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:04:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=10,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP/dbfwF7RFRTDuwB6jwzuSQ/IcUc/koBGae2h16UX9iSnGmmWafAjmR0zhsoi8E87Oi2Cm1JEv8wzMjtBlM1hsGOt9Lg/6ZEqGVxh82xbfu37aVfdDp2kn2MPZvfs8d3A==',key_name='tempest-keypair-1080862001',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='50df25ee29424615807a458690cdf8d7',ramdisk_id='',reservation_id='r-5yub6qye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersV294TestFqdnHostnames-2112874438',owner_user_name='tempest-ServersV294TestFqdnHostnames-2112874438-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T10:05:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='955214da09cd44dba70e1a06eabc9023',uuid=abf8d33c-4e24-4d26-af41-b01c828c67e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": 
"a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.638 281049 DEBUG nova.network.os_vif_util [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Converting VIF {"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.639 281049 DEBUG nova.network.os_vif_util [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.640 281049 DEBUG os_vif [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.642 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.642 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.643 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.646 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.653 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.653 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0a73e76-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.654 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0a73e76-68, col_values=(('external_ids', {'iface-id': 'a0a73e76-685f-4ba0-87b5-5dd27b54fab4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:9d:c1', 'vm-uuid': 'abf8d33c-4e24-4d26-af41-b01c828c67e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.656 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.662 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.664 281049 INFO os_vif [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68')
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.699 281049 DEBUG oslo_concurrency.lockutils [req-ca4cb7d3-3342-473a-b5d8-5d60dec9e872 req-5028eba1-7705-4918-84cd-eb1af601c653 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.706 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.707 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.770 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.771 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.772 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] No VIF found with MAC fa:16:3e:16:9d:c1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.773 281049 INFO nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Using config drive
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.815 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:05:04 np0005541914.localdomain systemd[1]: tmp-crun.YuNJHz.mount: Deactivated successfully.
Dec 02 10:05:04 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 02 10:05:04 np0005541914.localdomain podman[311860]: 2025-12-02 10:05:04.909850798 +0000 UTC m=+0.059593914 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:05:04 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:05:04 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:05:04 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:05:04.931 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:05:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v170: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Dec 02 10:05:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:04.997 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.002 281049 INFO nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Creating config drive at /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/disk.config
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.007 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6b54pamg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:05:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3823900154' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:05:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/4000286707' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:05:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2916680470' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:05:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2916680470' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.140 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6b54pamg" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.183 281049 DEBUG nova.storage.rbd_utils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] rbd image abf8d33c-4e24-4d26-af41-b01c828c67e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.188 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/disk.config abf8d33c-4e24-4d26-af41-b01c828c67e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.440 281049 DEBUG oslo_concurrency.processutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/disk.config abf8d33c-4e24-4d26-af41-b01c828c67e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.441 281049 INFO nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Deleting local config drive /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0/disk.config because it was imported into RBD.
Dec 02 10:05:05 np0005541914.localdomain kernel: device tapa0a73e76-68 entered promiscuous mode
Dec 02 10:05:05 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669905.4893] manager: (tapa0a73e76-68): new Tun device (/org/freedesktop/NetworkManager/Devices/25)
Dec 02 10:05:05 np0005541914.localdomain systemd-udevd[311930]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:05:05 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:05Z|00107|binding|INFO|Claiming lport a0a73e76-685f-4ba0-87b5-5dd27b54fab4 for this chassis.
Dec 02 10:05:05 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:05Z|00108|binding|INFO|a0a73e76-685f-4ba0-87b5-5dd27b54fab4: Claiming fa:16:3e:16:9d:c1 10.100.0.12
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.494 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.501 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:9d:c1 10.100.0.12'], port_security=['fa:16:3e:16:9d:c1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'abf8d33c-4e24-4d26-af41-b01c828c67e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50df25ee29424615807a458690cdf8d7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b257864-5151-448f-941d-2c9a748f5881, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=a0a73e76-685f-4ba0-87b5-5dd27b54fab4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.503 159483 INFO neutron.agent.ovn.metadata.agent [-] Port a0a73e76-685f-4ba0-87b5-5dd27b54fab4 in datapath 45d02cf1-f511-4416-b7c1-b37c417f16f9 bound to our chassis
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.506 159483 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 45d02cf1-f511-4416-b7c1-b37c417f16f9
Dec 02 10:05:05 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669905.5065] device (tapa0a73e76-68): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 02 10:05:05 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669905.5071] device (tapa0a73e76-68): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Dec 02 10:05:05 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:05Z|00109|binding|INFO|Setting lport a0a73e76-685f-4ba0-87b5-5dd27b54fab4 ovn-installed in OVS
Dec 02 10:05:05 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:05Z|00110|binding|INFO|Setting lport a0a73e76-685f-4ba0-87b5-5dd27b54fab4 up in Southbound
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.516 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.516 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed4ca5d-c74d-468b-8794-1f1751d54102]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.518 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap45d02cf1-f1 in ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.520 262550 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap45d02cf1-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.520 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4e72ebfb-60dc-4c50-b481-6b8e6b9adf6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.523 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[0dcabd1c-4570-45a0-b129-e73040a0711e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain systemd-machined[202765]: New machine qemu-5-instance-0000000a.
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.536 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[8d67b885-79fc-4fef-99ea-e23e27858457]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain systemd[1]: Started Virtual Machine qemu-5-instance-0000000a.
Dec 02 10:05:05 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:05:05.552 2 INFO neutron.agent.securitygroups_rpc [None req-7954669c-1491-4ccc-a463-0efe07ba8bc3 ec20a6cceee246d6b46878df263d30a4 d048f19ff5fc47dc88162ef5f9cebe8b - - default default] Security group member updated ['576d6513-029b-4880-bb0b-58094b586b90']
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.549 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc4ae7e-1ed4-40e6-ac20-8b406bca8500]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.580 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[e13f6324-3374-48e8-9392-bbe6fb955fcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain systemd-udevd[311933]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:05:05 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669905.5871] manager: (tap45d02cf1-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/26)
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.585 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d5af2d69-87d9-461b-86f6-9e624ceea1ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.615 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1e896b-9e02-44a6-8f44-13ab8e75677d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.619 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[7b9d23f5-dcb6-498f-b712-97a93ddcadb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669905.6419] device (tap45d02cf1-f0): carrier: link connected
Dec 02 10:05:05 np0005541914.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap45d02cf1-f1: link becomes ready
Dec 02 10:05:05 np0005541914.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap45d02cf1-f0: link becomes ready
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.646 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf20856-263a-4d1c-a68f-e9e47605d56d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.667 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e7732a74-70d7-47ae-abcd-266c69e50aac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45d02cf1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:f8:d7:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1206980, 'reachable_time': 33855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311966, 'error': None, 'target': 'ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.685 281049 DEBUG nova.compute.manager [req-ff8e1c12-af22-407f-8951-a83dee130508 req-84862912-e993-46e7-a72e-ca9ac8a5379d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.686 281049 DEBUG oslo_concurrency.lockutils [req-ff8e1c12-af22-407f-8951-a83dee130508 req-84862912-e993-46e7-a72e-ca9ac8a5379d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.685 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7da269-d5eb-44c9-85a4-b2b4936e3b00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef8:d7c5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1206980, 'tstamp': 1206980}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311982, 'error': None, 'target': 'ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.687 281049 DEBUG oslo_concurrency.lockutils [req-ff8e1c12-af22-407f-8951-a83dee130508 req-84862912-e993-46e7-a72e-ca9ac8a5379d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.687 281049 DEBUG oslo_concurrency.lockutils [req-ff8e1c12-af22-407f-8951-a83dee130508 req-84862912-e993-46e7-a72e-ca9ac8a5379d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.688 281049 DEBUG nova.compute.manager [req-ff8e1c12-af22-407f-8951-a83dee130508 req-84862912-e993-46e7-a72e-ca9ac8a5379d dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Processing event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.706 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[63aef844-019b-4bf5-99ab-c653d8dcdeb5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap45d02cf1-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:f8:d7:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1206980, 'reachable_time': 33855, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311984, 'error': None, 'target': 'ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.748 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[979deb73-93b2-42e0-b1a8-743067cc5b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.820 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[8fad0057-406f-4046-9f8f-09a92f9afeec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.821 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45d02cf1-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.822 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.823 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap45d02cf1-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.825 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:05 np0005541914.localdomain kernel: device tap45d02cf1-f0 entered promiscuous mode
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.828 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.834 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap45d02cf1-f0, col_values=(('external_ids', {'iface-id': '0999b431-c362-4180-a7a9-8664fe007369'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
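The three ovsdbapp transactions above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) plug the outer veth end into the integration bridge and set the iface-id that ovn-controller uses to claim the port; the Releasing/claim messages follow directly. The same sequence expressed as ovs-vsctl calls, purely for illustration, with the port name and UUID taken from the records above:

    import subprocess
    port, iface_id = "tap45d02cf1-f0", "0999b431-c362-4180-a7a9-8664fe007369"
    for cmd in (
        # if_exists=True / may_exist=True in the log map to these ovs-vsctl flags.
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", port],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
        ["ovs-vsctl", "set", "Interface", port, f"external_ids:iface-id={iface_id}"],
    ):
        subprocess.run(cmd, check=True)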
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.836 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:05 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:05Z|00111|binding|INFO|Releasing lport 0999b431-c362-4180-a7a9-8664fe007369 from this chassis (sb_readonly=0)
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.837 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.840 159483 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/45d02cf1-f511-4416-b7c1-b37c417f16f9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/45d02cf1-f511-4416-b7c1-b37c417f16f9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.841 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[53ff3763-2af8-4cad-8948-7895a47a9a5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.842 159483 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: global
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     log         /dev/log local0 debug
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     log-tag     haproxy-metadata-proxy-45d02cf1-f511-4416-b7c1-b37c417f16f9
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     user        root
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     group       root
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     maxconn     1024
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     pidfile     /var/lib/neutron/external/pids/45d02cf1-f511-4416-b7c1-b37c417f16f9.pid.haproxy
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     daemon
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: defaults
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     log global
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     mode http
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     option httplog
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     option dontlognull
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     option http-server-close
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     option forwardfor
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     retries                 3
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout http-request    30s
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout connect         30s
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout client          32s
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout server          32s
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout http-keep-alive 30s
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: listen listener
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     bind 169.254.169.254:80
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     server metadata /var/lib/neutron/metadata_proxy
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:     http-request add-header X-OVN-Network-ID 45d02cf1-f511-4416-b7c1-b37c417f16f9
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 02 10:05:05 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:05.843 159483 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'env', 'PROCESS_TAG=haproxy-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/45d02cf1-f511-4416-b7c1-b37c417f16f9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
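The agent renders the haproxy configuration dumped above (a listener bound to 169.254.169.254:80 inside the namespace, forwarding to the /var/lib/neutron/metadata_proxy socket and adding an X-OVN-Network-ID header) and then launches haproxy through rootwrap; the wrapper actually starts it in a podman container a few records later. A small sketch of how the result could be verified afterwards, assuming curl is installed on the host; this check is not something the agent itself performs:

    import subprocess
    ns = "ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9"
    pidfile = f"/var/lib/neutron/external/pids/{ns.removeprefix('ovnmeta-')}.pid.haproxy"
    # The pidfile was absent before the proxy started (see the get_value_from_file
    # record above); once haproxy is running it should hold the parent PID.
    print(open(pidfile).read().strip())
    # A metadata request from inside the namespace should now reach the proxy.
    subprocess.run(["ip", "netns", "exec", ns, "curl", "-s", "-o", "/dev/null",
                    "-w", "%{http_code}\n", "http://169.254.169.254/"], check=True)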
Dec 02 10:05:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:05.848 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:06 np0005541914.localdomain ceph-mon[301710]: pgmap v170: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Dec 02 10:05:06 np0005541914.localdomain podman[312036]: 
Dec 02 10:05:06 np0005541914.localdomain podman[312036]: 2025-12-02 10:05:06.242119388 +0000 UTC m=+0.077802253 container create bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:05:06 np0005541914.localdomain systemd[1]: Started libpod-conmon-bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241.scope.
Dec 02 10:05:06 np0005541914.localdomain systemd[1]: tmp-crun.F0bET5.mount: Deactivated successfully.
Dec 02 10:05:06 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.278 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764669906.27823, abf8d33c-4e24-4d26-af41-b01c828c67e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.280 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] VM Started (Lifecycle Event)
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.282 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 02 10:05:06 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5520c5903efa84481bfea75a392635964a1c0dc18350544c148b6b54d47a4621/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.288 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.291 281049 INFO nova.virt.libvirt.driver [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Instance spawned successfully.
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.292 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 02 10:05:06 np0005541914.localdomain podman[312036]: 2025-12-02 10:05:06.292823858 +0000 UTC m=+0.128506723 container init bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 02 10:05:06 np0005541914.localdomain podman[312036]: 2025-12-02 10:05:06.198121676 +0000 UTC m=+0.033804601 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.301 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:05:06 np0005541914.localdomain podman[312036]: 2025-12-02 10:05:06.30331425 +0000 UTC m=+0.138997115 container start bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.306 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.320 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:05:06 np0005541914.localdomain neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [NOTICE]   (312061) : New worker (312063) forked
Dec 02 10:05:06 np0005541914.localdomain neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [NOTICE]   (312061) : Loading success.
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.323 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.324 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.325 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.326 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.327 281049 DEBUG nova.virt.libvirt.driver [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.333 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.333 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764669906.2792675, abf8d33c-4e24-4d26-af41-b01c828c67e0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.334 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] VM Paused (Lifecycle Event)
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.362 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.365 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764669906.288466, abf8d33c-4e24-4d26-af41-b01c828c67e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.366 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] VM Resumed (Lifecycle Event)
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.387 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.391 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.401 281049 INFO nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Took 6.28 seconds to spawn the instance on the hypervisor.
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.402 281049 DEBUG nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.414 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.474 281049 INFO nova.compute.manager [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Took 7.23 seconds to build instance.
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.492 281049 DEBUG oslo_concurrency.lockutils [None req-80b5ad2b-fb4c-4362-be26-82a96d5f7828 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.662 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.708 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.709 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:05:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:06.709 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:05:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:05:06
Dec 02 10:05:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:05:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:05:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['backups', 'manila_data', 'vms', '.mgr', 'images', 'volumes', 'manila_metadata']
Dec 02 10:05:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:05:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:05:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:05:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:05:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:05:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:05:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:05:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v171: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004807566478224456 of space, bias 1.0, pg target 0.9615132956448912 quantized to 32 (current 32)
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00430047372278057 of space, bias 1.0, pg target 0.8572277620742602 quantized to 32 (current 32)
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001949853433835846 quantized to 16 (current 16)
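The pg_autoscaler lines above follow a simple proportional rule: each pool's raw pg target is its fraction of used space times its bias times a cluster-wide factor, then quantized to a power of two. From the numbers logged here that factor is almost exactly 200, which would be consistent with 6 OSDs at 100 target PGs per OSD divided by a replica count of 3; this is an inference from the log, not a value stated in it, and the manila_metadata pool deviates slightly from this reconstruction. A quick check of two of the bias-1.0 pools:

    # Reconstruction of two "pg target" values above, assuming the multiplier
    # is 6 OSDs * 100 target PGs per OSD / 3 replicas = 200 (an assumption).
    for pool, usage, bias in (
        (".mgr", 3.080724804578448e-05, 1.0),
        ("vms", 0.004807566478224456, 1.0),
    ):
        print(pool, usage * bias * 200)  # ~0.006161... and ~0.961513..., as logged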
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:05:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:05:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e110 e110: 6 total, 6 up, 6 in
Dec 02 10:05:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:05:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:07.727 281049 DEBUG nova.compute.manager [req-ecb3fb56-acce-4414-b123-f3f1a288e861 req-161feddb-d36b-4dda-84ba-0ff7d1e62664 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:05:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:07.728 281049 DEBUG oslo_concurrency.lockutils [req-ecb3fb56-acce-4414-b123-f3f1a288e861 req-161feddb-d36b-4dda-84ba-0ff7d1e62664 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:05:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:07.729 281049 DEBUG oslo_concurrency.lockutils [req-ecb3fb56-acce-4414-b123-f3f1a288e861 req-161feddb-d36b-4dda-84ba-0ff7d1e62664 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:05:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:07.729 281049 DEBUG oslo_concurrency.lockutils [req-ecb3fb56-acce-4414-b123-f3f1a288e861 req-161feddb-d36b-4dda-84ba-0ff7d1e62664 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:07.730 281049 DEBUG nova.compute.manager [req-ecb3fb56-acce-4414-b123-f3f1a288e861 req-161feddb-d36b-4dda-84ba-0ff7d1e62664 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] No waiting events found dispatching network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:05:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:07.730 281049 WARNING nova.compute.manager [req-ecb3fb56-acce-4414-b123-f3f1a288e861 req-161feddb-d36b-4dda-84ba-0ff7d1e62664 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received unexpected event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 for instance with vm_state active and task_state None.
Dec 02 10:05:08 np0005541914.localdomain ceph-mon[301710]: pgmap v171: 177 pgs: 177 active+clean; 192 MiB data, 809 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Dec 02 10:05:08 np0005541914.localdomain ceph-mon[301710]: osdmap e110: 6 total, 6 up, 6 in
Dec 02 10:05:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:05:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:05:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:05:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:05:08 np0005541914.localdomain systemd[1]: tmp-crun.rp1vlC.mount: Deactivated successfully.
Dec 02 10:05:08 np0005541914.localdomain systemd[1]: tmp-crun.uqV21J.mount: Deactivated successfully.
Dec 02 10:05:08 np0005541914.localdomain podman[312073]: 2025-12-02 10:05:08.644924091 +0000 UTC m=+0.129098111 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:05:08 np0005541914.localdomain podman[312072]: 2025-12-02 10:05:08.605297312 +0000 UTC m=+0.090412221 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:05:08 np0005541914.localdomain podman[312074]: 2025-12-02 10:05:08.654359681 +0000 UTC m=+0.132832916 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:05:08 np0005541914.localdomain podman[312074]: 2025-12-02 10:05:08.666801334 +0000 UTC m=+0.145274579 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 02 10:05:08 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:05:08 np0005541914.localdomain podman[312075]: 2025-12-02 10:05:08.705287247 +0000 UTC m=+0.180287815 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:05:08 np0005541914.localdomain podman[312072]: 2025-12-02 10:05:08.738528659 +0000 UTC m=+0.223643608 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 10:05:08 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:05:08 np0005541914.localdomain podman[312073]: 2025-12-02 10:05:08.758747932 +0000 UTC m=+0.242922002 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:05:08 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:05:08 np0005541914.localdomain podman[312075]: 2025-12-02 10:05:08.79542565 +0000 UTC m=+0.270426238 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 02 10:05:08 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:05:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v173: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 145 op/s
Dec 02 10:05:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:09.699 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:10 np0005541914.localdomain ceph-mon[301710]: pgmap v173: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 2.7 MiB/s wr, 145 op/s
Dec 02 10:05:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 e111: 6 total, 6 up, 6 in
Dec 02 10:05:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v175: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 23 KiB/s wr, 86 op/s
Dec 02 10:05:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:11.662 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:11 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:05:11.865 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:05:10Z, description=, device_id=1ad64abe-8977-48b7-83a3-2b942dce5ba9, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ccc0a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ccc3d0>], id=a6214a1b-e83a-4e97-8baa-487aca9c15e4, ip_allocation=immediate, mac_address=fa:16:3e:e0:ca:9c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=794, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:05:11Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:05:11 np0005541914.localdomain ceph-mon[301710]: osdmap e111: 6 total, 6 up, 6 in
Dec 02 10:05:11 np0005541914.localdomain ceph-mon[301710]: pgmap v175: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 23 KiB/s wr, 86 op/s
Dec 02 10:05:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:11.974 281049 DEBUG nova.compute.manager [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-changed-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:05:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:11.975 281049 DEBUG nova.compute.manager [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Refreshing instance network info cache due to event network-changed-a0a73e76-685f-4ba0-87b5-5dd27b54fab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 02 10:05:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:11.976 281049 DEBUG oslo_concurrency.lockutils [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:05:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:11.976 281049 DEBUG oslo_concurrency.lockutils [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:05:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:11.977 281049 DEBUG nova.network.neutron [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Refreshing network info cache for port a0a73e76-685f-4ba0-87b5-5dd27b54fab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 02 10:05:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:05:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:05:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:05:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:05:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:05:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:05:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:05:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:05:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:05:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:05:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:05:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:05:12 np0005541914.localdomain systemd[1]: tmp-crun.EiORcO.mount: Deactivated successfully.
Dec 02 10:05:12 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 5 addresses
Dec 02 10:05:12 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:05:12 np0005541914.localdomain podman[312169]: 2025-12-02 10:05:12.251253324 +0000 UTC m=+0.079055072 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:05:12 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:05:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:05:12 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:05:12.716 262347 INFO neutron.agent.dhcp.agent [None req-4a24a7ce-3642-4068-9348-3d5c3eb3f10e - - - - - -] DHCP configuration for ports {'a6214a1b-e83a-4e97-8baa-487aca9c15e4'} is completed
Dec 02 10:05:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v176: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 141 op/s
Dec 02 10:05:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:13.824 281049 DEBUG nova.network.neutron [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Updated VIF entry in instance network info cache for port a0a73e76-685f-4ba0-87b5-5dd27b54fab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 02 10:05:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:13.825 281049 DEBUG nova.network.neutron [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Updating instance_info_cache with network_info: [{"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:05:14 np0005541914.localdomain ceph-mon[301710]: pgmap v176: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 141 op/s
Dec 02 10:05:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:14.753 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v177: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 141 op/s
Dec 02 10:05:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:15.202 281049 DEBUG oslo_concurrency.lockutils [req-177b259c-21c9-4643-91dc-33f7d48dd5a2 req-6c85a92f-9eff-48a7-a3c9-760c8b2c7e8a dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock "refresh_cache-abf8d33c-4e24-4d26-af41-b01c828c67e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:05:15 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:15Z|00112|binding|INFO|Releasing lport 0999b431-c362-4180-a7a9-8664fe007369 from this chassis (sb_readonly=0)
Dec 02 10:05:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:16.002 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:16.022 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:16 np0005541914.localdomain systemd[1]: tmp-crun.zOFsQU.mount: Deactivated successfully.
Dec 02 10:05:16 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 02 10:05:16 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:05:16 np0005541914.localdomain podman[312206]: 2025-12-02 10:05:16.098559079 +0000 UTC m=+0.070058755 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:05:16 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:05:16 np0005541914.localdomain ceph-mon[301710]: pgmap v177: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 141 op/s
Dec 02 10:05:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:16.665 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v178: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 19 KiB/s wr, 116 op/s
Dec 02 10:05:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:05:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:05:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:05:18 np0005541914.localdomain podman[312229]: 2025-12-02 10:05:18.080498188 +0000 UTC m=+0.081842138 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, vcs-type=git, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Dec 02 10:05:18 np0005541914.localdomain podman[312229]: 2025-12-02 10:05:18.095734547 +0000 UTC m=+0.097078417 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7, version=9.6)
Dec 02 10:05:18 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:05:18 np0005541914.localdomain systemd[1]: tmp-crun.j7WSSm.mount: Deactivated successfully.
Dec 02 10:05:18 np0005541914.localdomain podman[312228]: 2025-12-02 10:05:18.152799612 +0000 UTC m=+0.153643826 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:05:18 np0005541914.localdomain podman[312228]: 2025-12-02 10:05:18.167985288 +0000 UTC m=+0.168829562 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:05:18 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:05:18 np0005541914.localdomain ceph-mon[301710]: pgmap v178: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 19 KiB/s wr, 116 op/s
Dec 02 10:05:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v179: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 45 op/s
Dec 02 10:05:19 np0005541914.localdomain sudo[312269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:05:19 np0005541914.localdomain sudo[312269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:05:19 np0005541914.localdomain sudo[312269]: pam_unix(sudo:session): session closed for user root
Dec 02 10:05:19 np0005541914.localdomain sudo[312287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:05:19 np0005541914.localdomain sudo[312287]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:05:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:19.755 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:19.796 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:20 np0005541914.localdomain sudo[312287]: pam_unix(sudo:session): session closed for user root
Dec 02 10:05:20 np0005541914.localdomain ceph-mon[301710]: pgmap v179: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 45 op/s
Dec 02 10:05:20 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:05:20 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:05:20 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:05:20 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:05:20 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:05:20 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 46303660-eb77-4f4b-b8c5-e98abc359912 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:05:20 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 46303660-eb77-4f4b-b8c5-e98abc359912 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:05:20 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 46303660-eb77-4f4b-b8c5-e98abc359912 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:05:20 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:05:20 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:05:20 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:05:20 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:05:20 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:05:20 np0005541914.localdomain podman[312353]: 2025-12-02 10:05:20.514792279 +0000 UTC m=+0.051485624 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 10:05:20 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:20Z|00113|binding|INFO|Releasing lport 0999b431-c362-4180-a7a9-8664fe007369 from this chassis (sb_readonly=0)
Dec 02 10:05:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:20.579 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:20 np0005541914.localdomain sudo[312372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:05:20 np0005541914.localdomain sudo[312372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:05:20 np0005541914.localdomain sudo[312372]: pam_unix(sudo:session): session closed for user root
Dec 02 10:05:20 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:20Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:9d:c1 10.100.0.12
Dec 02 10:05:20 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:20Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:9d:c1 10.100.0.12
Dec 02 10:05:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v180: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 1.3 MiB/s rd, 44 op/s
Dec 02 10:05:21 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:05:21 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:05:21 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:05:21 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:05:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:21.709 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:22 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:05:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:05:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:05:22 np0005541914.localdomain ceph-mon[301710]: pgmap v180: 177 pgs: 177 active+clean; 192 MiB data, 810 MiB used, 41 GiB / 42 GiB avail; 1.3 MiB/s rd, 44 op/s
Dec 02 10:05:22 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:05:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v181: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Dec 02 10:05:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:05:24 np0005541914.localdomain podman[312391]: 2025-12-02 10:05:24.07843817 +0000 UTC m=+0.079600799 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:05:24 np0005541914.localdomain podman[312391]: 2025-12-02 10:05:24.116589253 +0000 UTC m=+0.117751892 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, tcib_managed=true)
Dec 02 10:05:24 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:05:24 np0005541914.localdomain ceph-mon[301710]: pgmap v181: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 101 op/s
Dec 02 10:05:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:24.801 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v182: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 02 10:05:26 np0005541914.localdomain ceph-mon[301710]: pgmap v182: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 02 10:05:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:26.713 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v183: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 02 10:05:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:05:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:27.744 159597 DEBUG eventlet.wsgi.server [-] (159597) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Dec 02 10:05:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:27.746 159597 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0
Dec 02 10:05:27 np0005541914.localdomain ovn_metadata_agent[159477]: Accept: */*
Dec 02 10:05:27 np0005541914.localdomain ovn_metadata_agent[159477]: Connection: close
Dec 02 10:05:27 np0005541914.localdomain ovn_metadata_agent[159477]: Content-Type: text/plain
Dec 02 10:05:27 np0005541914.localdomain ovn_metadata_agent[159477]: Host: 169.254.169.254
Dec 02 10:05:27 np0005541914.localdomain ovn_metadata_agent[159477]: User-Agent: curl/7.84.0
Dec 02 10:05:27 np0005541914.localdomain ovn_metadata_agent[159477]: X-Forwarded-For: 10.100.0.12
Dec 02 10:05:27 np0005541914.localdomain ovn_metadata_agent[159477]: X-Ovn-Network-Id: 45d02cf1-f511-4416-b7c1-b37c417f16f9 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Dec 02 10:05:28 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:28Z|00114|binding|INFO|Releasing lport 0999b431-c362-4180-a7a9-8664fe007369 from this chassis (sb_readonly=0)
Dec 02 10:05:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:28.203 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:28 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:05:28 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:05:28 np0005541914.localdomain podman[312426]: 2025-12-02 10:05:28.206112736 +0000 UTC m=+0.098173000 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:05:28 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:05:28 np0005541914.localdomain ceph-mon[301710]: pgmap v183: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 02 10:05:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v184: 177 pgs: 177 active+clean; 225 MiB data, 890 MiB used, 41 GiB / 42 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 02 10:05:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:29.803 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:30 np0005541914.localdomain haproxy-metadata-proxy-45d02cf1-f511-4416-b7c1-b37c417f16f9[312063]: 10.100.0.12:53634 [02/Dec/2025:10:05:27.743] listener listener/metadata 0/0/0/2378/2378 200 1657 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.121 159597 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.122 159597 INFO eventlet.wsgi.server [-] 10.100.0.12,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1673 time: 2.3756862
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.243 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.244 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.244 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.245 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.245 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.247 281049 INFO nova.compute.manager [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Terminating instance
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.248 281049 DEBUG nova.compute.manager [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 02 10:05:30 np0005541914.localdomain kernel: device tapa0a73e76-68 left promiscuous mode
Dec 02 10:05:30 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669930.3291] device (tapa0a73e76-68): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Dec 02 10:05:30 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:30Z|00115|binding|INFO|Releasing lport a0a73e76-685f-4ba0-87b5-5dd27b54fab4 from this chassis (sb_readonly=0)
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.359 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:30 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:30Z|00116|binding|INFO|Setting lport a0a73e76-685f-4ba0-87b5-5dd27b54fab4 down in Southbound
Dec 02 10:05:30 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:30Z|00117|binding|INFO|Removing iface tapa0a73e76-68 ovn-installed in OVS
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.363 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.368 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:9d:c1 10.100.0.12'], port_security=['fa:16:3e:16:9d:c1 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'abf8d33c-4e24-4d26-af41-b01c828c67e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50df25ee29424615807a458690cdf8d7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2b257864-5151-448f-941d-2c9a748f5881, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=a0a73e76-685f-4ba0-87b5-5dd27b54fab4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.369 159483 INFO neutron.agent.ovn.metadata.agent [-] Port a0a73e76-685f-4ba0-87b5-5dd27b54fab4 in datapath 45d02cf1-f511-4416-b7c1-b37c417f16f9 unbound from our chassis
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.370 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 45d02cf1-f511-4416-b7c1-b37c417f16f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.371 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe9994f-d7c2-4cb0-a4bc-5d29824c2ed4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.371 159483 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9 namespace which is not needed anymore
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.375 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:30 np0005541914.localdomain systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec 02 10:05:30 np0005541914.localdomain systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000a.scope: Consumed 14.122s CPU time.
Dec 02 10:05:30 np0005541914.localdomain systemd-machined[202765]: Machine qemu-5-instance-0000000a terminated.
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.472 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.477 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.489 281049 INFO nova.virt.libvirt.driver [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Instance destroyed successfully.
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.491 281049 DEBUG nova.objects.instance [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lazy-loading 'resources' on Instance uuid abf8d33c-4e24-4d26-af41-b01c828c67e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.507 281049 DEBUG nova.virt.libvirt.vif [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T10:04:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='guest-instance-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-guest-test.domaintest.com',id=10,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP/dbfwF7RFRTDuwB6jwzuSQ/IcUc/koBGae2h16UX9iSnGmmWafAjmR0zhsoi8E87Oi2Cm1JEv8wzMjtBlM1hsGOt9Lg/6ZEqGVxh82xbfu37aVfdDp2kn2MPZvfs8d3A==',key_name='tempest-keypair-1080862001',keypairs=<?>,launch_index=0,launched_at=2025-12-02T10:05:06Z,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='50df25ee29424615807a458690cdf8d7',ramdisk_id='',reservation_id='r-5yub6qye',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersV294TestFqdnHostnames-2112874438',owner_user_name='tempest-ServersV294TestFqdnHostnames-2112874438-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T10:05:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='955214da09cd44dba70e1a06eabc9023',uuid=abf8d33c-4e24-4d26-af41-b01c828c67e0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.508 281049 DEBUG nova.network.os_vif_util [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Converting VIF {"id": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "address": "fa:16:3e:16:9d:c1", "network": {"id": "45d02cf1-f511-4416-b7c1-b37c417f16f9", "bridge": "br-int", "label": "tempest-ServersV294TestFqdnHostnames-1627103925-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "50df25ee29424615807a458690cdf8d7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0a73e76-68", "ovs_interfaceid": "a0a73e76-685f-4ba0-87b5-5dd27b54fab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.509 281049 DEBUG nova.network.os_vif_util [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.509 281049 DEBUG os_vif [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.511 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.512 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0a73e76-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.513 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.516 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.518 281049 INFO os_vif [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:9d:c1,bridge_name='br-int',has_traffic_filtering=True,id=a0a73e76-685f-4ba0-87b5-5dd27b54fab4,network=Network(45d02cf1-f511-4416-b7c1-b37c417f16f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0a73e76-68')
Dec 02 10:05:30 np0005541914.localdomain neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [NOTICE]   (312061) : haproxy version is 2.8.14-c23fe91
Dec 02 10:05:30 np0005541914.localdomain neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [NOTICE]   (312061) : path to executable is /usr/sbin/haproxy
Dec 02 10:05:30 np0005541914.localdomain neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [WARNING]  (312061) : Exiting Master process...
Dec 02 10:05:30 np0005541914.localdomain neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [ALERT]    (312061) : Current worker (312063) exited with code 143 (Terminated)
Dec 02 10:05:30 np0005541914.localdomain neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9[312057]: [WARNING]  (312061) : All workers exited. Exiting... (0)
Dec 02 10:05:30 np0005541914.localdomain systemd[1]: libpod-bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241.scope: Deactivated successfully.
Dec 02 10:05:30 np0005541914.localdomain podman[312471]: 2025-12-02 10:05:30.55241396 +0000 UTC m=+0.081188187 container died bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Dec 02 10:05:30 np0005541914.localdomain ceph-mon[301710]: pgmap v184: 177 pgs: 177 active+clean; 225 MiB data, 890 MiB used, 41 GiB / 42 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 02 10:05:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241-userdata-shm.mount: Deactivated successfully.
Dec 02 10:05:30 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5520c5903efa84481bfea75a392635964a1c0dc18350544c148b6b54d47a4621-merged.mount: Deactivated successfully.
Dec 02 10:05:30 np0005541914.localdomain podman[312471]: 2025-12-02 10:05:30.591134161 +0000 UTC m=+0.119908328 container cleanup bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:05:30 np0005541914.localdomain podman[312508]: 2025-12-02 10:05:30.617664397 +0000 UTC m=+0.061260694 container cleanup bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 02 10:05:30 np0005541914.localdomain systemd[1]: libpod-conmon-bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241.scope: Deactivated successfully.
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.630 281049 DEBUG nova.compute.manager [req-95756fc1-76c1-4302-90f7-9c8a1ec7b14b req-63c8e17c-e9cd-4fd3-97d2-90e216b2f2b8 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-vif-unplugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.631 281049 DEBUG oslo_concurrency.lockutils [req-95756fc1-76c1-4302-90f7-9c8a1ec7b14b req-63c8e17c-e9cd-4fd3-97d2-90e216b2f2b8 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.631 281049 DEBUG oslo_concurrency.lockutils [req-95756fc1-76c1-4302-90f7-9c8a1ec7b14b req-63c8e17c-e9cd-4fd3-97d2-90e216b2f2b8 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.631 281049 DEBUG oslo_concurrency.lockutils [req-95756fc1-76c1-4302-90f7-9c8a1ec7b14b req-63c8e17c-e9cd-4fd3-97d2-90e216b2f2b8 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.631 281049 DEBUG nova.compute.manager [req-95756fc1-76c1-4302-90f7-9c8a1ec7b14b req-63c8e17c-e9cd-4fd3-97d2-90e216b2f2b8 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] No waiting events found dispatching network-vif-unplugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.632 281049 DEBUG nova.compute.manager [req-95756fc1-76c1-4302-90f7-9c8a1ec7b14b req-63c8e17c-e9cd-4fd3-97d2-90e216b2f2b8 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-vif-unplugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 02 10:05:30 np0005541914.localdomain podman[312526]: 2025-12-02 10:05:30.689662341 +0000 UTC m=+0.081221549 container remove bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.693 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d5926755-ff09-4456-a791-7d44ecb1da11]: (4, ('Tue Dec  2 10:05:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9 (bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241)\nbd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241\nTue Dec  2 10:05:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9 (bd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241)\nbd104b98203782d1c4a69cfe6105f5bbd3adf18759a9e589f2f068d902481241\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.696 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[9974c7cc-7aa1-421f-b895-5221962c29ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.697 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap45d02cf1-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.699 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:30 np0005541914.localdomain kernel: device tap45d02cf1-f0 left promiscuous mode
Dec 02 10:05:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:30.710 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.713 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[626fcfc3-fee1-46e8-952f-9968fa860a4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.726 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd88f42-d911-4f84-ae0c-8882ac582ee5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.728 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4a936d11-4d83-4184-9fcb-f6170d0a8ecd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.739 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[c227989c-aebf-476d-90f2-9314faa20104]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1206974, 'reachable_time': 21153, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 
'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312543, 'error': None, 'target': 'ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.741 159602 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-45d02cf1-f511-4416-b7c1-b37c417f16f9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 02 10:05:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:30.741 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb53196-8b04-421e-a35c-05d7752315af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v185: 177 pgs: 177 active+clean; 225 MiB data, 890 MiB used, 41 GiB / 42 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 02 10:05:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:31.206 281049 INFO nova.virt.libvirt.driver [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Deleting instance files /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0_del
Dec 02 10:05:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:31.207 281049 INFO nova.virt.libvirt.driver [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Deletion of /var/lib/nova/instances/abf8d33c-4e24-4d26-af41-b01c828c67e0_del complete
Dec 02 10:05:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:31.298 281049 INFO nova.compute.manager [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Took 1.05 seconds to destroy the instance on the hypervisor.
Dec 02 10:05:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:31.298 281049 DEBUG oslo.service.loopingcall [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 02 10:05:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:31.299 281049 DEBUG nova.compute.manager [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 02 10:05:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:31.299 281049 DEBUG nova.network.neutron [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 02 10:05:31 np0005541914.localdomain systemd[1]: run-netns-ovnmeta\x2d45d02cf1\x2df511\x2d4416\x2db7c1\x2db37c417f16f9.mount: Deactivated successfully.
Dec 02 10:05:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:31.729 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:31 np0005541914.localdomain ceph-mon[301710]: pgmap v185: 177 pgs: 177 active+clean; 225 MiB data, 890 MiB used, 41 GiB / 42 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 02 10:05:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:05:32 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:05:32.561 2 INFO neutron.agent.securitygroups_rpc [req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 req-ec3b1f9e-8373-4159-93fc-d4de0998f605 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Security group member updated ['2e537c1e-d2f3-49fb-8c4c-0f6b2c3e354b']
Dec 02 10:05:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:32.662 281049 DEBUG nova.network.neutron [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:05:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:32.682 281049 INFO nova.compute.manager [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Took 1.38 seconds to deallocate network for instance.
Dec 02 10:05:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:32.699 281049 DEBUG nova.compute.manager [req-10bc4d71-e39c-459c-bfd8-3fa1c5c9d909 req-b8ecff71-8327-428a-84bb-293f464d942f dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-vif-deleted-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:05:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:32.743 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:05:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:32.743 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:05:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:32.793 281049 DEBUG oslo_concurrency.processutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:05:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:32.966 281049 DEBUG nova.compute.manager [req-c4042815-cffe-404e-898b-acca60f0c037 req-606d737d-196f-4bcd-b340-87afb17d5e0e dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:05:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:32.967 281049 DEBUG oslo_concurrency.lockutils [req-c4042815-cffe-404e-898b-acca60f0c037 req-606d737d-196f-4bcd-b340-87afb17d5e0e dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:05:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:32.967 281049 DEBUG oslo_concurrency.lockutils [req-c4042815-cffe-404e-898b-acca60f0c037 req-606d737d-196f-4bcd-b340-87afb17d5e0e dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:05:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:32.967 281049 DEBUG oslo_concurrency.lockutils [req-c4042815-cffe-404e-898b-acca60f0c037 req-606d737d-196f-4bcd-b340-87afb17d5e0e dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:32.968 281049 DEBUG nova.compute.manager [req-c4042815-cffe-404e-898b-acca60f0c037 req-606d737d-196f-4bcd-b340-87afb17d5e0e dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] No waiting events found dispatching network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:05:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:32.968 281049 WARNING nova.compute.manager [req-c4042815-cffe-404e-898b-acca60f0c037 req-606d737d-196f-4bcd-b340-87afb17d5e0e dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Received unexpected event network-vif-plugged-a0a73e76-685f-4ba0-87b5-5dd27b54fab4 for instance with vm_state deleted and task_state None.
Dec 02 10:05:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v186: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 02 10:05:33 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:05:33 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2691401196' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:05:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:33.248 281049 DEBUG oslo_concurrency.processutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:05:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:33.255 281049 DEBUG nova.compute.provider_tree [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:05:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:33.277 281049 DEBUG nova.scheduler.client.report [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:05:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:33.304 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:33.335 281049 INFO nova.scheduler.client.report [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Deleted allocations for instance abf8d33c-4e24-4d26-af41-b01c828c67e0
Dec 02 10:05:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:33.403 281049 DEBUG oslo_concurrency.lockutils [None req-b389ec80-c5bd-4cf8-bcab-ec830d82cd86 955214da09cd44dba70e1a06eabc9023 50df25ee29424615807a458690cdf8d7 - - default default] Lock "abf8d33c-4e24-4d26-af41-b01c828c67e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:05:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:05:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:05:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:05:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:05:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:33.675 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:33 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:33.676 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:05:33 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:33.678 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:05:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:05:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19214 "" "Go-http-client/1.1"
Dec 02 10:05:34 np0005541914.localdomain ceph-mon[301710]: pgmap v186: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Dec 02 10:05:34 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2691401196' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:05:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v187: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 02 10:05:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:35.515 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:36 np0005541914.localdomain ceph-mon[301710]: pgmap v187: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 02 10:05:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:36.765 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:05:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:05:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:05:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:05:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:05:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:05:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v188: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 02 10:05:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:05:37 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:05:37 np0005541914.localdomain podman[312585]: 2025-12-02 10:05:37.561107675 +0000 UTC m=+0.051153164 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 02 10:05:37 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:05:37 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:05:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:37.644 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:38 np0005541914.localdomain ceph-mon[301710]: pgmap v188: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 02 10:05:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:05:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:05:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:05:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v189: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 02 10:05:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:05:39 np0005541914.localdomain podman[312609]: 2025-12-02 10:05:39.091754186 +0000 UTC m=+0.079477635 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.140949) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939141014, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2494, "num_deletes": 256, "total_data_size": 3266003, "memory_usage": 3315520, "flush_reason": "Manual Compaction"}
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Dec 02 10:05:39 np0005541914.localdomain systemd[1]: tmp-crun.NrPpdt.mount: Deactivated successfully.
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939156245, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 2111274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17047, "largest_seqno": 19536, "table_properties": {"data_size": 2102355, "index_size": 5489, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19897, "raw_average_key_size": 21, "raw_value_size": 2083908, "raw_average_value_size": 2205, "num_data_blocks": 241, "num_entries": 945, "num_filter_entries": 945, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669769, "oldest_key_time": 1764669769, "file_creation_time": 1764669939, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 15340 microseconds, and 5843 cpu microseconds.
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.156292) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 2111274 bytes OK
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.156318) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.158137) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.158162) EVENT_LOG_v1 {"time_micros": 1764669939158156, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.158187) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3254733, prev total WAL file size 3254733, number of live WAL files 2.
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.159365) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. '7061786F73003132303439' seq:0, type:0; will stop at (end)
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(2061KB)], [24(16MB)]
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939159479, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 19278954, "oldest_snapshot_seqno": -1}
Dec 02 10:05:39 np0005541914.localdomain podman[312609]: 2025-12-02 10:05:39.182996852 +0000 UTC m=+0.170720301 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 02 10:05:39 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:05:39 np0005541914.localdomain podman[312607]: 2025-12-02 10:05:39.198485308 +0000 UTC m=+0.195919435 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 12431 keys, 16638727 bytes, temperature: kUnknown
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939255275, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 16638727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16567706, "index_size": 38856, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 333168, "raw_average_key_size": 26, "raw_value_size": 16355659, "raw_average_value_size": 1315, "num_data_blocks": 1480, "num_entries": 12431, "num_filter_entries": 12431, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764669939, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.255720) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 16638727 bytes
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.258329) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.0 rd, 173.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 16.4 +0.0 blob) out(15.9 +0.0 blob), read-write-amplify(17.0) write-amplify(7.9) OK, records in: 12960, records dropped: 529 output_compression: NoCompression
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.258366) EVENT_LOG_v1 {"time_micros": 1764669939258349, "job": 12, "event": "compaction_finished", "compaction_time_micros": 95931, "compaction_time_cpu_micros": 44149, "output_level": 6, "num_output_files": 1, "total_output_size": 16638727, "num_input_records": 12960, "num_output_records": 12431, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939259030, "job": 12, "event": "table_file_deletion", "file_number": 26}
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764669939261678, "job": 12, "event": "table_file_deletion", "file_number": 24}
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.159258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.261743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.261749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.261752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.261755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:05:39 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:05:39.261758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:05:39 np0005541914.localdomain podman[312615]: 2025-12-02 10:05:39.163358918 +0000 UTC m=+0.145571847 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller)
Dec 02 10:05:39 np0005541914.localdomain podman[312608]: 2025-12-02 10:05:39.272526835 +0000 UTC m=+0.262964967 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:05:39 np0005541914.localdomain podman[312607]: 2025-12-02 10:05:39.280580053 +0000 UTC m=+0.278014180 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Dec 02 10:05:39 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:05:39 np0005541914.localdomain podman[312615]: 2025-12-02 10:05:39.297108782 +0000 UTC m=+0.279321671 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true)
Dec 02 10:05:39 np0005541914.localdomain podman[312608]: 2025-12-02 10:05:39.308852963 +0000 UTC m=+0.299291045 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 10:05:39 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:05:39 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:05:39 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:39.680 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:05:40 np0005541914.localdomain ceph-mon[301710]: pgmap v189: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Dec 02 10:05:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:40.517 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v190: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 02 10:05:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:41.791 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:41 np0005541914.localdomain ceph-mon[301710]: pgmap v190: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 02 10:05:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:05:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:05:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:05:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:05:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:05:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:05:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:05:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:05:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:05:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:05:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:05:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v191: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 02 10:05:44 np0005541914.localdomain ceph-mon[301710]: pgmap v191: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Dec 02 10:05:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v192: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:05:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:45.488 281049 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764669930.4870412, abf8d33c-4e24-4d26-af41-b01c828c67e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:05:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:45.489 281049 INFO nova.compute.manager [-] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] VM Stopped (Lifecycle Event)
Dec 02 10:05:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:45.521 281049 DEBUG nova.compute.manager [None req-9516d488-b71b-4427-a395-0218dbfe3eca - - - - - -] [instance: abf8d33c-4e24-4d26-af41-b01c828c67e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:05:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:45.521 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:46 np0005541914.localdomain ceph-mon[301710]: pgmap v192: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:05:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:46.834 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v193: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:05:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:05:48 np0005541914.localdomain ceph-mon[301710]: pgmap v193: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:05:48 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e112 e112: 6 total, 6 up, 6 in
Dec 02 10:05:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:05:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:05:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:05:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:05:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:05:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v195: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 7.1 KiB/s rd, 1.2 KiB/s wr, 10 op/s
Dec 02 10:05:49 np0005541914.localdomain podman[312692]: 2025-12-02 10:05:49.086940083 +0000 UTC m=+0.087784510 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:05:49 np0005541914.localdomain podman[312692]: 2025-12-02 10:05:49.095973051 +0000 UTC m=+0.096817478 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:05:49 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:05:49 np0005541914.localdomain ceph-mon[301710]: osdmap e112: 6 total, 6 up, 6 in
Dec 02 10:05:49 np0005541914.localdomain systemd[1]: tmp-crun.Koyr2x.mount: Deactivated successfully.
Dec 02 10:05:49 np0005541914.localdomain podman[312693]: 2025-12-02 10:05:49.182098089 +0000 UTC m=+0.181196012 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, version=9.6, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec 02 10:05:49 np0005541914.localdomain podman[312693]: 2025-12-02 10:05:49.199909648 +0000 UTC m=+0.199007571 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 10:05:49 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:05:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:05:49.440 262347 INFO neutron.agent.linux.ip_lib [None req-2c160541-b8d6-4fe6-9f02-3e7f05aabdcf - - - - - -] Device tap53cc812e-e4 cannot be used as it has no MAC address
Dec 02 10:05:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:49.461 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:49 np0005541914.localdomain kernel: device tap53cc812e-e4 entered promiscuous mode
Dec 02 10:05:49 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669949.4696] manager: (tap53cc812e-e4): new Generic device (/org/freedesktop/NetworkManager/Devices/27)
Dec 02 10:05:49 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:49Z|00118|binding|INFO|Claiming lport 53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa for this chassis.
Dec 02 10:05:49 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:49Z|00119|binding|INFO|53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa: Claiming unknown
Dec 02 10:05:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:49.472 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:49 np0005541914.localdomain systemd-udevd[312744]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:05:49 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:49.478 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-ea526ff6-f129-410b-b41d-c614aa65ab89', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea526ff6-f129-410b-b41d-c614aa65ab89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '91b4824d03bd43c4aca137037a18bd3d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5b1c8ab-7d8a-4df0-87a7-0efffca70a64, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:05:49 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:49.481 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa in datapath ea526ff6-f129-410b-b41d-c614aa65ab89 bound to our chassis
Dec 02 10:05:49 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:49.482 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ea526ff6-f129-410b-b41d-c614aa65ab89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:05:49 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:49.485 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab287f3-a202-4d71-af12-2fe3e92a1b05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:49 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap53cc812e-e4: No such device
Dec 02 10:05:49 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:49Z|00120|binding|INFO|Setting lport 53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa ovn-installed in OVS
Dec 02 10:05:49 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:49Z|00121|binding|INFO|Setting lport 53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa up in Southbound
Dec 02 10:05:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:49.506 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:49 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap53cc812e-e4: No such device
Dec 02 10:05:49 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap53cc812e-e4: No such device
Dec 02 10:05:49 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap53cc812e-e4: No such device
Dec 02 10:05:49 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap53cc812e-e4: No such device
Dec 02 10:05:49 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap53cc812e-e4: No such device
Dec 02 10:05:49 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap53cc812e-e4: No such device
Dec 02 10:05:49 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap53cc812e-e4: No such device
Dec 02 10:05:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:49.543 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:49.569 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:05:49.645 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:05:49Z, description=, device_id=798ad2c1-39c2-42cf-b43f-5f28ae054b5b, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4035542310>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c63a30>], id=e5f7578c-6093-4cfa-893b-bf9285530f81, ip_allocation=immediate, mac_address=fa:16:3e:80:3f:18, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1009, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:05:49Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:05:49 np0005541914.localdomain podman[312796]: 2025-12-02 10:05:49.863356989 +0000 UTC m=+0.045813179 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:05:49 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:05:49 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:05:49 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:05:50 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:05:50.072 262347 INFO neutron.agent.dhcp.agent [None req-b3734218-3da9-483f-bc04-d6eb76ad8c9c - - - - - -] DHCP configuration for ports {'e5f7578c-6093-4cfa-893b-bf9285530f81'} is completed
Dec 02 10:05:50 np0005541914.localdomain ceph-mon[301710]: pgmap v195: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 7.1 KiB/s rd, 1.2 KiB/s wr, 10 op/s
Dec 02 10:05:50 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e113 e113: 6 total, 6 up, 6 in
Dec 02 10:05:50 np0005541914.localdomain podman[312851]: 2025-12-02 10:05:50.374244191 +0000 UTC m=+0.101832403 container create c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:05:50 np0005541914.localdomain systemd[1]: Started libpod-conmon-c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68.scope.
Dec 02 10:05:50 np0005541914.localdomain podman[312851]: 2025-12-02 10:05:50.326428771 +0000 UTC m=+0.054017043 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:05:50 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:05:50 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/277e8dcddf9fb3f284bdf1370a104896fdcdf3617b2ea15ab092627caa367817/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:05:50 np0005541914.localdomain podman[312851]: 2025-12-02 10:05:50.454568732 +0000 UTC m=+0.182156964 container init c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 10:05:50 np0005541914.localdomain podman[312851]: 2025-12-02 10:05:50.463378102 +0000 UTC m=+0.190966334 container start c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 10:05:50 np0005541914.localdomain dnsmasq[312871]: started, version 2.85 cachesize 150
Dec 02 10:05:50 np0005541914.localdomain dnsmasq[312871]: DNS service limited to local subnets
Dec 02 10:05:50 np0005541914.localdomain dnsmasq[312871]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:05:50 np0005541914.localdomain dnsmasq[312871]: warning: no upstream servers configured
Dec 02 10:05:50 np0005541914.localdomain dnsmasq-dhcp[312871]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d
Dec 02 10:05:50 np0005541914.localdomain dnsmasq[312871]: read /var/lib/neutron/dhcp/ea526ff6-f129-410b-b41d-c614aa65ab89/addn_hosts - 0 addresses
Dec 02 10:05:50 np0005541914.localdomain dnsmasq-dhcp[312871]: read /var/lib/neutron/dhcp/ea526ff6-f129-410b-b41d-c614aa65ab89/host
Dec 02 10:05:50 np0005541914.localdomain dnsmasq-dhcp[312871]: read /var/lib/neutron/dhcp/ea526ff6-f129-410b-b41d-c614aa65ab89/opts
Dec 02 10:05:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:50.525 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:50 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:05:50.648 262347 INFO neutron.agent.dhcp.agent [None req-916945a0-bb02-4589-8d2d-b48f8ce87a93 - - - - - -] DHCP configuration for ports {'7e5719fd-341c-4709-ab71-aae4434cc175'} is completed
Dec 02 10:05:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v197: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 8.9 KiB/s rd, 1.5 KiB/s wr, 12 op/s
Dec 02 10:05:51 np0005541914.localdomain ceph-mon[301710]: osdmap e113: 6 total, 6 up, 6 in
Dec 02 10:05:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:51.531 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:51.835 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:52 np0005541914.localdomain ceph-mon[301710]: pgmap v197: 177 pgs: 177 active+clean; 145 MiB data, 767 MiB used, 41 GiB / 42 GiB avail; 8.9 KiB/s rd, 1.5 KiB/s wr, 12 op/s
Dec 02 10:05:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:05:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v198: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Dec 02 10:05:53 np0005541914.localdomain dnsmasq[312871]: exiting on receipt of SIGTERM
Dec 02 10:05:53 np0005541914.localdomain podman[312890]: 2025-12-02 10:05:53.717286538 +0000 UTC m=+0.057898861 container kill c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:05:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:53Z|00122|binding|INFO|Removing iface tap53cc812e-e4 ovn-installed in OVS
Dec 02 10:05:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:05:53Z|00123|binding|INFO|Removing lport 53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa ovn-installed in OVS
Dec 02 10:05:53 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:53.730 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port a58a7c6f-60c0-401e-9cb3-444e772d3aeb with type ""
Dec 02 10:05:53 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:53.732 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-ea526ff6-f129-410b-b41d-c614aa65ab89', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ea526ff6-f129-410b-b41d-c614aa65ab89', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '91b4824d03bd43c4aca137037a18bd3d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5b1c8ab-7d8a-4df0-87a7-0efffca70a64, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:05:53 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:53.734 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 53cc812e-e4ad-4ac9-b3ba-9a1841a8d8aa in datapath ea526ff6-f129-410b-b41d-c614aa65ab89 unbound from our chassis
Dec 02 10:05:53 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:53.736 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ea526ff6-f129-410b-b41d-c614aa65ab89 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:05:53 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:05:53.737 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[724e625f-f205-4a3d-8c8e-3239717ecb7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:05:53 np0005541914.localdomain systemd[1]: libpod-c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68.scope: Deactivated successfully.
Dec 02 10:05:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:53.766 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:53 np0005541914.localdomain podman[312902]: 2025-12-02 10:05:53.786730864 +0000 UTC m=+0.056755466 container died c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 02 10:05:53 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68-userdata-shm.mount: Deactivated successfully.
Dec 02 10:05:53 np0005541914.localdomain podman[312902]: 2025-12-02 10:05:53.893988272 +0000 UTC m=+0.164012844 container cleanup c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:05:53 np0005541914.localdomain systemd[1]: libpod-conmon-c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68.scope: Deactivated successfully.
Dec 02 10:05:53 np0005541914.localdomain podman[312914]: 2025-12-02 10:05:53.919826846 +0000 UTC m=+0.137065835 container remove c6cc2c5d523b8387b7a3c2b5844b1940bf4afbc48195adf5cae6d2900ee23f68 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ea526ff6-f129-410b-b41d-c614aa65ab89, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:05:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:53.931 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:53 np0005541914.localdomain kernel: device tap53cc812e-e4 left promiscuous mode
Dec 02 10:05:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:53.944 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:54 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:05:54.070 262347 INFO neutron.agent.dhcp.agent [None req-fc5b86c3-8d59-4bf9-b63e-da03ae6e02d7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:05:54 np0005541914.localdomain ceph-mon[301710]: pgmap v198: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Dec 02 10:05:54 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:05:54.255 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:05:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:05:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:54.573 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:54 np0005541914.localdomain podman[312933]: 2025-12-02 10:05:54.58230776 +0000 UTC m=+0.084306214 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 02 10:05:54 np0005541914.localdomain podman[312933]: 2025-12-02 10:05:54.619575266 +0000 UTC m=+0.121573720 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:05:54 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:05:54 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-277e8dcddf9fb3f284bdf1370a104896fdcdf3617b2ea15ab092627caa367817-merged.mount: Deactivated successfully.
Dec 02 10:05:54 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2dea526ff6\x2df129\x2d410b\x2db41d\x2dc614aa65ab89.mount: Deactivated successfully.
Dec 02 10:05:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v199: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Dec 02 10:05:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:55.391 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:55.526 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:55 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 e114: 6 total, 6 up, 6 in
Dec 02 10:05:56 np0005541914.localdomain ceph-mon[301710]: pgmap v199: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 3.9 KiB/s wr, 49 op/s
Dec 02 10:05:56 np0005541914.localdomain ceph-mon[301710]: osdmap e114: 6 total, 6 up, 6 in
Dec 02 10:05:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:56.865 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:05:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v201: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 2.4 KiB/s wr, 36 op/s
Dec 02 10:05:57 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2634255040' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:05:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:05:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:57.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:05:58 np0005541914.localdomain ceph-mon[301710]: pgmap v201: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 2.4 KiB/s wr, 36 op/s
Dec 02 10:05:58 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2150290219' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:05:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v202: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 2.2 KiB/s wr, 33 op/s
Dec 02 10:05:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:05:59.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:06:00 np0005541914.localdomain ceph-mon[301710]: pgmap v202: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 2.2 KiB/s wr, 33 op/s
Dec 02 10:06:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:00.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:06:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:00.529 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:00 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:06:00 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:06:00 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:06:00 np0005541914.localdomain podman[312969]: 2025-12-02 10:06:00.871501208 +0000 UTC m=+0.067441325 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:06:00 np0005541914.localdomain systemd[1]: tmp-crun.01FkQT.mount: Deactivated successfully.
Dec 02 10:06:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:00.975 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v203: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.9 KiB/s wr, 29 op/s
Dec 02 10:06:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:01.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:06:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:01.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:06:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:01.898 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:01 np0005541914.localdomain ceph-mon[301710]: pgmap v203: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.9 KiB/s wr, 29 op/s
Dec 02 10:06:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:06:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:02.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:06:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:02.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:06:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:02.546 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:06:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:02.546 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:06:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:02.547 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:06:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:02.547 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:06:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:02.547 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:06:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:06:03 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/209252438' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:06:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v204: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:03.016 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:06:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/209252438' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:06:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:03.177 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:06:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:03.179 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:06:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:03.179 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:06:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:03.244 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:06:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:03.246 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11545MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:06:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:03.246 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:06:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:03.247 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:06:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:06:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:06:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:06:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:06:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:03.660 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:06:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:03.661 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:06:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:03.680 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:06:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:06:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19214 "" "Go-http-client/1.1"
Dec 02 10:06:04 np0005541914.localdomain ceph-mon[301710]: pgmap v204: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/840405198' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:06:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:06:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/354601536' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:06:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:04.127 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:06:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:04.132 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:06:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:04.151 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:06:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:04.169 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:06:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:04.169 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
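update_available_resource above shells out to `ceph df` through oslo.concurrency's processutils (the paired "Running cmd (subprocess)" and "CMD ... returned: 0" lines). A minimal sketch reproducing the same call, assuming the client.openstack keyring and /etc/ceph/ceph.conf referenced in the log are in place; the JSON field access is illustrative:

    # The command line is copied verbatim from the DEBUG lines above.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # total_avail_bytes is part of the "stats" section of `ceph df --format=json`
    print(stats['stats']['total_avail_bytes'])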
Dec 02 10:06:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:06:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2210170190' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:06:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:06:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2210170190' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:06:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v205: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/568623931' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:06:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/354601536' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:06:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2210170190' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:06:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2210170190' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
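The mon lines above show another client.openstack connection dispatching both a cluster-wide df and an "osd pool get-quota" for the volumes pool. A hedged CLI equivalent of that second query, reusing the same --id/--conf arguments that appear in the ceph df invocations elsewhere in this log:

    # CLI equivalent of the mon_command {"prefix":"osd pool get-quota",
    # "pool": "volumes"} dispatched above.
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'osd', 'pool', 'get-quota', 'volumes', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    print(out)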
Dec 02 10:06:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:05.171 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:06:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:05.172 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:06:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:05.173 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:06:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:05.189 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:06:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:05.190 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:06:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:05.531 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:05 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:05.630 2 INFO neutron.agent.securitygroups_rpc [None req-616a401d-f858-48e0-bbb1-73e58fa51cbe 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['1e6a52d4-a530-4d1c-b3c3-fd5c65190a35']
Dec 02 10:06:05 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:05.896 2 INFO neutron.agent.securitygroups_rpc [None req-7c216765-b201-4648-8f14-301becf47f8c 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['1e6a52d4-a530-4d1c-b3c3-fd5c65190a35']
Dec 02 10:06:06 np0005541914.localdomain ceph-mon[301710]: pgmap v205: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:06.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:06:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:06.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:06:06 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:06.717 2 INFO neutron.agent.securitygroups_rpc [None req-008f26a2-2a9a-4275-8fd3-0db0ae3965dc 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']
Dec 02 10:06:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:06:06
Dec 02 10:06:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:06:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:06:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['manila_data', '.mgr', 'volumes', 'vms', 'backups', 'images', 'manila_metadata']
Dec 02 10:06:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:06:06 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:06.888 2 INFO neutron.agent.securitygroups_rpc [None req-2ee1f36b-3e28-45da-9995-f4334e4d09c3 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']
Dec 02 10:06:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:06.947 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:06:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:06:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:06:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:06:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:06:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v206: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16)
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:06:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:06:07 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:07.147 2 INFO neutron.agent.securitygroups_rpc [None req-07b24d70-b40e-4b6b-a4d7-126ffe953c74 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']
Dec 02 10:06:07 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:07.299 2 INFO neutron.agent.securitygroups_rpc [None req-44887692-98ed-4bee-8196-cf2b44c61f3b 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']
Dec 02 10:06:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:06:07 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:07.454 2 INFO neutron.agent.securitygroups_rpc [None req-b501e38f-a705-4a3f-a758-0a1e958e6279 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']
Dec 02 10:06:07 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:07.591 2 INFO neutron.agent.securitygroups_rpc [None req-7ad226a4-da1c-44ea-8953-97466b6b7a50 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']
Dec 02 10:06:07 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:07.947 2 INFO neutron.agent.securitygroups_rpc [None req-45db658a-901d-430c-aa11-8109c9f781eb 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']
Dec 02 10:06:08 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:08.147 2 INFO neutron.agent.securitygroups_rpc [None req-525377d0-23a2-43bc-9048-c1bbf6915b2f 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']
Dec 02 10:06:08 np0005541914.localdomain ceph-mon[301710]: pgmap v206: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:08 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:06:08.252 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:06:08Z, description=, device_id=60f2c6f6-f230-49f9-b983-bd94d1e33602, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cbd970>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cbdd60>], id=2c681799-15f6-4f8c-b3e7-c3f2ec57646e, ip_allocation=immediate, mac_address=fa:16:3e:7f:05:c2, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1118, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:06:08Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:06:08 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:08.366 2 INFO neutron.agent.securitygroups_rpc [None req-09e30383-7aff-4fc0-a180-6509153c799d 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']
Dec 02 10:06:08 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:06:08 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:06:08 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:06:08 np0005541914.localdomain podman[313051]: 2025-12-02 10:06:08.480799163 +0000 UTC m=+0.065593578 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:06:08 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:08.540 2 INFO neutron.agent.securitygroups_rpc [None req-0d565c71-9045-4969-b270-4f682b986cb3 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['9b89faa6-0a2d-4787-9ca6-c2d15c18c0cd']
Dec 02 10:06:08 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:06:08.687 262347 INFO neutron.agent.dhcp.agent [None req-e6f9e950-c382-40ad-8b4a-ba9dfbaa032e - - - - - -] DHCP configuration for ports {'2c681799-15f6-4f8c-b3e7-c3f2ec57646e'} is completed
Dec 02 10:06:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v207: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:09.129 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:09 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:09.731 2 INFO neutron.agent.securitygroups_rpc [None req-61a0e9aa-9477-43cf-af07-616f49d4b972 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['94ceebea-e233-4f36-9a23-49456abf3258']
Dec 02 10:06:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:06:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:06:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:06:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:06:10 np0005541914.localdomain systemd[1]: tmp-crun.OcqfEk.mount: Deactivated successfully.
Dec 02 10:06:10 np0005541914.localdomain podman[313073]: 2025-12-02 10:06:10.102183074 +0000 UTC m=+0.099672836 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:06:10 np0005541914.localdomain podman[313074]: 2025-12-02 10:06:10.064941019 +0000 UTC m=+0.062197933 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:06:10 np0005541914.localdomain podman[313074]: 2025-12-02 10:06:10.148992594 +0000 UTC m=+0.146249578 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:06:10 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:06:10 np0005541914.localdomain ceph-mon[301710]: pgmap v207: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:10 np0005541914.localdomain podman[313073]: 2025-12-02 10:06:10.184989841 +0000 UTC m=+0.182479643 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 10:06:10 np0005541914.localdomain podman[313086]: 2025-12-02 10:06:10.169372071 +0000 UTC m=+0.153465221 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 10:06:10 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:06:10 np0005541914.localdomain podman[313086]: 2025-12-02 10:06:10.26236002 +0000 UTC m=+0.246453180 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:06:10 np0005541914.localdomain podman[313075]: 2025-12-02 10:06:10.271732709 +0000 UTC m=+0.260879735 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 02 10:06:10 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:06:10 np0005541914.localdomain podman[313075]: 2025-12-02 10:06:10.293132616 +0000 UTC m=+0.282279652 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 02 10:06:10 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
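The systemd/podman block above is one periodic health-check cycle: systemd starts a transient "/usr/bin/podman healthcheck run <container-id>" unit, podman logs health_status=healthy followed by exec_died for each container, and the transient unit deactivates. A minimal sketch of running the same check by hand for one of the container IDs from the log (the subprocess usage is illustrative; the ID is copied from the ovn_metadata_agent lines above):

    # Manually trigger the health check systemd ran for ovn_metadata_agent.
    import subprocess

    cid = '225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1'
    subprocess.run(['podman', 'healthcheck', 'run', cid], check=True)
    # Exit status 0 corresponds to the health_status=healthy events logged above.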
Dec 02 10:06:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:10.534 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v208: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:11 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:11.483 2 INFO neutron.agent.securitygroups_rpc [None req-a112143f-1afa-4cab-a314-c7a0cf01690b 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['bc246512-f2e7-49c6-b3c6-e51d67208518']
Dec 02 10:06:11 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:11.736 2 INFO neutron.agent.securitygroups_rpc [None req-2651c95b-dd09-4d1c-945d-8112466f351e 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['bc246512-f2e7-49c6-b3c6-e51d67208518']
Dec 02 10:06:11 np0005541914.localdomain ceph-mon[301710]: pgmap v208: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:11.987 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:12.006 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:06:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:06:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:06:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:06:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:06:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:06:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:06:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:06:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:06:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:06:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:06:12 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:12.928 2 INFO neutron.agent.securitygroups_rpc [None req-b3b89eff-a637-4d13-a86c-dcfc347ff722 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['482dba13-8db1-4254-a853-7fa4b3df0a8e']
Dec 02 10:06:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v209: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:13 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:13.105 2 INFO neutron.agent.securitygroups_rpc [None req-19b017d9-1c5d-463c-8fd1-6e01ac942ab6 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['482dba13-8db1-4254-a853-7fa4b3df0a8e']
Dec 02 10:06:13 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:13.582 2 INFO neutron.agent.securitygroups_rpc [None req-fdfabc3e-7f87-43e4-a533-8789992c1455 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']
Dec 02 10:06:13 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:13.952 2 INFO neutron.agent.securitygroups_rpc [None req-64456867-c89f-4fe7-8d63-40c9e8a08ded e6f97ef89976422db171867e1c0c59f0 3f0966ca3eec4301b9d84b4543ff9fdf - - default default] Security group member updated ['a857935d-02ea-4e3d-98f4-258f4647959a']
Dec 02 10:06:14 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:14.035 2 INFO neutron.agent.securitygroups_rpc [None req-5a1a208f-7619-41f5-8ff5-57ae95d40ae9 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']
Dec 02 10:06:14 np0005541914.localdomain ceph-mon[301710]: pgmap v209: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:14 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:14.234 2 INFO neutron.agent.securitygroups_rpc [None req-ba5eb572-7899-4268-a4d4-30f2b4a8108a 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']
Dec 02 10:06:14 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:14.414 2 INFO neutron.agent.securitygroups_rpc [None req-9ba9d0cb-ece7-40eb-b408-fce3b926db2b e6f97ef89976422db171867e1c0c59f0 3f0966ca3eec4301b9d84b4543ff9fdf - - default default] Security group member updated ['a857935d-02ea-4e3d-98f4-258f4647959a']
Dec 02 10:06:14 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:14.449 2 INFO neutron.agent.securitygroups_rpc [None req-1dee4ff0-8cdb-4950-b86d-3d1f17272691 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']
Dec 02 10:06:14 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:14.868 2 INFO neutron.agent.securitygroups_rpc [None req-c68f94c3-1ecf-4886-9e93-85ed57a0441f 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']
Dec 02 10:06:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.441 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:06:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:06:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:15.536 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:15 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:15.566 2 INFO neutron.agent.securitygroups_rpc [None req-301242b3-4b9d-48c8-8e81-d6b06b4fcc41 e6f97ef89976422db171867e1c0c59f0 3f0966ca3eec4301b9d84b4543ff9fdf - - default default] Security group member updated ['a857935d-02ea-4e3d-98f4-258f4647959a']
Dec 02 10:06:15 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:15.600 2 INFO neutron.agent.securitygroups_rpc [None req-296124d6-ee7a-42c7-8fb1-fa352c7e4ccb 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['4bfb4e85-1f55-46a0-9d89-e38518cc2b18']
Dec 02 10:06:16 np0005541914.localdomain ceph-mon[301710]: pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:17.008 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v211: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:17 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:17.264 2 INFO neutron.agent.securitygroups_rpc [None req-bb5bb253-d3fa-4182-a08a-6c77d73857f6 07e8b8b380b44de1bc08a311f30e4dd1 4ae019f3db6248368641f9ff9e7acce4 - - default default] Security group rule updated ['dc8aaaaf-7a11-4a4d-8334-5511e0a6c147']
Dec 02 10:06:17 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:17.420 2 INFO neutron.agent.securitygroups_rpc [None req-c73fb2ed-db0d-4633-bf8b-3646a66fbf65 e6f97ef89976422db171867e1c0c59f0 3f0966ca3eec4301b9d84b4543ff9fdf - - default default] Security group member updated ['a857935d-02ea-4e3d-98f4-258f4647959a']
Dec 02 10:06:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:06:18 np0005541914.localdomain ceph-mon[301710]: pgmap v211: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:19.659 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:19 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:06:19 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:06:19 np0005541914.localdomain systemd[1]: tmp-crun.5MCw1Q.mount: Deactivated successfully.
Dec 02 10:06:19 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:06:19 np0005541914.localdomain podman[313175]: 2025-12-02 10:06:19.731819559 +0000 UTC m=+0.073483090 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 02 10:06:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:06:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:06:19 np0005541914.localdomain podman[313190]: 2025-12-02 10:06:19.866327886 +0000 UTC m=+0.099375997 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:06:19 np0005541914.localdomain podman[313191]: 2025-12-02 10:06:19.91848356 +0000 UTC m=+0.147403934 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 02 10:06:19 np0005541914.localdomain podman[313191]: 2025-12-02 10:06:19.931530361 +0000 UTC m=+0.160450745 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41)
Dec 02 10:06:19 np0005541914.localdomain podman[313190]: 2025-12-02 10:06:19.932001746 +0000 UTC m=+0.165049847 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:06:19 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:06:19 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
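Each "podman healthcheck run" unit above ends with a container health_status event followed by an exec_died event and the transient service deactivating. A minimal sketch, assuming the same exported journal file, of pulling the per-container health results out of those event lines:

    import re

    NAME_RE = re.compile(r"container_name=([^,)]+)")
    STATUS_RE = re.compile(r"health_status=(\w+)")

    def health_events(path="journal.txt"):
        # Yields (container_name, health_status) for each podman health-check event line.
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                if "container health_status" not in line:
                    continue
                name, status = NAME_RE.search(line), STATUS_RE.search(line)
                if name and status:
                    yield name.group(1), status.group(1)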
Dec 02 10:06:20 np0005541914.localdomain ceph-mon[301710]: pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:20.539 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:20 np0005541914.localdomain sudo[313239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:06:20 np0005541914.localdomain sudo[313239]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:06:20 np0005541914.localdomain sudo[313239]: pam_unix(sudo:session): session closed for user root
Dec 02 10:06:20 np0005541914.localdomain sudo[313257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:06:20 np0005541914.localdomain sudo[313257]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:06:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v213: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:21 np0005541914.localdomain sudo[313257]: pam_unix(sudo:session): session closed for user root
Dec 02 10:06:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:06:21 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:06:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:06:21 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:06:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:06:21 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 2758bebc-bcf8-46ea-bb25-bcf4ca2ba5c2 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:06:21 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 2758bebc-bcf8-46ea-bb25-bcf4ca2ba5c2 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:06:21 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 2758bebc-bcf8-46ea-bb25-bcf4ca2ba5c2 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:06:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:06:21 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:06:21 np0005541914.localdomain sudo[313306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:06:21 np0005541914.localdomain sudo[313306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:06:21 np0005541914.localdomain sudo[313306]: pam_unix(sudo:session): session closed for user root
Dec 02 10:06:21 np0005541914.localdomain ceph-mon[301710]: pgmap v213: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:21 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:06:21 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:06:21 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:06:21 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
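The monitor audit lines above record each dispatched mgr/client command as JSON after cmd=. A minimal sketch, assuming the same exported journal file, of counting the command prefixes dispatched per entity:

    import json
    import re
    from collections import Counter

    AUDIT_RE = re.compile(r"entity='([^']+)' cmd=(\{.*?\}) : dispatch")

    def command_summary(path="journal.txt"):
        # Counts (entity, command prefix) pairs from the monitor audit lines.
        counts = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = AUDIT_RE.search(line)
                if not m:
                    continue
                try:
                    cmd = json.loads(m.group(2))
                except json.JSONDecodeError:
                    continue
                counts[(m.group(1), cmd.get("prefix"))] += 1
        return counts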
Dec 02 10:06:22 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:22.066 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:22 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:06:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:06:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:06:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v214: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:23 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:06:24 np0005541914.localdomain ceph-mon[301710]: pgmap v214: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:06:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v215: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:25 np0005541914.localdomain systemd[1]: tmp-crun.1ztNir.mount: Deactivated successfully.
Dec 02 10:06:25 np0005541914.localdomain podman[313324]: 2025-12-02 10:06:25.084801047 +0000 UTC m=+0.086016006 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:06:25 np0005541914.localdomain podman[313324]: 2025-12-02 10:06:25.092303618 +0000 UTC m=+0.093518567 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:06:25 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:06:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:25.542 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:26 np0005541914.localdomain ceph-mon[301710]: pgmap v215: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:26 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:26.457 2 INFO neutron.agent.securitygroups_rpc [None req-55db991b-3b52-42dd-b07a-80ab3fefc470 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['72de153d-340c-4642-ae21-72dcd91d8ceb']
Dec 02 10:06:26 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:26.625 2 INFO neutron.agent.securitygroups_rpc [None req-3231b6f4-a459-453f-8b66-50672725d94d ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['72de153d-340c-4642-ae21-72dcd91d8ceb']
Dec 02 10:06:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v216: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:27.107 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:06:27 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:06:27.566 262347 INFO neutron.agent.linux.ip_lib [None req-953882f4-31f1-4474-9600-974a5d01de43 - - - - - -] Device tap21e6c00c-53 cannot be used as it has no MAC address
Dec 02 10:06:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:27.583 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:27 np0005541914.localdomain kernel: device tap21e6c00c-53 entered promiscuous mode
Dec 02 10:06:27 np0005541914.localdomain NetworkManager[5967]: <info>  [1764669987.5920] manager: (tap21e6c00c-53): new Generic device (/org/freedesktop/NetworkManager/Devices/28)
Dec 02 10:06:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:27.592 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:27 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:06:27Z|00124|binding|INFO|Claiming lport 21e6c00c-53a4-4738-8a05-387fdaa114da for this chassis.
Dec 02 10:06:27 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:06:27Z|00125|binding|INFO|21e6c00c-53a4-4738-8a05-387fdaa114da: Claiming unknown
Dec 02 10:06:27 np0005541914.localdomain systemd-udevd[313352]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:06:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:27.602 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-e6b3959d-7904-44ed-92bd-ec1be2b402a9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6b3959d-7904-44ed-92bd-ec1be2b402a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23a6fa6a-5ec4-4b90-b32a-df62162296c7, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=21e6c00c-53a4-4738-8a05-387fdaa114da) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:06:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:27.604 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 21e6c00c-53a4-4738-8a05-387fdaa114da in datapath e6b3959d-7904-44ed-92bd-ec1be2b402a9 bound to our chassis
Dec 02 10:06:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:27.605 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e6b3959d-7904-44ed-92bd-ec1be2b402a9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:06:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:27.606 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e326871f-e926-49a4-a6e3-aae5278a370c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:06:27 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:06:27Z|00126|binding|INFO|Setting lport 21e6c00c-53a4-4738-8a05-387fdaa114da ovn-installed in OVS
Dec 02 10:06:27 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:06:27Z|00127|binding|INFO|Setting lport 21e6c00c-53a4-4738-8a05-387fdaa114da up in Southbound
Dec 02 10:06:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:27.635 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:27.676 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:27.703 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:27 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:27.732 2 INFO neutron.agent.securitygroups_rpc [None req-c8b7833a-90ae-4c85-a76b-1cc28b8de3de ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 02 10:06:28 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:28.064 2 INFO neutron.agent.securitygroups_rpc [None req-7e69b2ff-a5bb-4d6e-b782-47cf6529c7b2 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 02 10:06:28 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:28.202 2 INFO neutron.agent.securitygroups_rpc [None req-01786b72-4967-4664-b0a8-c04c5ee043aa ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 02 10:06:28 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:28.410 2 INFO neutron.agent.securitygroups_rpc [None req-660a054c-2d13-4375-bdeb-e4f5c9661010 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 02 10:06:28 np0005541914.localdomain ceph-mon[301710]: pgmap v216: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:28 np0005541914.localdomain podman[313405]: 
Dec 02 10:06:28 np0005541914.localdomain podman[313405]: 2025-12-02 10:06:28.488287283 +0000 UTC m=+0.086878963 container create 70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b3959d-7904-44ed-92bd-ec1be2b402a9, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 02 10:06:28 np0005541914.localdomain systemd[1]: Started libpod-conmon-70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691.scope.
Dec 02 10:06:28 np0005541914.localdomain podman[313405]: 2025-12-02 10:06:28.445443125 +0000 UTC m=+0.044034845 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:06:28 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:06:28 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/537a4fb83ba17556b8b20d1c5f8dbe5ca7c6131c1a44ffe17c98b82654d3b13f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:06:28 np0005541914.localdomain podman[313405]: 2025-12-02 10:06:28.560987378 +0000 UTC m=+0.159579058 container init 70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b3959d-7904-44ed-92bd-ec1be2b402a9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:06:28 np0005541914.localdomain podman[313405]: 2025-12-02 10:06:28.5747043 +0000 UTC m=+0.173295970 container start 70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b3959d-7904-44ed-92bd-ec1be2b402a9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 02 10:06:28 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:06:28Z|00128|binding|INFO|Removing iface tap21e6c00c-53 ovn-installed in OVS
Dec 02 10:06:28 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:06:28Z|00129|binding|INFO|Removing lport 21e6c00c-53a4-4738-8a05-387fdaa114da ovn-installed in OVS
Dec 02 10:06:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:28.577 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port faf58706-f1ff-4f07-97fb-9bba4fb63b23 with type ""
Dec 02 10:06:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:28.579 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-e6b3959d-7904-44ed-92bd-ec1be2b402a9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6b3959d-7904-44ed-92bd-ec1be2b402a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=23a6fa6a-5ec4-4b90-b32a-df62162296c7, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=21e6c00c-53a4-4738-8a05-387fdaa114da) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:06:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:28.581 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 21e6c00c-53a4-4738-8a05-387fdaa114da in datapath e6b3959d-7904-44ed-92bd-ec1be2b402a9 unbound from our chassis
Dec 02 10:06:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:28.581 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e6b3959d-7904-44ed-92bd-ec1be2b402a9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:06:28 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:28.582 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ad0850-8a86-4899-ada1-a61fd4310b6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:06:28 np0005541914.localdomain dnsmasq[313423]: started, version 2.85 cachesize 150
Dec 02 10:06:28 np0005541914.localdomain dnsmasq[313423]: DNS service limited to local subnets
Dec 02 10:06:28 np0005541914.localdomain dnsmasq[313423]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:06:28 np0005541914.localdomain dnsmasq[313423]: warning: no upstream servers configured
Dec 02 10:06:28 np0005541914.localdomain dnsmasq-dhcp[313423]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 02 10:06:28 np0005541914.localdomain dnsmasq[313423]: read /var/lib/neutron/dhcp/e6b3959d-7904-44ed-92bd-ec1be2b402a9/addn_hosts - 0 addresses
Dec 02 10:06:28 np0005541914.localdomain dnsmasq-dhcp[313423]: read /var/lib/neutron/dhcp/e6b3959d-7904-44ed-92bd-ec1be2b402a9/host
Dec 02 10:06:28 np0005541914.localdomain dnsmasq-dhcp[313423]: read /var/lib/neutron/dhcp/e6b3959d-7904-44ed-92bd-ec1be2b402a9/opts
Dec 02 10:06:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:28.615 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:28 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:28.616 2 INFO neutron.agent.securitygroups_rpc [None req-62dae022-f3c9-4be5-80ba-7d0557eedc03 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 02 10:06:28 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:06:28.744 262347 INFO neutron.agent.dhcp.agent [None req-7f95caa9-976b-43ee-b5a8-51c6575fd272 - - - - - -] DHCP configuration for ports {'5b49bd3c-cffb-4469-b050-cedaa6445f9f'} is completed
Dec 02 10:06:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:28.846 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:28 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:28.889 2 INFO neutron.agent.securitygroups_rpc [None req-ec75e4da-10e7-4730-8e6b-71c48e471048 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 02 10:06:28 np0005541914.localdomain podman[313439]: 2025-12-02 10:06:28.90312743 +0000 UTC m=+0.063916957 container kill 70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b3959d-7904-44ed-92bd-ec1be2b402a9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:06:28 np0005541914.localdomain dnsmasq[313423]: exiting on receipt of SIGTERM
Dec 02 10:06:28 np0005541914.localdomain systemd[1]: libpod-70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691.scope: Deactivated successfully.
Dec 02 10:06:28 np0005541914.localdomain podman[313452]: 2025-12-02 10:06:28.95127448 +0000 UTC m=+0.033609564 container died 70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b3959d-7904-44ed-92bd-ec1be2b402a9, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 02 10:06:28 np0005541914.localdomain podman[313452]: 2025-12-02 10:06:28.994238272 +0000 UTC m=+0.076573376 container remove 70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e6b3959d-7904-44ed-92bd-ec1be2b402a9, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 02 10:06:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:29.005 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:29 np0005541914.localdomain kernel: device tap21e6c00c-53 left promiscuous mode
Dec 02 10:06:29 np0005541914.localdomain systemd[1]: libpod-conmon-70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691.scope: Deactivated successfully.
Dec 02 10:06:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:29.019 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v217: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:29 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:06:29.047 262347 INFO neutron.agent.dhcp.agent [None req-1f322521-9352-46a5-b051-e7de73180fca - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:06:29 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:06:29.047 262347 INFO neutron.agent.dhcp.agent [None req-1f322521-9352-46a5-b051-e7de73180fca - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:06:29 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:29.257 2 INFO neutron.agent.securitygroups_rpc [None req-405af18e-31af-407d-8854-f380b293accc ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 02 10:06:29 np0005541914.localdomain systemd[1]: tmp-crun.2A2M0W.mount: Deactivated successfully.
Dec 02 10:06:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-537a4fb83ba17556b8b20d1c5f8dbe5ca7c6131c1a44ffe17c98b82654d3b13f-merged.mount: Deactivated successfully.
Dec 02 10:06:29 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-70a39957ac2f84b3bf60ff0c60dadcb03046c3355e9d5e3f1118090f6b2eb691-userdata-shm.mount: Deactivated successfully.
Dec 02 10:06:29 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2de6b3959d\x2d7904\x2d44ed\x2d92bd\x2dec1be2b402a9.mount: Deactivated successfully.
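The sequence above shows a DHCP tap port being created, claimed by ovn-controller, and torn down again roughly a second later. A minimal sketch, assuming the same exported journal file, of pairing the Claiming/Removing lport messages to measure how long each logical port stayed bound:

    import re
    from datetime import datetime

    BIND_RE = re.compile(
        r"(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})Z\|\d+\|binding\|INFO\|"
        r"(Claiming|Removing) lport ([0-9a-f-]+)"
    )

    def binding_durations(path="journal.txt"):
        # Yields (lport_uuid, seconds_bound) for ports claimed and later removed.
        claimed = {}
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = BIND_RE.search(line)
                if not m:
                    continue
                ts = datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%S")
                action, lport = m.group(2), m.group(3)
                if action == "Claiming":
                    claimed[lport] = ts
                elif lport in claimed:
                    yield lport, (ts - claimed.pop(lport)).total_seconds()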
Dec 02 10:06:29 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:29.524 2 INFO neutron.agent.securitygroups_rpc [None req-f8e3c35c-c137-4121-a943-a4b83494d8a2 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 02 10:06:30 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:30.179 2 INFO neutron.agent.securitygroups_rpc [None req-9f2bf60d-db80-42ad-806a-1445118d8a03 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 02 10:06:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:30.544 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:30 np0005541914.localdomain ceph-mon[301710]: pgmap v217: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:30 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:30.786 2 INFO neutron.agent.securitygroups_rpc [None req-9a1f4b78-6988-4ca6-b0f0-b52a2438af33 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['df2c54da-38ba-4edc-acd1-4c6b2da63f7a']
Dec 02 10:06:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v218: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:31 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:31.765 2 INFO neutron.agent.securitygroups_rpc [None req-1c27ab60-bacc-4e9e-b1c9-d6f9cf0e1b32 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['8ebd5526-cfd6-4dd0-8888-3d40098feb1a']
Dec 02 10:06:32 np0005541914.localdomain ceph-mon[301710]: pgmap v218: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:32.142 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:06:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v219: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:33 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/705508170' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:06:33 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/705508170' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:06:33 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:33.364 2 INFO neutron.agent.securitygroups_rpc [None req-ec9364dd-dd89-44e2-a668-097e6474f1a7 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['f835d0d9-69c7-416b-b19f-71e98abbea19']
Dec 02 10:06:33 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:33.624 2 INFO neutron.agent.securitygroups_rpc [None req-37778547-7b0e-4196-bc87-08fdc55b8adf ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['f835d0d9-69c7-416b-b19f-71e98abbea19']
Dec 02 10:06:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:06:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:06:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:06:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:06:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:06:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19213 "" "Go-http-client/1.1"
Dec 02 10:06:33 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:33.899 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:06:33 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:33.900 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:06:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:33.950 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:34 np0005541914.localdomain ceph-mon[301710]: pgmap v219: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v220: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:35.546 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:36 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:36.119 2 INFO neutron.agent.securitygroups_rpc [None req-4d405741-5ff5-4de3-bee3-cffdae397b25 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['a13484c0-648a-48f0-a8cb-29cdca97e066']
Dec 02 10:06:36 np0005541914.localdomain ceph-mon[301710]: pgmap v220: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:36 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:36.314 2 INFO neutron.agent.securitygroups_rpc [None req-b1eefb6d-0b2e-4576-bbe4-d313eb7d9799 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['a13484c0-648a-48f0-a8cb-29cdca97e066']
Dec 02 10:06:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:06:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:06:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:06:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:06:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:06:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:06:36 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:36.982 2 INFO neutron.agent.securitygroups_rpc [None req-9e362cac-bb61-4369-ad00-9f073d908c17 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']
Dec 02 10:06:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v221: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:37.174 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e115 e115: 6 total, 6 up, 6 in
Dec 02 10:06:37 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:37.404 2 INFO neutron.agent.securitygroups_rpc [None req-24128e00-40c9-486f-b5be-6d4fcff90c40 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']
Dec 02 10:06:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:06:37 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:37.944 2 INFO neutron.agent.securitygroups_rpc [None req-c72c5dd9-5995-4df5-a6ad-e067a8fdaf10 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']
Dec 02 10:06:38 np0005541914.localdomain ceph-mon[301710]: pgmap v221: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:06:38 np0005541914.localdomain ceph-mon[301710]: osdmap e115: 6 total, 6 up, 6 in
Dec 02 10:06:38 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:38.315 2 INFO neutron.agent.securitygroups_rpc [None req-1079cc4d-64e8-4687-8a8b-22fa9980bbe7 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']
Dec 02 10:06:38 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:38.598 2 INFO neutron.agent.securitygroups_rpc [None req-37425f6d-3678-4ce3-8643-3e657bae2eff ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']
Dec 02 10:06:38 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:38.920 2 INFO neutron.agent.securitygroups_rpc [None req-5dbcc1aa-f594-4336-8b87-6f8ec002ed0b ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['0d1aa800-00f4-4e0d-be41-caba26c873bd']
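The SR-IOV agent logs one line per security-group rule or member notification it receives over RPC. A minimal sketch, assuming the same exported journal file, of counting notifications per security-group UUID:

    import re
    from collections import Counter

    SG_RE = re.compile(r"Security group (rule|member) updated \['([0-9a-f-]+)'\]")

    def sg_update_counts(path="journal.txt"):
        # Counts rule/member update notifications per security-group UUID.
        counts = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = SG_RE.search(line)
                if m:
                    counts[(m.group(2), m.group(1))] += 1
        return counts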
Dec 02 10:06:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v223: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 02 10:06:39 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e116 e116: 6 total, 6 up, 6 in
Dec 02 10:06:40 np0005541914.localdomain ceph-mon[301710]: pgmap v223: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Dec 02 10:06:40 np0005541914.localdomain ceph-mon[301710]: osdmap e116: 6 total, 6 up, 6 in
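The pgmap lines repeat the same cluster capacity summary every couple of seconds. A minimal sketch, assuming the same exported journal file, of extracting the data/used/avail figures from them:

    import re

    PGMAP_RE = re.compile(
        r"pgmap v(\d+): (\d+) pgs: .*?; ([\d.]+ \w+) data, ([\d.]+ \w+) used, "
        r"([\d.]+ \w+) / ([\d.]+ \w+) avail"
    )

    def pgmap_samples(path="journal.txt"):
        # Yields (version, pg_count, data, used, avail, total) per pgmap line, units as logged.
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = PGMAP_RE.search(line)
                if m:
                    v, pgs, data, used, avail, total = m.groups()
                    yield int(v), int(pgs), data, used, avail, total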
Dec 02 10:06:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:40.548 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:40 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:06:40.868 2 INFO neutron.agent.securitygroups_rpc [None req-bbaaa1af-170e-49f9-ab51-fca299624b09 ed3b4dffeb0d4a4f93cbf0470d2fba06 1512807ff8de4caaab2cbe4666784e7d - - default default] Security group rule updated ['ef31c17a-e7e0-47e3-9c93-83c68ae18a93']
Dec 02 10:06:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:06:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:06:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:06:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:06:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v225: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Dec 02 10:06:41 np0005541914.localdomain podman[313483]: 2025-12-02 10:06:41.143789801 +0000 UTC m=+0.134265190 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 02 10:06:41 np0005541914.localdomain podman[313479]: 2025-12-02 10:06:41.10897326 +0000 UTC m=+0.108650811 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 10:06:41 np0005541914.localdomain podman[313479]: 2025-12-02 10:06:41.192832789 +0000 UTC m=+0.192510370 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 10:06:41 np0005541914.localdomain podman[313483]: 2025-12-02 10:06:41.207279863 +0000 UTC m=+0.197755272 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Dec 02 10:06:41 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:06:41 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:06:41 np0005541914.localdomain podman[313480]: 2025-12-02 10:06:41.203104385 +0000 UTC m=+0.198271698 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:06:41 np0005541914.localdomain podman[313480]: 2025-12-02 10:06:41.2842161 +0000 UTC m=+0.279383393 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:06:41 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:06:41 np0005541914.localdomain podman[313481]: 2025-12-02 10:06:41.307190526 +0000 UTC m=+0.299603035 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute)
Dec 02 10:06:41 np0005541914.localdomain podman[313481]: 2025-12-02 10:06:41.316766421 +0000 UTC m=+0.309178920 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 02 10:06:41 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:06:41 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:06:41.902 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:06:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e117 e117: 6 total, 6 up, 6 in
Dec 02 10:06:42 np0005541914.localdomain ceph-mon[301710]: pgmap v225: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Dec 02 10:06:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:06:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:06:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:06:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:06:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:06:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:06:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:06:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:06:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:06:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:06:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:42.176 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:06:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v227: 177 pgs: 177 active+clean; 209 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 11 MiB/s wr, 136 op/s
Dec 02 10:06:43 np0005541914.localdomain ceph-mon[301710]: osdmap e117: 6 total, 6 up, 6 in
Dec 02 10:06:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e118 e118: 6 total, 6 up, 6 in
Dec 02 10:06:44 np0005541914.localdomain ceph-mon[301710]: pgmap v227: 177 pgs: 177 active+clean; 209 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 11 MiB/s wr, 136 op/s
Dec 02 10:06:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v229: 177 pgs: 177 active+clean; 209 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 11 MiB/s wr, 111 op/s
Dec 02 10:06:45 np0005541914.localdomain ceph-mon[301710]: osdmap e118: 6 total, 6 up, 6 in
Dec 02 10:06:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e119 e119: 6 total, 6 up, 6 in
Dec 02 10:06:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:45.551 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:46 np0005541914.localdomain ceph-mon[301710]: pgmap v229: 177 pgs: 177 active+clean; 209 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 11 MiB/s wr, 111 op/s
Dec 02 10:06:46 np0005541914.localdomain ceph-mon[301710]: osdmap e119: 6 total, 6 up, 6 in
Dec 02 10:06:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v231: 177 pgs: 177 active+clean; 209 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 11 MiB/s wr, 111 op/s
Dec 02 10:06:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:47.209 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e120 e120: 6 total, 6 up, 6 in
Dec 02 10:06:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:06:48 np0005541914.localdomain ceph-mon[301710]: pgmap v231: 177 pgs: 177 active+clean; 209 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 11 MiB/s wr, 111 op/s
Dec 02 10:06:48 np0005541914.localdomain ceph-mon[301710]: osdmap e120: 6 total, 6 up, 6 in
Dec 02 10:06:48 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e121 e121: 6 total, 6 up, 6 in
Dec 02 10:06:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v234: 177 pgs: 177 active+clean; 145 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 152 KiB/s rd, 9.8 MiB/s wr, 216 op/s
Dec 02 10:06:49 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e122 e122: 6 total, 6 up, 6 in
Dec 02 10:06:49 np0005541914.localdomain ceph-mon[301710]: osdmap e121: 6 total, 6 up, 6 in
Dec 02 10:06:49 np0005541914.localdomain sshd[313565]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:06:49 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:06:50 np0005541914.localdomain podman[313567]: 2025-12-02 10:06:50.092565678 +0000 UTC m=+0.090552395 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350)
Dec 02 10:06:50 np0005541914.localdomain podman[313567]: 2025-12-02 10:06:50.105875378 +0000 UTC m=+0.103862125 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., config_id=edpm, release=1755695350, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9)
Dec 02 10:06:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:06:50 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:06:50 np0005541914.localdomain podman[313585]: 2025-12-02 10:06:50.188274371 +0000 UTC m=+0.069770906 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:06:50 np0005541914.localdomain sshd[313565]: Invalid user ubuntu from 193.32.162.146 port 47686
Dec 02 10:06:50 np0005541914.localdomain podman[313585]: 2025-12-02 10:06:50.226014562 +0000 UTC m=+0.107511057 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:06:50 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:06:50 np0005541914.localdomain ceph-mon[301710]: pgmap v234: 177 pgs: 177 active+clean; 145 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 152 KiB/s rd, 9.8 MiB/s wr, 216 op/s
Dec 02 10:06:50 np0005541914.localdomain ceph-mon[301710]: osdmap e122: 6 total, 6 up, 6 in
Dec 02 10:06:50 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e123 e123: 6 total, 6 up, 6 in
Dec 02 10:06:50 np0005541914.localdomain sshd[313565]: Connection closed by invalid user ubuntu 193.32.162.146 port 47686 [preauth]
Dec 02 10:06:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:50.553 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:51 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e124 e124: 6 total, 6 up, 6 in
Dec 02 10:06:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v238: 177 pgs: 177 active+clean; 145 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 196 KiB/s rd, 13 MiB/s wr, 277 op/s
Dec 02 10:06:51 np0005541914.localdomain ceph-mon[301710]: osdmap e123: 6 total, 6 up, 6 in
Dec 02 10:06:51 np0005541914.localdomain ceph-mon[301710]: osdmap e124: 6 total, 6 up, 6 in
Dec 02 10:06:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:52.260 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:52 np0005541914.localdomain ceph-mon[301710]: pgmap v238: 177 pgs: 177 active+clean; 145 MiB data, 922 MiB used, 41 GiB / 42 GiB avail; 196 KiB/s rd, 13 MiB/s wr, 277 op/s
Dec 02 10:06:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e125 e125: 6 total, 6 up, 6 in
Dec 02 10:06:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:06:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v240: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 87 KiB/s rd, 5.5 KiB/s wr, 118 op/s
Dec 02 10:06:53 np0005541914.localdomain ceph-mon[301710]: osdmap e125: 6 total, 6 up, 6 in
Dec 02 10:06:54 np0005541914.localdomain ceph-mon[301710]: pgmap v240: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 87 KiB/s rd, 5.5 KiB/s wr, 118 op/s
Dec 02 10:06:54 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e126 e126: 6 total, 6 up, 6 in
Dec 02 10:06:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v242: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 4.6 KiB/s wr, 99 op/s
Dec 02 10:06:55 np0005541914.localdomain ceph-mon[301710]: osdmap e126: 6 total, 6 up, 6 in
Dec 02 10:06:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:55.556 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:06:56 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e127 e127: 6 total, 6 up, 6 in
Dec 02 10:06:56 np0005541914.localdomain podman[313608]: 2025-12-02 10:06:56.074949272 +0000 UTC m=+0.076000178 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:06:56 np0005541914.localdomain podman[313608]: 2025-12-02 10:06:56.087747635 +0000 UTC m=+0.088798481 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:06:56 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:06:56 np0005541914.localdomain ceph-mon[301710]: pgmap v242: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 4.6 KiB/s wr, 99 op/s
Dec 02 10:06:56 np0005541914.localdomain ceph-mon[301710]: osdmap e127: 6 total, 6 up, 6 in
Dec 02 10:06:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v244: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 3.7 KiB/s wr, 78 op/s
Dec 02 10:06:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:57.263 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:06:57 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2616537687' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:06:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e128 e128: 6 total, 6 up, 6 in
Dec 02 10:06:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:06:58 np0005541914.localdomain ceph-mon[301710]: pgmap v244: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 3.7 KiB/s wr, 78 op/s
Dec 02 10:06:58 np0005541914.localdomain ceph-mon[301710]: osdmap e128: 6 total, 6 up, 6 in
Dec 02 10:06:58 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3420145848' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:06:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v246: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Dec 02 10:06:59 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e129 e129: 6 total, 6 up, 6 in
Dec 02 10:06:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:59.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:06:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:06:59.529 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:07:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:00.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:07:00 np0005541914.localdomain ceph-mon[301710]: pgmap v246: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Dec 02 10:07:00 np0005541914.localdomain ceph-mon[301710]: osdmap e129: 6 total, 6 up, 6 in
Dec 02 10:07:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:00.559 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v248: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Dec 02 10:07:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e130 e130: 6 total, 6 up, 6 in
Dec 02 10:07:02 np0005541914.localdomain ceph-mon[301710]: pgmap v248: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 2.7 KiB/s wr, 56 op/s
Dec 02 10:07:02 np0005541914.localdomain ceph-mon[301710]: osdmap e130: 6 total, 6 up, 6 in
Dec 02 10:07:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:02.299 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:07:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:02.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:07:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v250: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 4.2 KiB/s wr, 91 op/s
Dec 02 10:07:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e131 e131: 6 total, 6 up, 6 in
Dec 02 10:07:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:03.177 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:07:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:03.178 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:07:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:03.178 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:07:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:03.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:07:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:03.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:07:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:03.549 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:07:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:03.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:07:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:03.551 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:07:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:03.551 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:07:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:03.551 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:07:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:07:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:07:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:07:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:07:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:07:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19213 "" "Go-http-client/1.1"
Dec 02 10:07:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:07:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/7662486' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:07:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:04.032 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:07:04 np0005541914.localdomain ceph-mon[301710]: pgmap v250: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 4.2 KiB/s wr, 91 op/s
Dec 02 10:07:04 np0005541914.localdomain ceph-mon[301710]: osdmap e131: 6 total, 6 up, 6 in
Dec 02 10:07:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/7662486' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:07:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:04.273 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:07:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:04.275 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11538MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:07:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:04.275 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:07:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:04.275 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:07:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:04.357 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:07:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:04.358 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:07:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:04.384 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:07:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:07:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/38028727' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:07:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:07:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/38028727' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:07:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:07:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2800528967' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:07:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:04.868 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:07:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:04.875 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:07:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:04.948 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:07:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:04.951 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:07:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:04.951 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:07:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v252: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Dec 02 10:07:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2367667588' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:07:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/38028727' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:07:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/38028727' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:07:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2800528967' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:07:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3029616690' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:07:05 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e132 e132: 6 total, 6 up, 6 in
Dec 02 10:07:05 np0005541914.localdomain systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 02 10:07:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:05.562 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:05.953 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:07:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:05.954 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:07:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:05.955 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:07:06 np0005541914.localdomain ceph-mon[301710]: pgmap v252: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Dec 02 10:07:06 np0005541914.localdomain ceph-mon[301710]: osdmap e132: 6 total, 6 up, 6 in
Dec 02 10:07:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:06.243 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:07:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:06.245 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:07:06 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:06.862 262347 INFO neutron.agent.linux.ip_lib [None req-584f5906-6980-460f-861a-05b08d948459 - - - - - -] Device tap03c51554-b0 cannot be used as it has no MAC address
Dec 02 10:07:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:07:06
Dec 02 10:07:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:07:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:07:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['images', 'manila_metadata', '.mgr', 'backups', 'volumes', 'manila_data', 'vms']
Dec 02 10:07:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:07:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:06.937 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:06 np0005541914.localdomain kernel: device tap03c51554-b0 entered promiscuous mode
Dec 02 10:07:06 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670026.9468] manager: (tap03c51554-b0): new Generic device (/org/freedesktop/NetworkManager/Devices/29)
Dec 02 10:07:06 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:07:06Z|00130|binding|INFO|Claiming lport 03c51554-b0d7-401d-888a-0d8ea49e9e4d for this chassis.
Dec 02 10:07:06 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:07:06Z|00131|binding|INFO|03c51554-b0d7-401d-888a-0d8ea49e9e4d: Claiming unknown
Dec 02 10:07:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:06.946 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:06 np0005541914.localdomain systemd-udevd[313682]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:07:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:07:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:07:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:07:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:07:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:07:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:07:06 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:06.969 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-0e36d60b-6052-4646-b258-2b7e0612d401', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e36d60b-6052-4646-b258-2b7e0612d401', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1b0b592-a188-42df-896f-4a5386fe2db9, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=03c51554-b0d7-401d-888a-0d8ea49e9e4d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:07:06 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:06.970 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 03c51554-b0d7-401d-888a-0d8ea49e9e4d in datapath 0e36d60b-6052-4646-b258-2b7e0612d401 bound to our chassis
Dec 02 10:07:06 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:06.972 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2324623e-fa65-4407-965f-e7d64ab2f4e6 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 02 10:07:06 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:06.972 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e36d60b-6052-4646-b258-2b7e0612d401, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:07:06 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:06.973 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[8316f13e-3dd3-4b24-9852-53741d61103f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:07:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap03c51554-b0: No such device
Dec 02 10:07:06 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:07:06Z|00132|binding|INFO|Setting lport 03c51554-b0d7-401d-888a-0d8ea49e9e4d ovn-installed in OVS
Dec 02 10:07:06 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:07:06Z|00133|binding|INFO|Setting lport 03c51554-b0d7-401d-888a-0d8ea49e9e4d up in Southbound
Dec 02 10:07:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap03c51554-b0: No such device
Dec 02 10:07:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:06.987 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap03c51554-b0: No such device
Dec 02 10:07:07 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap03c51554-b0: No such device
Dec 02 10:07:07 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap03c51554-b0: No such device
Dec 02 10:07:07 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap03c51554-b0: No such device
Dec 02 10:07:07 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap03c51554-b0: No such device
Dec 02 10:07:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:07.023 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:07 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap03c51554-b0: No such device
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v254: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Dec 02 10:07:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:07.057 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021774090359203424 quantized to 32 (current 32)
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.001953125 quantized to 16 (current 16)
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:07:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:07:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:07.302 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/786304527' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:07:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/786304527' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:07:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:07:08 np0005541914.localdomain podman[313753]: 2025-12-02 10:07:08.15847778 +0000 UTC m=+0.084862651 container create c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 02 10:07:08 np0005541914.localdomain systemd[1]: Started libpod-conmon-c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf.scope.
Dec 02 10:07:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:08.206 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 2324623e-fa65-4407-965f-e7d64ab2f4e6 with type ""
Dec 02 10:07:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:08.208 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-0e36d60b-6052-4646-b258-2b7e0612d401', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0e36d60b-6052-4646-b258-2b7e0612d401', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '21d4d3b48096450197194eed29ad68df', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d1b0b592-a188-42df-896f-4a5386fe2db9, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=03c51554-b0d7-401d-888a-0d8ea49e9e4d) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:07:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:07:08Z|00134|binding|INFO|Removing iface tap03c51554-b0 ovn-installed in OVS
Dec 02 10:07:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:07:08Z|00135|binding|INFO|Removing lport 03c51554-b0d7-401d-888a-0d8ea49e9e4d ovn-installed in OVS
Dec 02 10:07:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:08.212 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 03c51554-b0d7-401d-888a-0d8ea49e9e4d in datapath 0e36d60b-6052-4646-b258-2b7e0612d401 unbound from our chassis
Dec 02 10:07:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:08.214 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0e36d60b-6052-4646-b258-2b7e0612d401, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:07:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:08.215 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[c97e17a5-ad00-4ece-b179-1eeaf242154b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:07:08 np0005541914.localdomain podman[313753]: 2025-12-02 10:07:08.116727246 +0000 UTC m=+0.043112137 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:07:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:08.247 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:08 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:07:08 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6410f4a3cbc4419ce0d2d2f282ad5265db5a535e7cfdaa1501af2234dd5190fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:07:08 np0005541914.localdomain podman[313753]: 2025-12-02 10:07:08.280678078 +0000 UTC m=+0.207062949 container init c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 10:07:08 np0005541914.localdomain podman[313753]: 2025-12-02 10:07:08.292990786 +0000 UTC m=+0.219375667 container start c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:07:08 np0005541914.localdomain dnsmasq[313771]: started, version 2.85 cachesize 150
Dec 02 10:07:08 np0005541914.localdomain dnsmasq[313771]: DNS service limited to local subnets
Dec 02 10:07:08 np0005541914.localdomain dnsmasq[313771]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:07:08 np0005541914.localdomain dnsmasq[313771]: warning: no upstream servers configured
Dec 02 10:07:08 np0005541914.localdomain dnsmasq-dhcp[313771]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 02 10:07:08 np0005541914.localdomain dnsmasq[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/addn_hosts - 0 addresses
Dec 02 10:07:08 np0005541914.localdomain dnsmasq-dhcp[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/host
Dec 02 10:07:08 np0005541914.localdomain dnsmasq-dhcp[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/opts
Dec 02 10:07:08 np0005541914.localdomain ceph-mon[301710]: pgmap v254: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 34 op/s
Dec 02 10:07:08 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:08.508 262347 INFO neutron.agent.dhcp.agent [None req-0eab09cc-2c3f-4379-b687-e0dfb32c385f - - - - - -] DHCP configuration for ports {'6bb8688d-8be9-4786-8741-458afd004055'} is completed
Dec 02 10:07:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:08.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:07:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:08.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:07:08 np0005541914.localdomain dnsmasq[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/addn_hosts - 0 addresses
Dec 02 10:07:08 np0005541914.localdomain podman[313787]: 2025-12-02 10:07:08.656083343 +0000 UTC m=+0.050325709 container kill c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 02 10:07:08 np0005541914.localdomain dnsmasq-dhcp[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/host
Dec 02 10:07:08 np0005541914.localdomain dnsmasq-dhcp[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/opts
Dec 02 10:07:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:08.810 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:08 np0005541914.localdomain kernel: device tap03c51554-b0 left promiscuous mode
Dec 02 10:07:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:08.830 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v255: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 2.4 KiB/s wr, 61 op/s
Dec 02 10:07:09 np0005541914.localdomain systemd[1]: tmp-crun.mujDJm.mount: Deactivated successfully.
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.554 262347 INFO neutron.agent.dhcp.agent [None req-03e95dfb-f9cb-4bdd-956a-d5a5fc422372 - - - - - -] DHCP configuration for ports {'6bb8688d-8be9-4786-8741-458afd004055'} is completed
Dec 02 10:07:09 np0005541914.localdomain dnsmasq[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/addn_hosts - 0 addresses
Dec 02 10:07:09 np0005541914.localdomain dnsmasq-dhcp[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/host
Dec 02 10:07:09 np0005541914.localdomain dnsmasq-dhcp[313771]: read /var/lib/neutron/dhcp/0e36d60b-6052-4646-b258-2b7e0612d401/opts
Dec 02 10:07:09 np0005541914.localdomain podman[313827]: 2025-12-02 10:07:09.736197498 +0000 UTC m=+0.063406310 container kill c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent [None req-4e182ea0-0e96-4cf5-b907-fabc3c917168 - - - - - -] Unable to reload_allocations dhcp for 0e36d60b-6052-4646-b258-2b7e0612d401.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap03c51554-b0 not found in namespace qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401.
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     return fut.result()
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     raise self._exception
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap03c51554-b0 not found in namespace qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401.
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.757 262347 ERROR neutron.agent.dhcp.agent 
Dec 02 10:07:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:09.761 262347 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Dec 02 10:07:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:09.978 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:10 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.041 262347 INFO neutron.agent.dhcp.agent [None req-633af77c-d730-4d72-9be7-502ca6237d88 - - - - - -] All active networks have been fetched through RPC.
Dec 02 10:07:10 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.042 262347 INFO neutron.agent.dhcp.agent [-] Starting network 0e36d60b-6052-4646-b258-2b7e0612d401 dhcp configuration
Dec 02 10:07:10 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.044 262347 INFO neutron.agent.dhcp.agent [-] Finished network 0e36d60b-6052-4646-b258-2b7e0612d401 dhcp configuration
Dec 02 10:07:10 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.044 262347 INFO neutron.agent.dhcp.agent [-] Starting network 10e86610-feac-4352-ad95-9bedaf95124c dhcp configuration
Dec 02 10:07:10 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.044 262347 INFO neutron.agent.dhcp.agent [-] Finished network 10e86610-feac-4352-ad95-9bedaf95124c dhcp configuration
Dec 02 10:07:10 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.044 262347 INFO neutron.agent.dhcp.agent [None req-633af77c-d730-4d72-9be7-502ca6237d88 - - - - - -] Synchronizing state complete
Dec 02 10:07:10 np0005541914.localdomain dnsmasq[313771]: exiting on receipt of SIGTERM
Dec 02 10:07:10 np0005541914.localdomain podman[313857]: 2025-12-02 10:07:10.228603251 +0000 UTC m=+0.057464328 container kill c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:07:10 np0005541914.localdomain systemd[1]: libpod-c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf.scope: Deactivated successfully.
Dec 02 10:07:10 np0005541914.localdomain podman[313871]: 2025-12-02 10:07:10.304386762 +0000 UTC m=+0.057360675 container died c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 02 10:07:10 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf-userdata-shm.mount: Deactivated successfully.
Dec 02 10:07:10 np0005541914.localdomain podman[313871]: 2025-12-02 10:07:10.338211462 +0000 UTC m=+0.091185335 container cleanup c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:07:10 np0005541914.localdomain systemd[1]: libpod-conmon-c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf.scope: Deactivated successfully.
Dec 02 10:07:10 np0005541914.localdomain podman[313872]: 2025-12-02 10:07:10.37359415 +0000 UTC m=+0.122297062 container remove c6c2f93b9c8521ec8567baa7eeb3e78c16d6874b9a269cf592797e59ffa85ebf (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0e36d60b-6052-4646-b258-2b7e0612d401, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 02 10:07:10 np0005541914.localdomain ceph-mon[301710]: pgmap v255: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 2.4 KiB/s wr, 61 op/s
Dec 02 10:07:10 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:10.424 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:07:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:10.565 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:10 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:10.772 2 INFO neutron.agent.securitygroups_rpc [None req-f9cb0719-6f1e-4498-ae5c-3d1490c7cf9b c04b0c1b682647b3a235292b9ca1b605 2b57b1fad39449b49cbbffbb5c62906d - - default default] Security group member updated ['dba82d8e-ac81-4899-ab61-fcab2136c60b']
Dec 02 10:07:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v256: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 35 op/s
Dec 02 10:07:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 e133: 6 total, 6 up, 6 in
Dec 02 10:07:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:07:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:07:11 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-6410f4a3cbc4419ce0d2d2f282ad5265db5a535e7cfdaa1501af2234dd5190fd-merged.mount: Deactivated successfully.
Dec 02 10:07:11 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2d0e36d60b\x2d6052\x2d4646\x2db258\x2d2b7e0612d401.mount: Deactivated successfully.
Dec 02 10:07:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 10:07:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.1 total, 600.0 interval
                                                          Cumulative writes: 8408 writes, 34K keys, 8408 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                                          Cumulative WAL: 8408 writes, 2192 syncs, 3.84 writes per sync, written: 0.03 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 3369 writes, 11K keys, 3369 commit groups, 1.0 writes per commit group, ingest: 11.49 MB, 0.02 MB/s
                                                          Interval WAL: 3369 writes, 1442 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 10:07:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:07:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:07:11 np0005541914.localdomain podman[313898]: 2025-12-02 10:07:11.345094427 +0000 UTC m=+0.098109979 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 02 10:07:11 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:11.355 2 INFO neutron.agent.securitygroups_rpc [None req-0dd50eb4-1108-4728-8f57-13a5878ac244 c04b0c1b682647b3a235292b9ca1b605 2b57b1fad39449b49cbbffbb5c62906d - - default default] Security group member updated ['dba82d8e-ac81-4899-ab61-fcab2136c60b']
Dec 02 10:07:11 np0005541914.localdomain systemd[1]: tmp-crun.GFI9zg.mount: Deactivated successfully.
Dec 02 10:07:11 np0005541914.localdomain podman[313898]: 2025-12-02 10:07:11.383062354 +0000 UTC m=+0.136077946 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Dec 02 10:07:11 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:07:11 np0005541914.localdomain podman[313899]: 2025-12-02 10:07:11.385149048 +0000 UTC m=+0.134912700 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:07:11 np0005541914.localdomain podman[313930]: 2025-12-02 10:07:11.450283021 +0000 UTC m=+0.093259959 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:07:11 np0005541914.localdomain podman[313930]: 2025-12-02 10:07:11.461927019 +0000 UTC m=+0.104903977 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:07:11 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:07:11 np0005541914.localdomain podman[313931]: 2025-12-02 10:07:11.510180473 +0000 UTC m=+0.151663235 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 10:07:11 np0005541914.localdomain podman[313899]: 2025-12-02 10:07:11.518167828 +0000 UTC m=+0.267931510 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 02 10:07:11 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:07:11 np0005541914.localdomain podman[313931]: 2025-12-02 10:07:11.575517432 +0000 UTC m=+0.217000214 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125)
Dec 02 10:07:11 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:07:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:07:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:07:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:07:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:07:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:07:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:07:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:07:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:07:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:07:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:07:12 np0005541914.localdomain ceph-mon[301710]: pgmap v256: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 35 op/s
Dec 02 10:07:12 np0005541914.localdomain ceph-mon[301710]: osdmap e133: 6 total, 6 up, 6 in
Dec 02 10:07:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:12.304 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:07:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v258: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 35 op/s
Dec 02 10:07:14 np0005541914.localdomain ceph-mon[301710]: pgmap v258: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 35 op/s
Dec 02 10:07:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v259: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 1.0 KiB/s wr, 28 op/s
Dec 02 10:07:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:15.566 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 10:07:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.2 total, 600.0 interval
                                                          Cumulative writes: 9909 writes, 40K keys, 9909 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                                          Cumulative WAL: 9909 writes, 2429 syncs, 4.08 writes per sync, written: 0.03 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 4031 writes, 14K keys, 4031 commit groups, 1.0 writes per commit group, ingest: 14.71 MB, 0.02 MB/s
                                                          Interval WAL: 4031 writes, 1640 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 10:07:16 np0005541914.localdomain ceph-mon[301710]: pgmap v259: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 1.0 KiB/s wr, 28 op/s
Dec 02 10:07:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v260: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1023 B/s wr, 28 op/s
Dec 02 10:07:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:17.342 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:07:18 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:18.066 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:17Z, description=, device_id=adc10196-e9bc-4c45-94b4-e5bb526e2d9c, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034beffd0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c37dc0>], id=0244b042-c6ca-4971-a425-6ec6d03f8746, ip_allocation=immediate, mac_address=fa:16:3e:58:38:6a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1579, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:17Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:07:18 np0005541914.localdomain ceph-mon[301710]: pgmap v260: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1023 B/s wr, 28 op/s
Dec 02 10:07:18 np0005541914.localdomain systemd[1]: tmp-crun.7PkrO5.mount: Deactivated successfully.
Dec 02 10:07:18 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:07:18 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:18 np0005541914.localdomain podman[314000]: 2025-12-02 10:07:18.286283764 +0000 UTC m=+0.069787607 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:07:18 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:18 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:18.563 262347 INFO neutron.agent.dhcp.agent [None req-0f9094a1-d847-4d51-a354-88a843aa1434 - - - - - -] DHCP configuration for ports {'0244b042-c6ca-4971-a425-6ec6d03f8746'} is completed
Dec 02 10:07:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v261: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:07:20 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:20.069 2 INFO neutron.agent.securitygroups_rpc [None req-0ca3d260-f659-40d6-b699-f106118e6211 f6abbbfcc7d54e81b5693b2401a25e09 5ea39db037534e2087a54e8a052ad24e - - default default] Security group member updated ['377ae0fe-81df-41e0-8ef6-1afd307f6beb']
Dec 02 10:07:20 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:20.098 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:19Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c43160>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034e36f40>], id=9b2c8ab4-2d26-4ee7-86fa-e39f4e601823, ip_allocation=immediate, mac_address=fa:16:3e:7a:be:2a, name=tempest-RoutersAdminNegativeTest-749451820, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=True, project_id=5ea39db037534e2087a54e8a052ad24e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['377ae0fe-81df-41e0-8ef6-1afd307f6beb'], standard_attr_id=1603, status=DOWN, tags=[], tenant_id=5ea39db037534e2087a54e8a052ad24e, updated_at=2025-12-02T10:07:19Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:07:20 np0005541914.localdomain ceph-mon[301710]: pgmap v261: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:07:20 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3564427763' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:07:20 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3564427763' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:07:20 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:07:20 np0005541914.localdomain podman[314035]: 2025-12-02 10:07:20.30257032 +0000 UTC m=+0.050379610 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:07:20 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:20 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:07:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:07:20 np0005541914.localdomain podman[314049]: 2025-12-02 10:07:20.39131365 +0000 UTC m=+0.069282512 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 10:07:20 np0005541914.localdomain podman[314049]: 2025-12-02 10:07:20.397792169 +0000 UTC m=+0.075761031 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:07:20 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:07:20 np0005541914.localdomain podman[314050]: 2025-12-02 10:07:20.436303463 +0000 UTC m=+0.111517921 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Dec 02 10:07:20 np0005541914.localdomain podman[314050]: 2025-12-02 10:07:20.445828796 +0000 UTC m=+0.121043284 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, release=1755695350)
Dec 02 10:07:20 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:07:20 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:20.529 262347 INFO neutron.agent.dhcp.agent [None req-c8c4c1a7-a245-42b3-817a-a671973e1771 - - - - - -] DHCP configuration for ports {'9b2c8ab4-2d26-4ee7-86fa-e39f4e601823'} is completed
Dec 02 10:07:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:20.568 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:20 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:07:20 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:20 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:20 np0005541914.localdomain podman[314111]: 2025-12-02 10:07:20.586809372 +0000 UTC m=+0.061693638 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:07:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v262: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:07:21 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:21.634 2 INFO neutron.agent.securitygroups_rpc [None req-9c96eaf7-e1a3-4805-a32c-c883041fe7ca f6abbbfcc7d54e81b5693b2401a25e09 5ea39db037534e2087a54e8a052ad24e - - default default] Security group member updated ['377ae0fe-81df-41e0-8ef6-1afd307f6beb']
Dec 02 10:07:21 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:07:21 np0005541914.localdomain podman[314150]: 2025-12-02 10:07:21.936527118 +0000 UTC m=+0.061232643 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 10:07:21 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:21 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:21 np0005541914.localdomain sudo[314161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:07:21 np0005541914.localdomain sudo[314161]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:07:21 np0005541914.localdomain sudo[314161]: pam_unix(sudo:session): session closed for user root
Dec 02 10:07:22 np0005541914.localdomain sudo[314183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:07:22 np0005541914.localdomain sudo[314183]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:07:22 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:22.052 262347 INFO neutron.agent.dhcp.agent [None req-49a98392-4bb0-40ef-bc50-4aaeb85b17cc - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:21Z, description=, device_id=faa39e96-d7c3-48ec-b5b0-f4420251b339, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c7ae80>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c7ac70>], id=d178cd50-67bc-475e-bb5c-e4cf16815921, ip_allocation=immediate, mac_address=fa:16:3e:ba:a2:17, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1616, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:21Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:07:22 np0005541914.localdomain ceph-mon[301710]: pgmap v262: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:07:22 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1857178396' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:07:22 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1857178396' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:07:22 np0005541914.localdomain systemd[1]: tmp-crun.q857O1.mount: Deactivated successfully.
Dec 02 10:07:22 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:07:22 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:22 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:22 np0005541914.localdomain podman[314223]: 2025-12-02 10:07:22.223405681 +0000 UTC m=+0.049321008 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 02 10:07:22 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:22.380 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:07:22 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:22.514 262347 INFO neutron.agent.dhcp.agent [None req-dee9b15f-eb72-4d73-84a0-710aeff6d1eb - - - - - -] DHCP configuration for ports {'d178cd50-67bc-475e-bb5c-e4cf16815921'} is completed
Dec 02 10:07:22 np0005541914.localdomain sudo[314183]: pam_unix(sudo:session): session closed for user root
Dec 02 10:07:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:07:22 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:07:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:07:22 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:07:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:07:22 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 02546f0c-828a-4dca-a8d7-cf4771e45a9e (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:07:22 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 02546f0c-828a-4dca-a8d7-cf4771e45a9e (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:07:22 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 02546f0c-828a-4dca-a8d7-cf4771e45a9e (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:07:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:07:22 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:07:23 np0005541914.localdomain sudo[314274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:07:23 np0005541914.localdomain sudo[314274]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:07:23 np0005541914.localdomain sudo[314274]: pam_unix(sudo:session): session closed for user root
Dec 02 10:07:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v263: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s
Dec 02 10:07:23 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:07:23 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:07:23 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:07:23 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:07:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:23.456 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:24 np0005541914.localdomain ceph-mon[301710]: pgmap v263: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s
Dec 02 10:07:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v264: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s
Dec 02 10:07:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:25.571 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:25 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:25.683 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:25Z, description=, device_id=d23c300d-2106-463f-ba69-eebcc6860c57, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034bf8f40>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034bf8be0>], id=07c2ad56-fdc6-41b4-a849-7660e9700481, ip_allocation=immediate, mac_address=fa:16:3e:f0:02:af, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1644, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:25Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:07:25 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:07:25 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:25 np0005541914.localdomain podman[314310]: 2025-12-02 10:07:25.896632362 +0000 UTC m=+0.064911567 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:07:25 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:26 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:26.090 262347 INFO neutron.agent.dhcp.agent [None req-dd692575-9ba4-42be-abc0-fb5009267c2b - - - - - -] DHCP configuration for ports {'07c2ad56-fdc6-41b4-a849-7660e9700481'} is completed
Dec 02 10:07:26 np0005541914.localdomain ceph-mon[301710]: pgmap v264: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s
Dec 02 10:07:26 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3870090034' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:07:26 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3870090034' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:07:26 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:26.317 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:25Z, description=, device_id=7bf2b4c4-6334-4a1f-8be8-1ca6d15e82eb, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cb3ee0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cb35e0>], id=d001dae5-639b-449c-a35c-a7a5c458790f, ip_allocation=immediate, mac_address=fa:16:3e:9b:85:74, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1647, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:25Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:07:26 np0005541914.localdomain podman[314348]: 2025-12-02 10:07:26.695901781 +0000 UTC m=+0.061181313 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:07:26 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 02 10:07:26 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:26 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:07:26 np0005541914.localdomain podman[314361]: 2025-12-02 10:07:26.811909929 +0000 UTC m=+0.085702087 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Dec 02 10:07:26 np0005541914.localdomain podman[314361]: 2025-12-02 10:07:26.830006115 +0000 UTC m=+0.103798283 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 02 10:07:26 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:07:26 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:26.958 262347 INFO neutron.agent.dhcp.agent [None req-191dab6e-2d15-44a9-90a0-31a31133bd5a - - - - - -] DHCP configuration for ports {'d001dae5-639b-449c-a35c-a7a5c458790f'} is completed
Dec 02 10:07:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v265: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s
Dec 02 10:07:27 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:07:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:07:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:27.382 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:07:28 np0005541914.localdomain ceph-mon[301710]: pgmap v265: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 938 B/s wr, 15 op/s
Dec 02 10:07:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:07:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:28.371 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:07:28 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1103887549' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:07:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:07:28 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1103887549' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:07:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v266: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 29 op/s
Dec 02 10:07:29 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1103887549' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:07:29 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1103887549' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:07:30 np0005541914.localdomain ceph-mon[301710]: pgmap v266: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 29 op/s
Dec 02 10:07:30 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:07:30 np0005541914.localdomain podman[314402]: 2025-12-02 10:07:30.295654922 +0000 UTC m=+0.058513300 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:07:30 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:30 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:30.572 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:30.574 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v267: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 29 op/s
Dec 02 10:07:32 np0005541914.localdomain ceph-mon[301710]: pgmap v267: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 29 op/s
Dec 02 10:07:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:32.410 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:07:32 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:32.907 2 INFO neutron.agent.securitygroups_rpc [None req-4959ff7d-aca7-46d0-9143-48c9e561106c c695c8d7887d4f5d99397fbd9a108bd7 27cf39916c5c4bc1833487052acaa22a - - default default] Security group member updated ['202778bd-7cc5-43e0-846c-ad0385193194']
Dec 02 10:07:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v268: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 43 op/s
Dec 02 10:07:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:07:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:07:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:07:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:07:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:07:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19215 "" "Go-http-client/1.1"
Dec 02 10:07:33 np0005541914.localdomain systemd[1]: tmp-crun.iiXWdl.mount: Deactivated successfully.
Dec 02 10:07:33 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:07:33 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:33 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:33 np0005541914.localdomain podman[314440]: 2025-12-02 10:07:33.707865236 +0000 UTC m=+0.105175455 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 10:07:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:33.882 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:34.018 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:07:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:34.019 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:34.021 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:07:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:34.022 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:07:34 np0005541914.localdomain ceph-mon[301710]: pgmap v268: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 43 op/s
Dec 02 10:07:34 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:34.798 2 INFO neutron.agent.securitygroups_rpc [None req-7ec4eb97-1d35-4cb1-ad23-be6283df01c3 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:34 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:34.838 262347 INFO neutron.agent.linux.ip_lib [None req-249a979f-df1d-417d-93da-1cfa4e879ae7 - - - - - -] Device tapc4e3a46c-de cannot be used as it has no MAC address
Dec 02 10:07:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:34.859 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:34 np0005541914.localdomain kernel: device tapc4e3a46c-de entered promiscuous mode
Dec 02 10:07:34 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670054.8674] manager: (tapc4e3a46c-de): new Generic device (/org/freedesktop/NetworkManager/Devices/30)
Dec 02 10:07:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:34.867 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:34 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:07:34Z|00136|binding|INFO|Claiming lport c4e3a46c-de2f-4fec-815f-929a0c5cb506 for this chassis.
Dec 02 10:07:34 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:07:34Z|00137|binding|INFO|c4e3a46c-de2f-4fec-815f-929a0c5cb506: Claiming unknown
Dec 02 10:07:34 np0005541914.localdomain systemd-udevd[314471]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:07:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:34.881 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-f20ca235-57e0-46f5-9e44-df634db1299f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f20ca235-57e0-46f5-9e44-df634db1299f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c83c01183aba40c080a7dde4126b2e3b', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e3578eb-6671-47df-b07e-7931aa192d6f, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=c4e3a46c-de2f-4fec-815f-929a0c5cb506) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:07:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:34.883 159483 INFO neutron.agent.ovn.metadata.agent [-] Port c4e3a46c-de2f-4fec-815f-929a0c5cb506 in datapath f20ca235-57e0-46f5-9e44-df634db1299f bound to our chassis
Dec 02 10:07:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:34.885 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network f20ca235-57e0-46f5-9e44-df634db1299f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:07:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:34.887 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[8102d604-9491-4e1d-8fd0-2a4137c4efbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:07:34 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device
Dec 02 10:07:34 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device
Dec 02 10:07:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:34.902 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:34 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device
Dec 02 10:07:34 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:07:34Z|00138|binding|INFO|Setting lport c4e3a46c-de2f-4fec-815f-929a0c5cb506 ovn-installed in OVS
Dec 02 10:07:34 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:07:34Z|00139|binding|INFO|Setting lport c4e3a46c-de2f-4fec-815f-929a0c5cb506 up in Southbound
Dec 02 10:07:34 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device
Dec 02 10:07:34 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device
Dec 02 10:07:34 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device
Dec 02 10:07:34 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device
Dec 02 10:07:34 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapc4e3a46c-de: No such device
Dec 02 10:07:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:34.950 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:34.974 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:35.002 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v269: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 852 B/s wr, 27 op/s
Dec 02 10:07:35 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:35.311 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:34Z, description=, device_id=ee41f236-3144-44c4-a93c-6155ab400908, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f403553c4f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034d3f5e0>], id=446070b9-f224-43e5-ab20-4688e138137b, ip_allocation=immediate, mac_address=fa:16:3e:98:f8:25, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1701, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:34Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:07:35 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:07:35 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:35 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:35 np0005541914.localdomain podman[314537]: 2025-12-02 10:07:35.500130723 +0000 UTC m=+0.046331065 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:07:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:35.574 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:35 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:35.726 2 INFO neutron.agent.securitygroups_rpc [None req-87d4804a-2e84-429a-b45c-6794fadb1faa 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:35 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:35.761 262347 INFO neutron.agent.dhcp.agent [None req-722fce03-82e4-4640-bc71-c0e988c059da - - - - - -] DHCP configuration for ports {'446070b9-f224-43e5-ab20-4688e138137b'} is completed
Dec 02 10:07:35 np0005541914.localdomain podman[314580]: 2025-12-02 10:07:35.80897176 +0000 UTC m=+0.071316323 container create 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:07:35 np0005541914.localdomain systemd[1]: Started libpod-conmon-46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8.scope.
Dec 02 10:07:35 np0005541914.localdomain podman[314580]: 2025-12-02 10:07:35.770694334 +0000 UTC m=+0.033038867 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:07:35 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:07:35 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ebe0b74ba4dda832e5b3490b63546e986a845c8fa34d1b6f07fa0ac860c1a3a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:07:35 np0005541914.localdomain podman[314580]: 2025-12-02 10:07:35.885798854 +0000 UTC m=+0.148143377 container init 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:07:35 np0005541914.localdomain podman[314580]: 2025-12-02 10:07:35.895163471 +0000 UTC m=+0.157508004 container start 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:07:35 np0005541914.localdomain dnsmasq[314598]: started, version 2.85 cachesize 150
Dec 02 10:07:35 np0005541914.localdomain dnsmasq[314598]: DNS service limited to local subnets
Dec 02 10:07:35 np0005541914.localdomain dnsmasq[314598]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:07:35 np0005541914.localdomain dnsmasq[314598]: warning: no upstream servers configured
Dec 02 10:07:35 np0005541914.localdomain dnsmasq-dhcp[314598]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 02 10:07:35 np0005541914.localdomain dnsmasq[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/addn_hosts - 0 addresses
Dec 02 10:07:35 np0005541914.localdomain dnsmasq-dhcp[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/host
Dec 02 10:07:35 np0005541914.localdomain dnsmasq-dhcp[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/opts
Dec 02 10:07:36 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:36.012 262347 INFO neutron.agent.dhcp.agent [None req-d6d5210e-70fd-4811-ae62-610fcc48966e - - - - - -] DHCP configuration for ports {'bcca0f1d-63c7-4ef3-835f-3a1c95a17a98'} is completed
Dec 02 10:07:36 np0005541914.localdomain ceph-mon[301710]: pgmap v269: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 852 B/s wr, 27 op/s
Dec 02 10:07:36 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/792357925' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:07:36 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/792357925' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:07:36 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:36.384 2 INFO neutron.agent.securitygroups_rpc [None req-d47733e1-0ad6-43a1-b5b9-ff46ea82484c 71c1ab73f6584cdc8a5ac07abc1165b6 c83c01183aba40c080a7dde4126b2e3b - - default default] Security group member updated ['8d157c15-6c1c-467c-9dbb-a97c83d265b6']
Dec 02 10:07:36 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:36.435 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:36Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c31bb0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c31f40>], id=ed4e0a2d-2b7b-4ec8-8de0-8664040b563f, ip_allocation=immediate, mac_address=fa:16:3e:1c:8a:ed, name=tempest-TagsExtTest-1556791831, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:07:32Z, description=, dns_domain=, id=f20ca235-57e0-46f5-9e44-df634db1299f, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TagsExtTest-test-network-1118069821, port_security_enabled=True, project_id=c83c01183aba40c080a7dde4126b2e3b, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=28317, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1683, status=ACTIVE, subnets=['11a5f377-3c38-4f51-b0a0-6917a8d4f7db'], tags=[], tenant_id=c83c01183aba40c080a7dde4126b2e3b, updated_at=2025-12-02T10:07:33Z, vlan_transparent=None, network_id=f20ca235-57e0-46f5-9e44-df634db1299f, port_security_enabled=True, project_id=c83c01183aba40c080a7dde4126b2e3b, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['8d157c15-6c1c-467c-9dbb-a97c83d265b6'], standard_attr_id=1704, status=DOWN, tags=[], tenant_id=c83c01183aba40c080a7dde4126b2e3b, updated_at=2025-12-02T10:07:36Z on network f20ca235-57e0-46f5-9e44-df634db1299f
Dec 02 10:07:36 np0005541914.localdomain dnsmasq[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/addn_hosts - 1 addresses
Dec 02 10:07:36 np0005541914.localdomain dnsmasq-dhcp[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/host
Dec 02 10:07:36 np0005541914.localdomain dnsmasq-dhcp[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/opts
Dec 02 10:07:36 np0005541914.localdomain podman[314615]: 2025-12-02 10:07:36.645849397 +0000 UTC m=+0.058707157 container kill 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:07:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:07:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:07:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:07:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:07:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:07:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:07:36 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:36.969 262347 INFO neutron.agent.dhcp.agent [None req-59d6016e-b18a-434e-83ed-d551979c4e16 - - - - - -] DHCP configuration for ports {'ed4e0a2d-2b7b-4ec8-8de0-8664040b563f'} is completed
Dec 02 10:07:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v270: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 853 B/s wr, 27 op/s
Dec 02 10:07:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:37.442 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:07:38 np0005541914.localdomain ceph-mon[301710]: pgmap v270: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 853 B/s wr, 27 op/s
Dec 02 10:07:38 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:38.978 2 INFO neutron.agent.securitygroups_rpc [None req-c850bb74-fc0e-4a51-908b-f066332bb7ea 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v271: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Dec 02 10:07:40 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:40.176 2 INFO neutron.agent.securitygroups_rpc [None req-7bcf89da-35a6-4e58-8ea7-d94146bd4928 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:40 np0005541914.localdomain ceph-mon[301710]: pgmap v271: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Dec 02 10:07:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:40.576 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:41 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:07:41 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:41 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:41 np0005541914.localdomain podman[314653]: 2025-12-02 10:07:41.022541341 +0000 UTC m=+0.058529800 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:07:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v272: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 852 B/s wr, 27 op/s
Dec 02 10:07:41 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:41.079 2 INFO neutron.agent.securitygroups_rpc [None req-5e11d272-dc8e-4f26-afbf-45da4f1c93dd c695c8d7887d4f5d99397fbd9a108bd7 27cf39916c5c4bc1833487052acaa22a - - default default] Security group member updated ['202778bd-7cc5-43e0-846c-ad0385193194']
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.088268) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061088311, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1960, "num_deletes": 267, "total_data_size": 2547812, "memory_usage": 2596256, "flush_reason": "Manual Compaction"}
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061102020, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1653647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19541, "largest_seqno": 21496, "table_properties": {"data_size": 1646223, "index_size": 4318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16375, "raw_average_key_size": 20, "raw_value_size": 1630923, "raw_average_value_size": 2054, "num_data_blocks": 189, "num_entries": 794, "num_filter_entries": 794, "num_deletions": 267, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669940, "oldest_key_time": 1764669940, "file_creation_time": 1764670061, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 13816 microseconds, and 6873 cpu microseconds.
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.102079) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1653647 bytes OK
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.102107) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.104379) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.104409) EVENT_LOG_v1 {"time_micros": 1764670061104401, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.104431) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2538673, prev total WAL file size 2539422, number of live WAL files 2.
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.105437) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303138' seq:72057594037927935, type:22 .. '6C6F676D0034323731' seq:0, type:0; will stop at (end)
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1614KB)], [27(15MB)]
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061105519, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 18292374, "oldest_snapshot_seqno": -1}
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 12679 keys, 17929036 bytes, temperature: kUnknown
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061238104, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 17929036, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17854269, "index_size": 41962, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31749, "raw_key_size": 339656, "raw_average_key_size": 26, "raw_value_size": 17635765, "raw_average_value_size": 1390, "num_data_blocks": 1601, "num_entries": 12679, "num_filter_entries": 12679, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670061, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.238632) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 17929036 bytes
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.240479) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.7 rd, 135.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 15.9 +0.0 blob) out(17.1 +0.0 blob), read-write-amplify(21.9) write-amplify(10.8) OK, records in: 13225, records dropped: 546 output_compression: NoCompression
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.240512) EVENT_LOG_v1 {"time_micros": 1764670061240499, "job": 14, "event": "compaction_finished", "compaction_time_micros": 132831, "compaction_time_cpu_micros": 46134, "output_level": 6, "num_output_files": 1, "total_output_size": 17929036, "num_input_records": 13225, "num_output_records": 12679, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061241397, "job": 14, "event": "table_file_deletion", "file_number": 29}
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670061243810, "job": 14, "event": "table_file_deletion", "file_number": 27}
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.105385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.244001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.244007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.244010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.244013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:07:41 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:07:41.244016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:07:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:07:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:07:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:07:41 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:07:42 np0005541914.localdomain podman[314675]: 2025-12-02 10:07:42.087652986 +0000 UTC m=+0.087890894 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:07:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:07:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:07:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:07:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:07:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:07:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:07:42 np0005541914.localdomain podman[314675]: 2025-12-02 10:07:42.12584709 +0000 UTC m=+0.126084978 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:07:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:07:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:07:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:07:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:07:42 np0005541914.localdomain podman[314676]: 2025-12-02 10:07:42.06892591 +0000 UTC m=+0.069198919 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Dec 02 10:07:42 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:07:42 np0005541914.localdomain podman[314674]: 2025-12-02 10:07:42.167477961 +0000 UTC m=+0.170434293 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 02 10:07:42 np0005541914.localdomain podman[314676]: 2025-12-02 10:07:42.176944162 +0000 UTC m=+0.177217171 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 10:07:42 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:07:42 np0005541914.localdomain podman[314682]: 2025-12-02 10:07:42.254694253 +0000 UTC m=+0.245184471 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 02 10:07:42 np0005541914.localdomain podman[314674]: 2025-12-02 10:07:42.277302158 +0000 UTC m=+0.280258540 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:07:42 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:07:42 np0005541914.localdomain ceph-mon[301710]: pgmap v272: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 852 B/s wr, 27 op/s
Dec 02 10:07:42 np0005541914.localdomain podman[314682]: 2025-12-02 10:07:42.31572948 +0000 UTC m=+0.306219698 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:07:42 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:07:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:42.441 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:07:42 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:42.807 2 INFO neutron.agent.securitygroups_rpc [None req-d64f03a9-f848-43c0-ae5d-2c025d3e76ac 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v273: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 852 B/s wr, 27 op/s
Dec 02 10:07:43 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:43.410 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:42Z, description=, device_id=28981963-6c8e-4dd9-bb45-3615e25a3e05, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c7a400>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c7a970>], id=1fb4aa21-b66c-43e1-8729-0b136cb670d6, ip_allocation=immediate, mac_address=fa:16:3e:f8:b2:7b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1744, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:42Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:07:43 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:07:43 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:43 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:43 np0005541914.localdomain podman[314775]: 2025-12-02 10:07:43.593293328 +0000 UTC m=+0.030897211 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 02 10:07:43 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:43.932 262347 INFO neutron.agent.dhcp.agent [None req-fee1ccf2-56aa-40be-8f35-6cfb8256a1b1 - - - - - -] DHCP configuration for ports {'1fb4aa21-b66c-43e1-8729-0b136cb670d6'} is completed
Dec 02 10:07:44 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:44.321 2 INFO neutron.agent.securitygroups_rpc [None req-4929d793-e537-4c53-a2f5-ddc0b60f2500 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:44 np0005541914.localdomain ceph-mon[301710]: pgmap v273: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 852 B/s wr, 27 op/s
Dec 02 10:07:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "57640910-43d5-4fb7-b9cf-1d15a1cbc8ab", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:07:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, vol_name:cephfs) < ""
Dec 02 10:07:44 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:44 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:44 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:44 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:44 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:44.799+0000 7fd37dd6f640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:44.799+0000 7fd37dd6f640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:44.799+0000 7fd37dd6f640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:44.799+0000 7fd37dd6f640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:44.799+0000 7fd37dd6f640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/57640910-43d5-4fb7-b9cf-1d15a1cbc8ab/.meta.tmp'
Dec 02 10:07:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/57640910-43d5-4fb7-b9cf-1d15a1cbc8ab/.meta.tmp' to config b'/volumes/_nogroup/57640910-43d5-4fb7-b9cf-1d15a1cbc8ab/.meta'
Dec 02 10:07:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, vol_name:cephfs) < ""
Dec 02 10:07:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57640910-43d5-4fb7-b9cf-1d15a1cbc8ab", "format": "json"}]: dispatch
Dec 02 10:07:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, vol_name:cephfs) < ""
Dec 02 10:07:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, vol_name:cephfs) < ""
Dec 02 10:07:45 np0005541914.localdomain systemd[1]: tmp-crun.Rsq8UK.mount: Deactivated successfully.
Dec 02 10:07:45 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:07:45 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:45 np0005541914.localdomain podman[314828]: 2025-12-02 10:07:45.013320628 +0000 UTC m=+0.063500444 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:07:45 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v274: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Dec 02 10:07:45 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:07:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:45.580 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:46 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "57640910-43d5-4fb7-b9cf-1d15a1cbc8ab", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:07:46 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "57640910-43d5-4fb7-b9cf-1d15a1cbc8ab", "format": "json"}]: dispatch
Dec 02 10:07:46 np0005541914.localdomain ceph-mon[301710]: pgmap v274: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Dec 02 10:07:46 np0005541914.localdomain ceph-mon[301710]: mgrmap e44: np0005541914.lljzmk(active, since 7m), standbys: np0005541912.qwddia, np0005541913.mfesdm
Dec 02 10:07:46 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:46.987 2 INFO neutron.agent.securitygroups_rpc [None req-771b159d-2ba2-4111-b13e-47ca58a8e2e2 71c1ab73f6584cdc8a5ac07abc1165b6 c83c01183aba40c080a7dde4126b2e3b - - default default] Security group member updated ['8d157c15-6c1c-467c-9dbb-a97c83d265b6']
Dec 02 10:07:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v275: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Dec 02 10:07:47 np0005541914.localdomain dnsmasq[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/addn_hosts - 0 addresses
Dec 02 10:07:47 np0005541914.localdomain dnsmasq-dhcp[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/host
Dec 02 10:07:47 np0005541914.localdomain dnsmasq-dhcp[314598]: read /var/lib/neutron/dhcp/f20ca235-57e0-46f5-9e44-df634db1299f/opts
Dec 02 10:07:47 np0005541914.localdomain podman[314865]: 2025-12-02 10:07:47.198815237 +0000 UTC m=+0.050780473 container kill 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 02 10:07:47 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:47.427 2 INFO neutron.agent.securitygroups_rpc [None req-fa6ee8ca-ed1b-4c8f-b78c-b44d9f9936bd 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:07:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:47.483 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:47 np0005541914.localdomain dnsmasq[314598]: exiting on receipt of SIGTERM
Dec 02 10:07:47 np0005541914.localdomain podman[314900]: 2025-12-02 10:07:47.797841379 +0000 UTC m=+0.062457762 container kill 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:07:47 np0005541914.localdomain systemd[1]: libpod-46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8.scope: Deactivated successfully.
Dec 02 10:07:47 np0005541914.localdomain podman[314920]: 2025-12-02 10:07:47.863074544 +0000 UTC m=+0.043018643 container died 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:07:47 np0005541914.localdomain systemd[1]: tmp-crun.B9gNmU.mount: Deactivated successfully.
Dec 02 10:07:47 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8-userdata-shm.mount: Deactivated successfully.
Dec 02 10:07:47 np0005541914.localdomain podman[314920]: 2025-12-02 10:07:47.907496561 +0000 UTC m=+0.087440640 container remove 46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-f20ca235-57e0-46f5-9e44-df634db1299f, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 02 10:07:47 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:07:47Z|00140|binding|INFO|Releasing lport c4e3a46c-de2f-4fec-815f-929a0c5cb506 from this chassis (sb_readonly=0)
Dec 02 10:07:47 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:07:47Z|00141|binding|INFO|Setting lport c4e3a46c-de2f-4fec-815f-929a0c5cb506 down in Southbound
Dec 02 10:07:47 np0005541914.localdomain kernel: device tapc4e3a46c-de left promiscuous mode
Dec 02 10:07:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:47.918 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:47.926 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-f20ca235-57e0-46f5-9e44-df634db1299f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f20ca235-57e0-46f5-9e44-df634db1299f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c83c01183aba40c080a7dde4126b2e3b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e3578eb-6671-47df-b07e-7931aa192d6f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=c4e3a46c-de2f-4fec-815f-929a0c5cb506) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:07:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:47.928 159483 INFO neutron.agent.ovn.metadata.agent [-] Port c4e3a46c-de2f-4fec-815f-929a0c5cb506 in datapath f20ca235-57e0-46f5-9e44-df634db1299f unbound from our chassis
Dec 02 10:07:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:47.931 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f20ca235-57e0-46f5-9e44-df634db1299f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:07:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:07:47.932 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e1942d95-7f5a-4efe-af96-265ce71d9dca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:07:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:47.941 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:47 np0005541914.localdomain systemd[1]: libpod-conmon-46ef14ebee02340a4cd386f790bd52bf2eccb64757d35c75a115ff7b7be5fae8.scope: Deactivated successfully.
Dec 02 10:07:47 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:47.968 262347 INFO neutron.agent.dhcp.agent [None req-f31f1937-7b09-4696-81ad-787f3f41f226 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:07:48 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:48.177 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:07:48 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-2ebe0b74ba4dda832e5b3490b63546e986a845c8fa34d1b6f07fa0ac860c1a3a-merged.mount: Deactivated successfully.
Dec 02 10:07:48 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2df20ca235\x2d57e0\x2d46f5\x2d9e44\x2ddf634db1299f.mount: Deactivated successfully.
Dec 02 10:07:48 np0005541914.localdomain ceph-mon[301710]: pgmap v275: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Dec 02 10:07:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:48.500 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:48 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:07:48 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:48 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:48 np0005541914.localdomain podman[314956]: 2025-12-02 10:07:48.521554074 +0000 UTC m=+0.044923642 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 3.2 KiB/s wr, 15 op/s
Dec 02 10:07:49 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:49.110 2 INFO neutron.agent.securitygroups_rpc [None req-1c456783-7537-497f-860f-91e236f22124 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57640910-43d5-4fb7-b9cf-1d15a1cbc8ab", "format": "json"}]: dispatch
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57640910-43d5-4fb7-b9cf-1d15a1cbc8ab' of type subvolume
Dec 02 10:07:49 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.198+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '57640910-43d5-4fb7-b9cf-1d15a1cbc8ab' of type subvolume
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57640910-43d5-4fb7-b9cf-1d15a1cbc8ab", "force": true, "format": "json"}]: dispatch
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, vol_name:cephfs) < ""
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/57640910-43d5-4fb7-b9cf-1d15a1cbc8ab'' moved to trashcan
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:57640910-43d5-4fb7-b9cf-1d15a1cbc8ab, vol_name:cephfs) < ""
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.219+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.219+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.219+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.219+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.219+0000 7fd37f572640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.251+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.251+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.251+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.251+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:49 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:07:49.251+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:07:50 np0005541914.localdomain ceph-mon[301710]: pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 3.2 KiB/s wr, 15 op/s
Dec 02 10:07:50 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "57640910-43d5-4fb7-b9cf-1d15a1cbc8ab", "format": "json"}]: dispatch
Dec 02 10:07:50 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "57640910-43d5-4fb7-b9cf-1d15a1cbc8ab", "force": true, "format": "json"}]: dispatch
Dec 02 10:07:50 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:50.456 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:50Z, description=, device_id=78d092c4-a185-413f-9411-6829a90d534a, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c43850>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034bef490>], id=01ac7708-8593-4476-a708-c9acb0f1f95f, ip_allocation=immediate, mac_address=fa:16:3e:8b:f8:eb, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1769, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:50Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:07:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:50.582 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:50 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:07:50 np0005541914.localdomain podman[315016]: 2025-12-02 10:07:50.653335342 +0000 UTC m=+0.058409798 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 02 10:07:50 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:50 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:07:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:07:50 np0005541914.localdomain podman[315029]: 2025-12-02 10:07:50.775084817 +0000 UTC m=+0.084706417 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:07:50 np0005541914.localdomain podman[315029]: 2025-12-02 10:07:50.794802033 +0000 UTC m=+0.104423623 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:07:50 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:07:50 np0005541914.localdomain podman[315031]: 2025-12-02 10:07:50.880429235 +0000 UTC m=+0.186211377 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc.)
Dec 02 10:07:50 np0005541914.localdomain podman[315031]: 2025-12-02 10:07:50.896871481 +0000 UTC m=+0.202653693 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git)
Dec 02 10:07:50 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:07:50 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:50.962 262347 INFO neutron.agent.dhcp.agent [None req-700cc9f1-1174-4a17-8173-8a657a7233c9 - - - - - -] DHCP configuration for ports {'01ac7708-8593-4476-a708-c9acb0f1f95f'} is completed
Dec 02 10:07:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v277: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 2.6 KiB/s wr, 0 op/s
Dec 02 10:07:51 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:51.564 2 INFO neutron.agent.securitygroups_rpc [None req-3a04bc57-bfc9-42ed-a239-801c8326e405 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:52 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:52.379 2 INFO neutron.agent.securitygroups_rpc [None req-b88c63e0-efad-4ee2-bdbd-ab6bd93ed0e7 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:52 np0005541914.localdomain ceph-mon[301710]: pgmap v277: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 2.6 KiB/s wr, 0 op/s
Dec 02 10:07:52 np0005541914.localdomain ceph-mon[301710]: mgrmap e45: np0005541914.lljzmk(active, since 7m), standbys: np0005541912.qwddia, np0005541913.mfesdm
Dec 02 10:07:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:07:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:52.486 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v278: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 5.8 KiB/s wr, 2 op/s
Dec 02 10:07:53 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:07:53 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:53 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:53 np0005541914.localdomain podman[315094]: 2025-12-02 10:07:53.463579204 +0000 UTC m=+0.048574225 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:07:54 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:54.022 2 INFO neutron.agent.securitygroups_rpc [None req-1b0d4d6e-60bd-47bc-abae-8825e5c440ec 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:54 np0005541914.localdomain ceph-mon[301710]: pgmap v278: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 5.8 KiB/s wr, 2 op/s
Dec 02 10:07:54 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:54.810 2 INFO neutron.agent.securitygroups_rpc [None req-f2f34722-a858-417d-bb9f-16583a7fb9bc 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:54 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:54.920 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:54Z, description=, device_id=bd3a398a-f17b-4e2f-8103-70e4ec91527f, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034b84b50>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034b84520>], id=a5e3f6a7-8a59-48f3-a0a7-6a9123ac18db, ip_allocation=immediate, mac_address=fa:16:3e:90:c2:e4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1799, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:54Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:07:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v279: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 5.8 KiB/s wr, 2 op/s
Dec 02 10:07:55 np0005541914.localdomain systemd[1]: tmp-crun.3i6Cp2.mount: Deactivated successfully.
Dec 02 10:07:55 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:07:55 np0005541914.localdomain podman[315130]: 2025-12-02 10:07:55.148653074 +0000 UTC m=+0.078530846 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 10:07:55 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:55 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:55 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:55.325 262347 INFO neutron.agent.dhcp.agent [None req-70b55b8b-cc49-4fd8-980d-3f9c95a496dc - - - - - -] DHCP configuration for ports {'a5e3f6a7-8a59-48f3-a0a7-6a9123ac18db'} is completed
Dec 02 10:07:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:55.585 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:56 np0005541914.localdomain ceph-mon[301710]: pgmap v279: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 5.8 KiB/s wr, 2 op/s
Dec 02 10:07:56 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/4256956907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:07:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:07:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v280: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 5.8 KiB/s wr, 2 op/s
Dec 02 10:07:57 np0005541914.localdomain systemd[1]: tmp-crun.lhs1w0.mount: Deactivated successfully.
Dec 02 10:07:57 np0005541914.localdomain podman[315151]: 2025-12-02 10:07:57.088161099 +0000 UTC m=+0.091793995 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, container_name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:07:57 np0005541914.localdomain podman[315151]: 2025-12-02 10:07:57.09959751 +0000 UTC m=+0.103230406 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:07:57 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:07:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:07:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:57.521 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:07:57 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:57.906 2 INFO neutron.agent.securitygroups_rpc [None req-2105c7c1-c7a9-4dc4-9a73-811f6d407872 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:58 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:07:58 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:58 np0005541914.localdomain podman[315188]: 2025-12-02 10:07:58.462947147 +0000 UTC m=+0.067834797 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:07:58 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:58 np0005541914.localdomain ceph-mon[301710]: pgmap v280: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 5.8 KiB/s wr, 2 op/s
Dec 02 10:07:58 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1390269581' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:07:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v281: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 6.2 KiB/s wr, 2 op/s
Dec 02 10:07:59 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:59.235 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:58Z, description=, device_id=e7c99296-9174-4aa1-8a50-fbf38ad25ab2, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c631f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034d34b20>], id=c4605c3a-263f-4351-968e-4bd2c4b38f47, ip_allocation=immediate, mac_address=fa:16:3e:59:89:ec, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1849, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:59Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:07:59 np0005541914.localdomain podman[315227]: 2025-12-02 10:07:59.427878961 +0000 UTC m=+0.050481374 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 10:07:59 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:07:59 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:59 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:07:59 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:07:59.456 2 INFO neutron.agent.securitygroups_rpc [None req-abbcf6d8-096d-46e2-96f3-3a8543ab77e7 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:07:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:07:59.529 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:07:59 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:59.693 262347 INFO neutron.agent.dhcp.agent [None req-e4df9d5d-7f0a-47f5-b99b-75a474f9ba99 - - - - - -] DHCP configuration for ports {'c4605c3a-263f-4351-968e-4bd2c4b38f47'} is completed
Dec 02 10:07:59 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:07:59.734 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:07:59Z, description=, device_id=6e07310c-661c-4e65-99ba-3cf8abb9265e, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c4afa0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c4aaf0>], id=80ff45a8-fbd8-425c-9b74-a117d6b2b0b7, ip_allocation=immediate, mac_address=fa:16:3e:09:9b:fa, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1851, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:07:59Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:07:59 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:07:59 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:07:59 np0005541914.localdomain podman[315266]: 2025-12-02 10:07:59.950686789 +0000 UTC m=+0.064398902 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:07:59 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:00 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:00.267 262347 INFO neutron.agent.dhcp.agent [None req-e2b2850a-fbe7-4f1c-95df-aae417ce2aed - - - - - -] DHCP configuration for ports {'80ff45a8-fbd8-425c-9b74-a117d6b2b0b7'} is completed
Dec 02 10:08:00 np0005541914.localdomain ceph-mon[301710]: pgmap v281: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 6.2 KiB/s wr, 2 op/s
Dec 02 10:08:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:00.529 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:08:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:00.587 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v282: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 3.7 KiB/s wr, 1 op/s
Dec 02 10:08:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:01.529 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:08:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:08:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e134 e134: 6 total, 6 up, 6 in
Dec 02 10:08:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:02.569 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:02 np0005541914.localdomain ceph-mon[301710]: pgmap v282: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 3.7 KiB/s wr, 1 op/s
Dec 02 10:08:02 np0005541914.localdomain systemd[1]: tmp-crun.cne5GG.mount: Deactivated successfully.
Dec 02 10:08:02 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:08:02 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:02 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:02 np0005541914.localdomain podman[315303]: 2025-12-02 10:08:02.599665362 +0000 UTC m=+0.099596105 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:08:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v284: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 3.0 KiB/s wr, 20 op/s
Dec 02 10:08:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:03.178 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:08:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:03.179 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:08:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:03.179 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:08:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:03.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:08:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:03.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:08:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:03.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:08:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:03.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:08:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:03.551 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:08:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:03.551 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:08:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:03.552 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:08:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:03.552 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:08:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e135 e135: 6 total, 6 up, 6 in
Dec 02 10:08:03 np0005541914.localdomain ceph-mon[301710]: osdmap e134: 6 total, 6 up, 6 in
Dec 02 10:08:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:08:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:08:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:08:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:08:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:08:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19204 "" "Go-http-client/1.1"
Dec 02 10:08:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:08:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2242023877' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:08:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:04.032 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:08:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:04.270 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:08:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:04.273 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11546MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:08:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:04.273 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:08:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:04.274 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:08:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:08:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2808362731' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:08:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:08:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2808362731' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:08:04 np0005541914.localdomain ceph-mon[301710]: pgmap v284: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 3.0 KiB/s wr, 20 op/s
Dec 02 10:08:04 np0005541914.localdomain ceph-mon[301710]: osdmap e135: 6 total, 6 up, 6 in
Dec 02 10:08:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2242023877' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:08:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2808362731' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:08:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2808362731' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:08:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e136 e136: 6 total, 6 up, 6 in
Dec 02 10:08:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:04.648 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:08:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:04.649 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:08:04 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:04.777 2 INFO neutron.agent.securitygroups_rpc [None req-22f3ee62-f7aa-4000-8792-d140ffb54960 ea09fd599b014976b4b6d101bd660615 64d30b95640d4bc4991756da49cb0163 - - default default] Security group member updated ['e4e82d11-7ddc-4424-b13a-044ca8b63239']
Dec 02 10:08:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:04.975 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 02 10:08:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v287: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 4.2 KiB/s wr, 33 op/s
Dec 02 10:08:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:05.589 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:05 np0005541914.localdomain ceph-mon[301710]: osdmap e136: 6 total, 6 up, 6 in
Dec 02 10:08:05 np0005541914.localdomain ceph-mon[301710]: pgmap v287: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 4.2 KiB/s wr, 33 op/s
Dec 02 10:08:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:05.670 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 02 10:08:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:05.671 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 02 10:08:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:05.686 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 02 10:08:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:05.713 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 02 10:08:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:05.747 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:08:06 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:06.102 2 INFO neutron.agent.securitygroups_rpc [None req-c8dc5996-311b-454a-bef8-be44e05069d7 ea09fd599b014976b4b6d101bd660615 64d30b95640d4bc4991756da49cb0163 - - default default] Security group member updated ['e4e82d11-7ddc-4424-b13a-044ca8b63239']
Dec 02 10:08:06 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:06.139 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:08:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:08:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2347714614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:08:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:06.218 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:08:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:06.225 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:08:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:06.252 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:08:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:06.254 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:08:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:06.255 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:08:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:06.256 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:08:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:06.256 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 02 10:08:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:06.277 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 02 10:08:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:06.278 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:08:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:06.278 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 02 10:08:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e137 e137: 6 total, 6 up, 6 in
Dec 02 10:08:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:08:06
Dec 02 10:08:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:08:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:08:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['vms', 'manila_data', '.mgr', 'backups', 'manila_metadata', 'volumes', 'images']
Dec 02 10:08:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:08:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:08:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:08:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:08:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:08:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:08:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v289: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 5.5 KiB/s wr, 44 op/s
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.0001633056776940257 quantized to 32 (current 32)
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00430047372278057 of space, bias 1.0, pg target 0.8586612533151871 quantized to 32 (current 32)
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 6.815762841987716e-06 of space, bias 4.0, pg target 0.005425347222222221 quantized to 16 (current 16)
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:08:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2347714614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:08:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2178363026' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:08:07 np0005541914.localdomain ceph-mon[301710]: osdmap e137: 6 total, 6 up, 6 in
Dec 02 10:08:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:07.288 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "60995fd9-a7c9-4e80-ba2f-4e09200b332e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, vol_name:cephfs) < ""
Dec 02 10:08:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:07.315 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:08:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:07.316 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:08:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:07.316 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:08:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:07.329 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:08:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:07.330 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/60995fd9-a7c9-4e80-ba2f-4e09200b332e/.meta.tmp'
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/60995fd9-a7c9-4e80-ba2f-4e09200b332e/.meta.tmp' to config b'/volumes/_nogroup/60995fd9-a7c9-4e80-ba2f-4e09200b332e/.meta'
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, vol_name:cephfs) < ""
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "60995fd9-a7c9-4e80-ba2f-4e09200b332e", "format": "json"}]: dispatch
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, vol_name:cephfs) < ""
Dec 02 10:08:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, vol_name:cephfs) < ""
Dec 02 10:08:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:08:07 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:07.528 2 INFO neutron.agent.securitygroups_rpc [None req-99b1e585-32ae-4cc8-9a4d-b88a12900723 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']
Dec 02 10:08:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:07.572 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:08 np0005541914.localdomain ceph-mon[301710]: pgmap v289: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 5.5 KiB/s wr, 44 op/s
Dec 02 10:08:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "60995fd9-a7c9-4e80-ba2f-4e09200b332e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:08:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "60995fd9-a7c9-4e80-ba2f-4e09200b332e", "format": "json"}]: dispatch
Dec 02 10:08:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:08:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2385997552' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:08:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1668397609' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:08:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1668397609' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:08:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e138 e138: 6 total, 6 up, 6 in
Dec 02 10:08:08 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:08.755 2 INFO neutron.agent.securitygroups_rpc [None req-c7a539d4-2f79-4a17-aaa4-1046dc1167cd b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']
Dec 02 10:08:08 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:08.999 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:08:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v291: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 128 KiB/s rd, 18 KiB/s wr, 178 op/s
Dec 02 10:08:09 np0005541914.localdomain ceph-mon[301710]: osdmap e138: 6 total, 6 up, 6 in
Dec 02 10:08:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e139 e139: 6 total, 6 up, 6 in
Dec 02 10:08:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:09.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:08:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:09.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:08:10 np0005541914.localdomain ceph-mon[301710]: pgmap v291: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 128 KiB/s rd, 18 KiB/s wr, 178 op/s
Dec 02 10:08:10 np0005541914.localdomain ceph-mon[301710]: osdmap e139: 6 total, 6 up, 6 in
Dec 02 10:08:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:10.592 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:10 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:10.645 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:08:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:08:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2189887501' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:08:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:08:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2189887501' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:08:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v293: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 118 KiB/s rd, 16 KiB/s wr, 163 op/s
Dec 02 10:08:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:08:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < ""
Dec 02 10:08:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta.tmp'
Dec 02 10:08:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta.tmp' to config b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta'
Dec 02 10:08:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < ""
Dec 02 10:08:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "format": "json"}]: dispatch
Dec 02 10:08:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < ""
Dec 02 10:08:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < ""
Dec 02 10:08:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e140 e140: 6 total, 6 up, 6 in
Dec 02 10:08:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2189887501' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:08:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2189887501' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:08:11 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:08:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:08:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:08:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:08:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:08:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:08:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:08:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:08:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:08:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:08:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:08:12 np0005541914.localdomain ceph-mon[301710]: pgmap v293: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 118 KiB/s rd, 16 KiB/s wr, 163 op/s
Dec 02 10:08:12 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:08:12 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "format": "json"}]: dispatch
Dec 02 10:08:12 np0005541914.localdomain ceph-mon[301710]: osdmap e140: 6 total, 6 up, 6 in
Dec 02 10:08:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:08:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:12.574 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:08:12 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:08:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:08:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:08:13 np0005541914.localdomain podman[315368]: 2025-12-02 10:08:13.057520588 +0000 UTC m=+0.060918813 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 10:08:13 np0005541914.localdomain podman[315368]: 2025-12-02 10:08:13.065808433 +0000 UTC m=+0.069206628 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:08:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v295: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 216 KiB/s rd, 30 KiB/s wr, 298 op/s
Dec 02 10:08:13 np0005541914.localdomain systemd[1]: tmp-crun.MWISR6.mount: Deactivated successfully.
Dec 02 10:08:13 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:08:13 np0005541914.localdomain podman[315392]: 2025-12-02 10:08:13.131197102 +0000 UTC m=+0.089154621 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec 02 10:08:13 np0005541914.localdomain podman[315369]: 2025-12-02 10:08:13.164437763 +0000 UTC m=+0.161444542 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:08:13 np0005541914.localdomain podman[315369]: 2025-12-02 10:08:13.17573055 +0000 UTC m=+0.172737379 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:08:13 np0005541914.localdomain podman[315392]: 2025-12-02 10:08:13.181841578 +0000 UTC m=+0.139799097 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 02 10:08:13 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:08:13 np0005541914.localdomain podman[315389]: 2025-12-02 10:08:13.098127646 +0000 UTC m=+0.062530612 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Dec 02 10:08:13 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:08:13 np0005541914.localdomain podman[315389]: 2025-12-02 10:08:13.22876897 +0000 UTC m=+0.193171886 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:08:13 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:08:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e141 e141: 6 total, 6 up, 6 in
Dec 02 10:08:14 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:14.142 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:13Z, description=, device_id=fa4c36f4-387f-4161-9755-5d4f05fecaf8, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c63970>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f40369a0220>], id=dd85ff93-7a22-4f13-8b46-aaa1f9cacdaa, ip_allocation=immediate, mac_address=fa:16:3e:ad:e0:85, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1918, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:13Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:08:14 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:08:14 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:14 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:14 np0005541914.localdomain podman[315468]: 2025-12-02 10:08:14.366428275 +0000 UTC m=+0.062390888 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 02 10:08:14 np0005541914.localdomain ceph-mon[301710]: pgmap v295: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 216 KiB/s rd, 30 KiB/s wr, 298 op/s
Dec 02 10:08:14 np0005541914.localdomain ceph-mon[301710]: osdmap e141: 6 total, 6 up, 6 in
Dec 02 10:08:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e142 e142: 6 total, 6 up, 6 in
Dec 02 10:08:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:14.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:08:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "snap_name": "c562bc6d-afee-44f1-9f1b-5b7fe43288c6", "format": "json"}]: dispatch
Dec 02 10:08:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c562bc6d-afee-44f1-9f1b-5b7fe43288c6, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < ""
Dec 02 10:08:14 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:14.611 262347 INFO neutron.agent.dhcp.agent [None req-88192ab6-62ca-48db-8a74-d2e74f2dd84c - - - - - -] DHCP configuration for ports {'dd85ff93-7a22-4f13-8b46-aaa1f9cacdaa'} is completed
Dec 02 10:08:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:c562bc6d-afee-44f1-9f1b-5b7fe43288c6, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < ""
Dec 02 10:08:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v298: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 102 KiB/s rd, 14 KiB/s wr, 141 op/s
Dec 02 10:08:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e143 e143: 6 total, 6 up, 6 in
Dec 02 10:08:15 np0005541914.localdomain ceph-mon[301710]: osdmap e142: 6 total, 6 up, 6 in
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.442 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:08:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:08:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:15.632 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e144 e144: 6 total, 6 up, 6 in
Dec 02 10:08:16 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "snap_name": "c562bc6d-afee-44f1-9f1b-5b7fe43288c6", "format": "json"}]: dispatch
Dec 02 10:08:16 np0005541914.localdomain ceph-mon[301710]: pgmap v298: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 102 KiB/s rd, 14 KiB/s wr, 141 op/s
Dec 02 10:08:16 np0005541914.localdomain ceph-mon[301710]: osdmap e143: 6 total, 6 up, 6 in
Dec 02 10:08:16 np0005541914.localdomain ceph-mon[301710]: osdmap e144: 6 total, 6 up, 6 in
Dec 02 10:08:16 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:08:16 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:16 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:16 np0005541914.localdomain podman[315506]: 2025-12-02 10:08:16.794220013 +0000 UTC m=+0.055638360 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:08:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v301: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:08:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e145 e145: 6 total, 6 up, 6 in
Dec 02 10:08:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:08:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:17.577 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:18 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:18.372 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:17Z, description=, device_id=39bd4c60-e6fd-4810-b3aa-833809eba7cb, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c280d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c28700>], id=817538ad-ee0b-4af8-be7e-37d998560f02, ip_allocation=immediate, mac_address=fa:16:3e:63:92:e4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1928, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:18Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:08:18 np0005541914.localdomain ceph-mon[301710]: pgmap v301: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail
Dec 02 10:08:18 np0005541914.localdomain ceph-mon[301710]: osdmap e145: 6 total, 6 up, 6 in
Dec 02 10:08:18 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:08:18 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:18 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:18 np0005541914.localdomain podman[315544]: 2025-12-02 10:08:18.581971174 +0000 UTC m=+0.055206367 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:08:18 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:18.821 262347 INFO neutron.agent.dhcp.agent [None req-09dd9778-8466-4385-b370-97ba4ba93c55 - - - - - -] DHCP configuration for ports {'817538ad-ee0b-4af8-be7e-37d998560f02'} is completed
Dec 02 10:08:18 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:18.904 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:08:18 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:18.906 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated
Dec 02 10:08:18 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:18.908 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:08:18 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:18.909 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c2dc4a-21d5-4ccc-a36a-544189da9972]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:08:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v303: 177 pgs: 177 active+clean; 145 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 183 KiB/s rd, 21 KiB/s wr, 256 op/s
Dec 02 10:08:19 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:19.295 2 INFO neutron.agent.securitygroups_rpc [None req-2bfb9ebd-1846-44bd-b2e4-1f309ec769c2 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:08:19 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:19.335 2 INFO neutron.agent.securitygroups_rpc [None req-a38d4309-d6ec-4127-b224-040aeb412100 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']
Dec 02 10:08:20 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:20.054 2 INFO neutron.agent.securitygroups_rpc [None req-d321651e-4716-4e9e-b955-449cf71fa8bf 11daa5bc8801433f99b71663879a8016 62771fbe049e4d57aae1b3554ed3a36c - - default default] Security group member updated ['e79580ca-0f44-4e36-92d0-a0d65fb01c6b']
Dec 02 10:08:20 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:20.425 2 INFO neutron.agent.securitygroups_rpc [None req-d321651e-4716-4e9e-b955-449cf71fa8bf 11daa5bc8801433f99b71663879a8016 62771fbe049e4d57aae1b3554ed3a36c - - default default] Security group member updated ['e79580ca-0f44-4e36-92d0-a0d65fb01c6b']
Dec 02 10:08:20 np0005541914.localdomain ceph-mon[301710]: pgmap v303: 177 pgs: 177 active+clean; 145 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 183 KiB/s rd, 21 KiB/s wr, 256 op/s
Dec 02 10:08:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:20.580 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:20.634 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:20 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:20.646 2 INFO neutron.agent.securitygroups_rpc [None req-87212674-2d83-471b-8535-396909b240c7 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:08:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:08:20 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:08:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "snap_name": "c562bc6d-afee-44f1-9f1b-5b7fe43288c6_d6e2b43c-7ed8-4069-8208-0ab8116bd864", "force": true, "format": "json"}]: dispatch
Dec 02 10:08:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c562bc6d-afee-44f1-9f1b-5b7fe43288c6_d6e2b43c-7ed8-4069-8208-0ab8116bd864, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < ""
Dec 02 10:08:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta.tmp'
Dec 02 10:08:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta.tmp' to config b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta'
Dec 02 10:08:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c562bc6d-afee-44f1-9f1b-5b7fe43288c6_d6e2b43c-7ed8-4069-8208-0ab8116bd864, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < ""
Dec 02 10:08:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "snap_name": "c562bc6d-afee-44f1-9f1b-5b7fe43288c6", "force": true, "format": "json"}]: dispatch
Dec 02 10:08:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c562bc6d-afee-44f1-9f1b-5b7fe43288c6, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < ""
Dec 02 10:08:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v304: 177 pgs: 177 active+clean; 145 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 143 KiB/s rd, 16 KiB/s wr, 199 op/s
Dec 02 10:08:21 np0005541914.localdomain systemd[1]: tmp-crun.BLY1Yw.mount: Deactivated successfully.
Dec 02 10:08:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta.tmp'
Dec 02 10:08:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta.tmp' to config b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b/.meta'
Dec 02 10:08:21 np0005541914.localdomain podman[315565]: 2025-12-02 10:08:21.095179606 +0000 UTC m=+0.098025863 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:08:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:c562bc6d-afee-44f1-9f1b-5b7fe43288c6, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < ""
Dec 02 10:08:21 np0005541914.localdomain podman[315566]: 2025-12-02 10:08:21.153855129 +0000 UTC m=+0.150662350 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, name=ubi9-minimal, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Dec 02 10:08:21 np0005541914.localdomain podman[315565]: 2025-12-02 10:08:21.18153193 +0000 UTC m=+0.184378197 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:08:21 np0005541914.localdomain podman[315566]: 2025-12-02 10:08:21.193990232 +0000 UTC m=+0.190797453 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.expose-services=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, build-date=2025-08-20T13:12:41)
Dec 02 10:08:21 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e146 e146: 6 total, 6 up, 6 in
Dec 02 10:08:21 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.240547) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101240611, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 908, "num_deletes": 258, "total_data_size": 1857668, "memory_usage": 1876384, "flush_reason": "Manual Compaction"}
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101254089, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 1224963, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21501, "largest_seqno": 22404, "table_properties": {"data_size": 1220813, "index_size": 1877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10165, "raw_average_key_size": 21, "raw_value_size": 1212225, "raw_average_value_size": 2509, "num_data_blocks": 82, "num_entries": 483, "num_filter_entries": 483, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670061, "oldest_key_time": 1764670061, "file_creation_time": 1764670101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 13597 microseconds, and 5869 cpu microseconds.
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.254145) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 1224963 bytes OK
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.254170) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.256028) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.256079) EVENT_LOG_v1 {"time_micros": 1764670101256044, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.256107) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1852842, prev total WAL file size 1852842, number of live WAL files 2.
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.256768) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end)
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(1196KB)], [30(17MB)]
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101256808, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 19153999, "oldest_snapshot_seqno": -1}
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 12630 keys, 17217640 bytes, temperature: kUnknown
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101336176, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 17217640, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17143771, "index_size": 41192, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31621, "raw_key_size": 339323, "raw_average_key_size": 26, "raw_value_size": 16926659, "raw_average_value_size": 1340, "num_data_blocks": 1564, "num_entries": 12630, "num_filter_entries": 12630, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.336507) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 17217640 bytes
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.340169) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.1 rd, 216.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 17.1 +0.0 blob) out(16.4 +0.0 blob), read-write-amplify(29.7) write-amplify(14.1) OK, records in: 13162, records dropped: 532 output_compression: NoCompression
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.340201) EVENT_LOG_v1 {"time_micros": 1764670101340186, "job": 16, "event": "compaction_finished", "compaction_time_micros": 79449, "compaction_time_cpu_micros": 33043, "output_level": 6, "num_output_files": 1, "total_output_size": 17217640, "num_input_records": 13162, "num_output_records": 12630, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101340543, "job": 16, "event": "table_file_deletion", "file_number": 32}
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670101343424, "job": 16, "event": "table_file_deletion", "file_number": 30}
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.256700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.343512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.343520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.343523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.343526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:08:21 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:21.343529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:08:21 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:21.764 2 INFO neutron.agent.securitygroups_rpc [None req-7754a6d4-074d-4d03-86b1-db3804b94ab5 11daa5bc8801433f99b71663879a8016 62771fbe049e4d57aae1b3554ed3a36c - - default default] Security group member updated ['e79580ca-0f44-4e36-92d0-a0d65fb01c6b']
Dec 02 10:08:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "snap_name": "c562bc6d-afee-44f1-9f1b-5b7fe43288c6_d6e2b43c-7ed8-4069-8208-0ab8116bd864", "force": true, "format": "json"}]: dispatch
Dec 02 10:08:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "snap_name": "c562bc6d-afee-44f1-9f1b-5b7fe43288c6", "force": true, "format": "json"}]: dispatch
Dec 02 10:08:22 np0005541914.localdomain ceph-mon[301710]: pgmap v304: 177 pgs: 177 active+clean; 145 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 143 KiB/s rd, 16 KiB/s wr, 199 op/s
Dec 02 10:08:22 np0005541914.localdomain ceph-mon[301710]: osdmap e146: 6 total, 6 up, 6 in
Dec 02 10:08:22 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:22.307 2 INFO neutron.agent.securitygroups_rpc [None req-a959ed63-fb01-427b-9973-6a88ead4c1cf 11daa5bc8801433f99b71663879a8016 62771fbe049e4d57aae1b3554ed3a36c - - default default] Security group member updated ['e79580ca-0f44-4e36-92d0-a0d65fb01c6b']
Dec 02 10:08:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 10:08:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 600.0 total, 600.0 interval
                                                           Cumulative writes: 2121 writes, 22K keys, 2121 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.06 MB/s
                                                           Cumulative WAL: 2121 writes, 2121 syncs, 1.00 writes per sync, written: 0.03 GB, 0.06 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 2121 writes, 22K keys, 2121 commit groups, 1.0 writes per commit group, ingest: 35.72 MB, 0.06 MB/s
                                                           Interval WAL: 2121 writes, 2121 syncs, 1.00 writes per sync, written: 0.03 GB, 0.06 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    130.6      0.20              0.07         8    0.025       0      0       0.0       0.0
                                                             L6      1/0   16.42 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    162.5    150.2      0.78              0.31         7    0.112     89K   3465       0.0       0.0
                                                            Sum      1/0   16.42 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.5    129.4    146.2      0.98              0.38        15    0.065     89K   3465       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.5    129.8    146.6      0.98              0.38        14    0.070     89K   3465       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    162.5    150.2      0.78              0.31         7    0.112     89K   3465       0.0       0.0
                                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    132.2      0.20              0.07         7    0.028       0      0       0.0       0.0
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 600.0 total, 600.0 interval
                                                           Flush(GB): cumulative 0.025, interval 0.025
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.14 GB write, 0.24 MB/s write, 0.12 GB read, 0.21 MB/s read, 1.0 seconds
                                                           Interval compaction: 0.14 GB write, 0.24 MB/s write, 0.12 GB read, 0.21 MB/s read, 1.0 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x562ea3bdf1f0#2 capacity: 308.00 MB usage: 11.81 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 9.6e-05 secs_since: 0
                                                           Block cache entry stats(count,size,portion): DataBlock(543,11.20 MB,3.63551%) FilterBlock(15,266.67 KB,0.0845525%) IndexBlock(15,355.95 KB,0.112861%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
Dec 02 10:08:22 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:22.350 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:21Z, description=, device_id=360ff237-df4a-410c-985b-04b10ed3866a, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ca0fa0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ca0970>], id=34861965-833c-44c1-9b9e-024a5e7ba046, ip_allocation=immediate, mac_address=fa:16:3e:43:ac:7f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1954, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:22Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:08:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:08:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e147 e147: 6 total, 6 up, 6 in
Dec 02 10:08:22 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 02 10:08:22 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:22 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:22 np0005541914.localdomain podman[315626]: 2025-12-02 10:08:22.538183605 +0000 UTC m=+0.055119715 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 02 10:08:22 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:22.580 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:22 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:22.775 262347 INFO neutron.agent.dhcp.agent [None req-b1c13362-4aea-4376-a3a1-260b5e8a1feb - - - - - -] DHCP configuration for ports {'34861965-833c-44c1-9b9e-024a5e7ba046'} is completed
Dec 02 10:08:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v307: 177 pgs: 177 active+clean; 146 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 162 KiB/s rd, 25 KiB/s wr, 228 op/s
Dec 02 10:08:23 np0005541914.localdomain sudo[315646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:08:23 np0005541914.localdomain sudo[315646]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:08:23 np0005541914.localdomain sudo[315646]: pam_unix(sudo:session): session closed for user root
Dec 02 10:08:23 np0005541914.localdomain sudo[315664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:08:23 np0005541914.localdomain sudo[315664]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:08:23 np0005541914.localdomain ceph-mon[301710]: osdmap e147: 6 total, 6 up, 6 in
Dec 02 10:08:23 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e148 e148: 6 total, 6 up, 6 in
Dec 02 10:08:23 np0005541914.localdomain sudo[315664]: pam_unix(sudo:session): session closed for user root
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:08:24 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 2ae50231-5da6-4ce4-8428-6b7d0b4f970c (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:08:24 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 2ae50231-5da6-4ce4-8428-6b7d0b4f970c (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:08:24 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 2ae50231-5da6-4ce4-8428-6b7d0b4f970c (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:08:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "format": "json"}]: dispatch
Dec 02 10:08:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:aa7c0661-ed15-4711-8a85-f361d992598b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:08:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:aa7c0661-ed15-4711-8a85-f361d992598b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:08:24 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa7c0661-ed15-4711-8a85-f361d992598b' of type subvolume
Dec 02 10:08:24 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:24.303+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa7c0661-ed15-4711-8a85-f361d992598b' of type subvolume
Dec 02 10:08:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "force": true, "format": "json"}]: dispatch
Dec 02 10:08:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < ""
Dec 02 10:08:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/aa7c0661-ed15-4711-8a85-f361d992598b'' moved to trashcan
Dec 02 10:08:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:08:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa7c0661-ed15-4711-8a85-f361d992598b, vol_name:cephfs) < ""
Dec 02 10:08:24 np0005541914.localdomain sudo[315714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:08:24 np0005541914.localdomain sudo[315714]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:08:24 np0005541914.localdomain sudo[315714]: pam_unix(sudo:session): session closed for user root
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: pgmap v307: 177 pgs: 177 active+clean; 146 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 162 KiB/s rd, 25 KiB/s wr, 228 op/s
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: osdmap e148: 6 total, 6 up, 6 in
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:08:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e149 e149: 6 total, 6 up, 6 in
Dec 02 10:08:24 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:24.636 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:08:24 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:24.638 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated
Dec 02 10:08:24 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:24.640 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:08:24 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:24.641 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[b397f820-8c50-427a-ba9b-8a07f006834a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:08:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:24.736 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v310: 177 pgs: 177 active+clean; 146 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 13 KiB/s wr, 42 op/s
Dec 02 10:08:25 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "format": "json"}]: dispatch
Dec 02 10:08:25 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aa7c0661-ed15-4711-8a85-f361d992598b", "force": true, "format": "json"}]: dispatch
Dec 02 10:08:25 np0005541914.localdomain ceph-mon[301710]: osdmap e149: 6 total, 6 up, 6 in
Dec 02 10:08:25 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e150 e150: 6 total, 6 up, 6 in
Dec 02 10:08:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:25.636 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:26.224 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:26 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:08:26 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:26 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:26 np0005541914.localdomain podman[315748]: 2025-12-02 10:08:26.261604394 +0000 UTC m=+0.064627527 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 02 10:08:26 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:26.352 2 INFO neutron.agent.securitygroups_rpc [None req-e2093ff2-f702-4cf4-8beb-c324b04696df b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']
Dec 02 10:08:26 np0005541914.localdomain ceph-mon[301710]: pgmap v310: 177 pgs: 177 active+clean; 146 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 13 KiB/s wr, 42 op/s
Dec 02 10:08:26 np0005541914.localdomain ceph-mon[301710]: osdmap e150: 6 total, 6 up, 6 in
Dec 02 10:08:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:08:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 02 10:08:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp'
Dec 02 10:08:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta'
Dec 02 10:08:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 02 10:08:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "format": "json"}]: dispatch
Dec 02 10:08:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 02 10:08:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 02 10:08:27 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:27.075 2 INFO neutron.agent.securitygroups_rpc [None req-95108d82-ee5d-48ad-b799-8f24c524b687 378bbf1156ab482eae3359fa477651da 13c70d8f74354389b175376619620536 - - default default] Security group member updated ['20308e6b-d2a0-4e90-a058-a0e30da512e9']
Dec 02 10:08:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v312: 177 pgs: 177 active+clean; 146 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 12 KiB/s wr, 37 op/s
Dec 02 10:08:27 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:27.113 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:26Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cbdf10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cbd820>], id=491f7b83-ee3f-430e-932f-490e3e05d878, ip_allocation=immediate, mac_address=fa:16:3e:3d:ad:62, name=tempest-RoutersAdminNegativeIpV6Test-807161528, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=True, project_id=13c70d8f74354389b175376619620536, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['20308e6b-d2a0-4e90-a058-a0e30da512e9'], standard_attr_id=1979, status=DOWN, tags=[], tenant_id=13c70d8f74354389b175376619620536, updated_at=2025-12-02T10:08:26Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:08:27 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:08:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:08:27 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 02 10:08:27 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:27 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:27 np0005541914.localdomain podman[315787]: 2025-12-02 10:08:27.334072817 +0000 UTC m=+0.069442825 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 10:08:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:08:27 np0005541914.localdomain podman[315800]: 2025-12-02 10:08:27.452277719 +0000 UTC m=+0.090374988 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 02 10:08:27 np0005541914.localdomain podman[315800]: 2025-12-02 10:08:27.466939369 +0000 UTC m=+0.105036678 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:08:27 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:08:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:08:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:27.543 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:08:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:27.545 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated
Dec 02 10:08:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:27.548 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:08:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:27.549 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcb9a95-cb84-4e09-ba20-13cf3905df1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:08:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:27.613 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:27 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:08:27 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:08:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:08:27 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3666129381' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:08:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:08:27 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3666129381' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:08:27 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:27.742 262347 INFO neutron.agent.dhcp.agent [None req-d3c736b6-a536-43e4-90e1-0567dd9fd942 - - - - - -] DHCP configuration for ports {'491f7b83-ee3f-430e-932f-490e3e05d878'} is completed
Dec 02 10:08:27 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:08:27 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:27 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:27 np0005541914.localdomain podman[315844]: 2025-12-02 10:08:27.883278772 +0000 UTC m=+0.054525036 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 02 10:08:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "60995fd9-a7c9-4e80-ba2f-4e09200b332e", "format": "json"}]: dispatch
Dec 02 10:08:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:08:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:08:28 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '60995fd9-a7c9-4e80-ba2f-4e09200b332e' of type subvolume
Dec 02 10:08:28 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:28.210+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '60995fd9-a7c9-4e80-ba2f-4e09200b332e' of type subvolume
Dec 02 10:08:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "60995fd9-a7c9-4e80-ba2f-4e09200b332e", "force": true, "format": "json"}]: dispatch
Dec 02 10:08:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, vol_name:cephfs) < ""
Dec 02 10:08:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/60995fd9-a7c9-4e80-ba2f-4e09200b332e'' moved to trashcan
Dec 02 10:08:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:08:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:60995fd9-a7c9-4e80-ba2f-4e09200b332e, vol_name:cephfs) < ""
Dec 02 10:08:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:28.471 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:28 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:28.619 262347 INFO neutron.agent.dhcp.agent [None req-633af77c-d730-4d72-9be7-502ca6237d88 - - - - - -] Synchronizing state
Dec 02 10:08:28 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:28.907 262347 INFO neutron.agent.dhcp.agent [None req-285c8656-c63d-4a62-a012-01c1dac192db - - - - - -] All active networks have been fetched through RPC.
Dec 02 10:08:28 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:28.908 262347 INFO neutron.agent.dhcp.agent [-] Starting network 82491f39-5f47-4832-8f3f-0918125a354c dhcp configuration
Dec 02 10:08:28 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:28.909 262347 INFO neutron.agent.dhcp.agent [-] Finished network 82491f39-5f47-4832-8f3f-0918125a354c dhcp configuration
Dec 02 10:08:28 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:28.909 262347 INFO neutron.agent.dhcp.agent [None req-285c8656-c63d-4a62-a012-01c1dac192db - - - - - -] Synchronizing state complete
Dec 02 10:08:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:08:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "format": "json"}]: dispatch
Dec 02 10:08:28 np0005541914.localdomain ceph-mon[301710]: pgmap v312: 177 pgs: 177 active+clean; 146 MiB data, 769 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 12 KiB/s wr, 37 op/s
Dec 02 10:08:28 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3666129381' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:08:28 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3666129381' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:08:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v313: 177 pgs: 177 active+clean; 146 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 17 KiB/s wr, 95 op/s
Dec 02 10:08:29 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:29.179 2 INFO neutron.agent.securitygroups_rpc [None req-0c3b85c4-8ee4-4ede-a5c5-9e006eeb1903 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:08:29 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:29.361 2 INFO neutron.agent.securitygroups_rpc [None req-29a3155c-2369-431f-9930-578d28142354 378bbf1156ab482eae3359fa477651da 13c70d8f74354389b175376619620536 - - default default] Security group member updated ['20308e6b-d2a0-4e90-a058-a0e30da512e9']
Dec 02 10:08:29 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:29.554 2 INFO neutron.agent.securitygroups_rpc [None req-4ad98beb-2033-4528-bdc9-387b15719003 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']
Dec 02 10:08:29 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:08:29 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:29 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:29 np0005541914.localdomain podman[315882]: 2025-12-02 10:08:29.593726319 +0000 UTC m=+0.060582652 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 02 10:08:29 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:29.707 262347 INFO neutron.agent.dhcp.agent [None req-ebba84e8-672c-4fcb-91f2-6a8526455fe8 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:29Z, description=, device_id=16d8413f-4499-43df-91d5-75a325d35422, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034bef7c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034bef2b0>], id=9fbec2c7-c6c3-4e79-b4aa-9f65686bea53, ip_allocation=immediate, mac_address=fa:16:3e:85:6b:b5, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1990, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:29Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:08:29 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:08:29 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:29 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:29 np0005541914.localdomain podman[315919]: 2025-12-02 10:08:29.870657328 +0000 UTC m=+0.046320754 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:08:29 np0005541914.localdomain systemd[1]: tmp-crun.XOGvZS.mount: Deactivated successfully.
Dec 02 10:08:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "60995fd9-a7c9-4e80-ba2f-4e09200b332e", "format": "json"}]: dispatch
Dec 02 10:08:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "60995fd9-a7c9-4e80-ba2f-4e09200b332e", "force": true, "format": "json"}]: dispatch
Dec 02 10:08:29 np0005541914.localdomain ceph-mon[301710]: pgmap v313: 177 pgs: 177 active+clean; 146 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 17 KiB/s wr, 95 op/s
Dec 02 10:08:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:30.049 262347 INFO neutron.agent.dhcp.agent [None req-15385913-4925-4c05-8b48-a1cf173ad0cc - - - - - -] DHCP configuration for ports {'9fbec2c7-c6c3-4e79-b4aa-9f65686bea53'} is completed
Dec 02 10:08:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "snap_name": "9664f206-7cb4-4a1a-b619-2a201c7ebe10", "format": "json"}]: dispatch
Dec 02 10:08:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 02 10:08:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 02 10:08:30 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:30.604 2 INFO neutron.agent.securitygroups_rpc [None req-84026af9-2eee-4701-994f-c9f2d1b31806 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:08:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:30.638 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:30.783 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:08:30 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "snap_name": "9664f206-7cb4-4a1a-b619-2a201c7ebe10", "format": "json"}]: dispatch
Dec 02 10:08:31 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:31.000 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:08:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v314: 177 pgs: 177 active+clean; 146 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 14 KiB/s wr, 76 op/s
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e151 e151: 6 total, 6 up, 6 in
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.235497) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111235537, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 470, "num_deletes": 252, "total_data_size": 408037, "memory_usage": 417736, "flush_reason": "Manual Compaction"}
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111241603, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 267212, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22409, "largest_seqno": 22874, "table_properties": {"data_size": 264600, "index_size": 659, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7423, "raw_average_key_size": 21, "raw_value_size": 259020, "raw_average_value_size": 742, "num_data_blocks": 29, "num_entries": 349, "num_filter_entries": 349, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670101, "oldest_key_time": 1764670101, "file_creation_time": 1764670111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 6154 microseconds, and 1994 cpu microseconds.
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.241650) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 267212 bytes OK
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.241671) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.244117) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.244142) EVENT_LOG_v1 {"time_micros": 1764670111244135, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.244166) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 405071, prev total WAL file size 405071, number of live WAL files 2.
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.244783) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303037' seq:72057594037927935, type:22 .. '6D6772737461740034323538' seq:0, type:0; will stop at (end)
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(260KB)], [33(16MB)]
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111244833, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 17484852, "oldest_snapshot_seqno": -1}
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 12456 keys, 15368831 bytes, temperature: kUnknown
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111331651, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 15368831, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15300752, "index_size": 35850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31173, "raw_key_size": 336044, "raw_average_key_size": 26, "raw_value_size": 15091207, "raw_average_value_size": 1211, "num_data_blocks": 1340, "num_entries": 12456, "num_filter_entries": 12456, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670111, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.331932) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 15368831 bytes
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.334848) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.2 rd, 176.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 16.4 +0.0 blob) out(14.7 +0.0 blob), read-write-amplify(122.9) write-amplify(57.5) OK, records in: 12979, records dropped: 523 output_compression: NoCompression
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.334883) EVENT_LOG_v1 {"time_micros": 1764670111334867, "job": 18, "event": "compaction_finished", "compaction_time_micros": 86916, "compaction_time_cpu_micros": 47485, "output_level": 6, "num_output_files": 1, "total_output_size": 15368831, "num_input_records": 12979, "num_output_records": 12456, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111335266, "job": 18, "event": "table_file_deletion", "file_number": 35}
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670111339231, "job": 18, "event": "table_file_deletion", "file_number": 33}
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.244698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.339308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.339316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.339320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.339325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:08:31 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:08:31.339329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:08:32 np0005541914.localdomain ceph-mon[301710]: pgmap v314: 177 pgs: 177 active+clean; 146 MiB data, 773 MiB used, 41 GiB / 42 GiB avail; 55 KiB/s rd, 14 KiB/s wr, 76 op/s
Dec 02 10:08:32 np0005541914.localdomain ceph-mon[301710]: osdmap e151: 6 total, 6 up, 6 in
Dec 02 10:08:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:08:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:32.615 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v316: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 22 KiB/s wr, 97 op/s
Dec 02 10:08:33 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e152 e152: 6 total, 6 up, 6 in
Dec 02 10:08:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:08:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:08:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:08:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:08:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:08:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19210 "" "Go-http-client/1.1"
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "snap_name": "9664f206-7cb4-4a1a-b619-2a201c7ebe10", "target_sub_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, target_sub_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, vol_name:cephfs) < ""
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp'
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp' to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta'
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 898ff1c3-8d33-4699-8b66-ad70851c4a10 for path b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945'
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp'
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta'
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, target_sub_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, vol_name:cephfs) < ""
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.900+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.900+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.900+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.900+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.900+0000 7fd382578640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, dd0a2a5c-154a-4d2f-9d8b-fce17b535945)
Dec 02 10:08:33 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.945+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.945+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.945+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.945+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:33.945+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:08:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, dd0a2a5c-154a-4d2f-9d8b-fce17b535945) -- by 0 seconds
Dec 02 10:08:33 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:33.974 2 INFO neutron.agent.securitygroups_rpc [None req-3fe80696-0018-4728-a361-06aaa88dce01 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']
Dec 02 10:08:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp'
Dec 02 10:08:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp' to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta'
Dec 02 10:08:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:34.114 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:08:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:34.114 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:34.115 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:08:34 np0005541914.localdomain ceph-mon[301710]: pgmap v316: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 22 KiB/s wr, 97 op/s
Dec 02 10:08:34 np0005541914.localdomain ceph-mon[301710]: osdmap e152: 6 total, 6 up, 6 in
Dec 02 10:08:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v318: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 22 KiB/s wr, 97 op/s
Dec 02 10:08:35 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "snap_name": "9664f206-7cb4-4a1a-b619-2a201c7ebe10", "target_sub_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch
Dec 02 10:08:35 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch
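The two dispatches above are the mgr commands behind the clone the async_cloner is working on: start the snapshot clone, then poll its status. A minimal sketch of that flow using the ceph CLI, with the volume, subvolume, snapshot and clone names taken from the log (the polling loop is illustrative, not the driver's actual code):

#!/usr/bin/env python3
# Start a subvolume clone from a snapshot, then poll "fs clone status" until
# it leaves the pending/in-progress states. Names are copied from the log.
import json
import subprocess
import time

VOL = "cephfs"
SRC = "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6"
SNAP = "9664f206-7cb4-4a1a-b619-2a201c7ebe10"
CLONE = "dd0a2a5c-154a-4d2f-9d8b-fce17b535945"

def ceph(*args):
    # Thin wrapper around the ceph CLI, requesting JSON output.
    return subprocess.run(["ceph", *args, "--format", "json"],
                          capture_output=True, text=True, check=True).stdout

ceph("fs", "subvolume", "snapshot", "clone", VOL, SRC, SNAP, CLONE)

while True:
    status = json.loads(ceph("fs", "clone", "status", VOL, CLONE))
    state = status["status"]["state"]
    print("clone state:", state)
    if state not in ("pending", "in-progress"):
        break
    time.sleep(2)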
Dec 02 10:08:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:35.356 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:08:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:35.358 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated
Dec 02 10:08:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:35.361 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:08:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:35.362 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[041c7354-45a7-4073-9997-4bd8aa86216e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:08:35 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:08:35 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/640511349' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:08:35 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:08:35 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/640511349' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:08:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:35.679 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:36 np0005541914.localdomain ceph-mon[301710]: pgmap v318: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 22 KiB/s wr, 97 op/s
Dec 02 10:08:36 np0005541914.localdomain ceph-mon[301710]: mgrmap e46: np0005541914.lljzmk(active, since 8m), standbys: np0005541912.qwddia, np0005541913.mfesdm
Dec 02 10:08:36 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/640511349' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:08:36 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/640511349' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:08:36 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:08:36 np0005541914.localdomain podman[315983]: 2025-12-02 10:08:36.689742805 +0000 UTC m=+0.067232357 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:08:36 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:36 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
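The podman "container kill" event immediately before dnsmasq re-reading its addn_hosts/host/opts files is consistent with the DHCP agent signalling the dnsmasq container to reload its configuration. A hedged sketch of that signal, assuming SIGHUP is the reload mechanism (podman kill defaults to SIGKILL, so the signal has to be given explicitly); the container name is the one from the log:

#!/usr/bin/env python3
# Send SIGHUP to the dnsmasq container so it re-reads its host/opts files.
# The SIGHUP assumption is based on the reload behaviour shown in the log.
import subprocess

CONTAINER = "neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04"

subprocess.run(["podman", "kill", "--signal", "SIGHUP", CONTAINER], check=True)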
Dec 02 10:08:36 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8796ecd6-5c3f-49b4-a11b-51a83206e216", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:08:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, vol_name:cephfs) < ""
Dec 02 10:08:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:08:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:08:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:08:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:08:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:08:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.snap/9664f206-7cb4-4a1a-b619-2a201c7ebe10/2b81ef38-0fbb-45e8-bcd4-3691655610b8' to b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/b8120ca7-c290-4e3a-9129-3d9f8c2f97c9'
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v319: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 8.7 KiB/s wr, 26 op/s
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8796ecd6-5c3f-49b4-a11b-51a83206e216/.meta.tmp'
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8796ecd6-5c3f-49b4-a11b-51a83206e216/.meta.tmp' to config b'/volumes/_nogroup/8796ecd6-5c3f-49b4-a11b-51a83206e216/.meta'
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, vol_name:cephfs) < ""
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8796ecd6-5c3f-49b4-a11b-51a83206e216", "format": "json"}]: dispatch
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, vol_name:cephfs) < ""
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp'
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp' to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta'
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, vol_name:cephfs) < ""
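Together with the dispatch at 10:08:36, this is the plain create-then-getpath sequence client.openstack uses for a new share: create an isolated 1 GiB subvolume, then ask the mgr where its data directory lives. A minimal sketch with the same arguments as the JSON payloads in the log:

#!/usr/bin/env python3
# Create an isolated 1 GiB subvolume and resolve its data path, mirroring the
# "fs subvolume create" / "fs subvolume getpath" commands dispatched above.
import subprocess

VOL = "cephfs"
SUB = "8796ecd6-5c3f-49b4-a11b-51a83206e216"

subprocess.run(
    ["ceph", "fs", "subvolume", "create", VOL, SUB,
     "--size", str(1 * 1024 ** 3), "--namespace-isolated", "--mode", "0755"],
    check=True,
)

path = subprocess.run(
    ["ceph", "fs", "subvolume", "getpath", VOL, SUB],
    capture_output=True, text=True, check=True,
).stdout.strip()
print("subvolume path:", path)  # e.g. /volumes/_nogroup/<sub>/<uuid>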
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.clone_index] untracking 898ff1c3-8d33-4699-8b66-ad70851c4a10
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp'
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta'
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp'
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta.tmp' to config b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945/.meta'
Dec 02 10:08:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, dd0a2a5c-154a-4d2f-9d8b-fce17b535945)
Dec 02 10:08:37 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:08:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:08:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:37.653 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:38 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:38.065 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:37Z, description=, device_id=c7017fcc-0436-48d2-a91c-540638aa1a1f, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034bdf790>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034bdf2e0>], id=0fe53480-3bce-4fec-ade2-8b47c031657a, ip_allocation=immediate, mac_address=fa:16:3e:90:66:d1, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2032, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:37Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:08:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8796ecd6-5c3f-49b4-a11b-51a83206e216", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:08:38 np0005541914.localdomain ceph-mon[301710]: pgmap v319: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 8.7 KiB/s wr, 26 op/s
Dec 02 10:08:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8796ecd6-5c3f-49b4-a11b-51a83206e216", "format": "json"}]: dispatch
Dec 02 10:08:38 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:38.658 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:08:38 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:38.659 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated
Dec 02 10:08:38 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:38.660 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:08:38 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:38.661 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[abf91245-d36f-46f5-8e0e-c2673f8635a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:08:38 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:08:38 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:38 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:38 np0005541914.localdomain podman[316021]: 2025-12-02 10:08:38.748904787 +0000 UTC m=+0.057778767 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 10:08:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v320: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 32 KiB/s wr, 53 op/s
Dec 02 10:08:39 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:39.177 262347 INFO neutron.agent.dhcp.agent [None req-b09aea15-8d48-4c40-85c0-6b4bcd469730 - - - - - -] DHCP configuration for ports {'0fe53480-3bce-4fec-ade2-8b47c031657a'} is completed
Dec 02 10:08:40 np0005541914.localdomain ceph-mon[301710]: pgmap v320: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 32 KiB/s wr, 53 op/s
Dec 02 10:08:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:40.725 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8796ecd6-5c3f-49b4-a11b-51a83206e216", "format": "json"}]: dispatch
Dec 02 10:08:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:08:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:08:40 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8796ecd6-5c3f-49b4-a11b-51a83206e216' of type subvolume
Dec 02 10:08:40 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:08:40.761+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8796ecd6-5c3f-49b4-a11b-51a83206e216' of type subvolume
Dec 02 10:08:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8796ecd6-5c3f-49b4-a11b-51a83206e216", "force": true, "format": "json"}]: dispatch
Dec 02 10:08:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, vol_name:cephfs) < ""
Dec 02 10:08:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8796ecd6-5c3f-49b4-a11b-51a83206e216'' moved to trashcan
Dec 02 10:08:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:08:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8796ecd6-5c3f-49b4-a11b-51a83206e216, vol_name:cephfs) < ""
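The (95) reply above is expected: "fs clone status" is only valid for subvolumes created by cloning, and 8796ecd6-... was created directly, so the caller falls back to removing it with force. A small sketch of that error path, mirroring the dispatched commands (the try/except shape is illustrative only):

#!/usr/bin/env python3
# "fs clone status" on a directly-created subvolume fails with EOPNOTSUPP (95);
# the caller then removes the subvolume with --force, as seen in the log.
import subprocess

VOL = "cephfs"
SUB = "8796ecd6-5c3f-49b4-a11b-51a83206e216"

try:
    subprocess.run(["ceph", "fs", "clone", "status", VOL, SUB],
                   capture_output=True, text=True, check=True)
except subprocess.CalledProcessError as err:
    # Expected: "Operation not supported ... is not allowed on subvolume ...
    # of type subvolume", since SUB is not a clone.
    print("clone status failed:", err.stderr.strip())
    subprocess.run(["ceph", "fs", "subvolume", "rm", VOL, SUB, "--force"],
                   check=True)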
Dec 02 10:08:40 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:40.866 2 INFO neutron.agent.securitygroups_rpc [None req-33ec59f6-70cd-4828-b040-1367d796c3cf 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']
Dec 02 10:08:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v321: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 26 KiB/s wr, 43 op/s
Dec 02 10:08:41 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:41.117 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
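The DbSetCommand records the nb_cfg the agent has caught up to (14, from the SB_Global update seen at 10:08:34) in its own Chassis_Private row. A quick read-back of that row with ovn-sbctl, assuming the southbound DB is reachable at its default socket (pass --db=... otherwise); the record UUID is the one from the log:

#!/usr/bin/env python3
# Dump the agent's Chassis_Private row; its external_ids should contain
# neutron:ovn-metadata-sb-cfg="14" after the transaction above commits.
import subprocess

RECORD = "515e0717-8baa-40e6-ac30-5fb148626504"

out = subprocess.run(
    ["ovn-sbctl", "--format=json", "list", "Chassis_Private", RECORD],
    capture_output=True, text=True, check=True,
)
print(out.stdout)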
Dec 02 10:08:41 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:41.223 2 INFO neutron.agent.securitygroups_rpc [None req-67c167f7-d811-43ca-8236-d9881acaf013 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:08:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 e153: 6 total, 6 up, 6 in
Dec 02 10:08:41 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:41.528 2 INFO neutron.agent.securitygroups_rpc [None req-77ccc14a-3033-433b-916a-b05c2a4a2183 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']
Dec 02 10:08:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:08:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:08:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:08:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:08:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:08:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:08:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:08:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:08:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:08:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
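The exporter errors say it could not find daemon control sockets before calling appctl; ovn-northd normally does not run on a compute node, so that part is expected, while the ovsdb-server message points at the run directory being searched. A small check along the same lines, with glob patterns that assume the usual *.ctl naming and default run directories:

#!/usr/bin/env python3
# Look for the daemon control sockets the exporter needs before it can call
# appctl. The paths/patterns are assumptions based on standard OVS/OVN layout.
import glob

patterns = {
    "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
}

for daemon, pattern in patterns.items():
    matches = glob.glob(pattern)
    if matches:
        print(f"{daemon}: control socket present: {matches}")
    else:
        print(f"{daemon}: no control socket found ({pattern})")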
Dec 02 10:08:42 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8796ecd6-5c3f-49b4-a11b-51a83206e216", "format": "json"}]: dispatch
Dec 02 10:08:42 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8796ecd6-5c3f-49b4-a11b-51a83206e216", "force": true, "format": "json"}]: dispatch
Dec 02 10:08:42 np0005541914.localdomain ceph-mon[301710]: pgmap v321: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 26 KiB/s wr, 43 op/s
Dec 02 10:08:42 np0005541914.localdomain ceph-mon[301710]: osdmap e153: 6 total, 6 up, 6 in
Dec 02 10:08:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:08:42 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:42.499 2 INFO neutron.agent.securitygroups_rpc [None req-fd3bc9cf-ed5a-495f-beed-1c7d898feb8a 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:08:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:42.656 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:42 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:42.951 2 INFO neutron.agent.securitygroups_rpc [None req-98ebf0df-0324-4fc7-82f5-9efe0544203a 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']
Dec 02 10:08:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v323: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 26 KiB/s wr, 24 op/s
Dec 02 10:08:43 np0005541914.localdomain systemd[1]: tmp-crun.MyRFUn.mount: Deactivated successfully.
Dec 02 10:08:43 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:08:43 np0005541914.localdomain podman[316059]: 2025-12-02 10:08:43.843807026 +0000 UTC m=+0.055258678 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:08:43 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:43 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:08:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:08:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:08:43 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:08:43 np0005541914.localdomain podman[316073]: 2025-12-02 10:08:43.9503552 +0000 UTC m=+0.084709934 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 02 10:08:43 np0005541914.localdomain podman[316077]: 2025-12-02 10:08:43.992838366 +0000 UTC m=+0.120354649 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 10:08:44 np0005541914.localdomain podman[316074]: 2025-12-02 10:08:43.999718237 +0000 UTC m=+0.128732376 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:08:44 np0005541914.localdomain podman[316074]: 2025-12-02 10:08:44.009809158 +0000 UTC m=+0.138823317 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:08:44 np0005541914.localdomain podman[316075]: 2025-12-02 10:08:43.969133127 +0000 UTC m=+0.096874207 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:08:44 np0005541914.localdomain podman[316077]: 2025-12-02 10:08:44.063074444 +0000 UTC m=+0.190590717 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 02 10:08:44 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:08:44 np0005541914.localdomain podman[316075]: 2025-12-02 10:08:44.082763339 +0000 UTC m=+0.210504429 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 02 10:08:44 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:08:44 np0005541914.localdomain podman[316073]: 2025-12-02 10:08:44.099526354 +0000 UTC m=+0.233881098 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 10:08:44 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:08:44 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:08:44 np0005541914.localdomain ceph-mon[301710]: pgmap v323: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 26 KiB/s wr, 24 op/s
Dec 02 10:08:44 np0005541914.localdomain systemd[1]: tmp-crun.obnRm0.mount: Deactivated successfully.
Dec 02 10:08:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v324: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 26 KiB/s wr, 23 op/s
Dec 02 10:08:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:45.727 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:45 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:45.825 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:45Z, description=, device_id=aa61d6a1-1090-4f06-abf3-fa0ed7c99a0f, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c57fa0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c576a0>], id=f08da1e0-d5a3-4e95-9204-7c3f32b4d715, ip_allocation=immediate, mac_address=fa:16:3e:83:07:58, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2057, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:45Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:08:45 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:45.829 2 INFO neutron.agent.securitygroups_rpc [None req-a2676272-e4a7-4aab-af43-dd7cd656aeb3 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']
Dec 02 10:08:46 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:08:46 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:46 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:46 np0005541914.localdomain podman[316177]: 2025-12-02 10:08:46.05081684 +0000 UTC m=+0.049102059 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:08:46 np0005541914.localdomain ceph-mon[301710]: pgmap v324: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 26 KiB/s wr, 23 op/s
Dec 02 10:08:46 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:46.419 262347 INFO neutron.agent.dhcp.agent [None req-c6915532-ee5a-4d49-9550-16e481e85ae6 - - - - - -] DHCP configuration for ports {'f08da1e0-d5a3-4e95-9204-7c3f32b4d715'} is completed
Dec 02 10:08:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v325: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 26 KiB/s wr, 23 op/s
Dec 02 10:08:47 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:47.337 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:46Z, description=, device_id=69f8516c-fa8d-4437-9829-7d5c8ddbd262, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cae580>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034caedf0>], id=58f532c0-e523-4491-9cfc-d85fc74c7485, ip_allocation=immediate, mac_address=fa:16:3e:0f:8f:1f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2061, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:47Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:08:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:47.360 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:08:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:47.362 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated
Dec 02 10:08:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:47.364 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:08:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:47.364 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[f47f91e6-21cf-4e33-be0b-4f87e4b0a644]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:08:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:08:47 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 02 10:08:47 np0005541914.localdomain podman[316215]: 2025-12-02 10:08:47.545620251 +0000 UTC m=+0.056620861 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 02 10:08:47 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:47 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:47.694 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:47 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:47.860 262347 INFO neutron.agent.dhcp.agent [None req-7a1811ca-8a4f-41de-9de7-7c5ae4b0c2d3 - - - - - -] DHCP configuration for ports {'58f532c0-e523-4491-9cfc-d85fc74c7485'} is completed
Dec 02 10:08:48 np0005541914.localdomain ceph-mon[301710]: pgmap v325: 177 pgs: 177 active+clean; 146 MiB data, 774 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 26 KiB/s wr, 23 op/s
Dec 02 10:08:48 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:48.544 2 INFO neutron.agent.securitygroups_rpc [None req-1e3a85d5-a4d4-4ac9-b4fb-7c32fb08bdf0 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']
Dec 02 10:08:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v326: 177 pgs: 177 active+clean; 146 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 9.4 KiB/s wr, 3 op/s
Dec 02 10:08:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch
Dec 02 10:08:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:08:49 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:49.955 2 INFO neutron.agent.securitygroups_rpc [None req-59adf8f3-045e-4261-a367-eea8612462ef 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']
Dec 02 10:08:50 np0005541914.localdomain ceph-mon[301710]: pgmap v326: 177 pgs: 177 active+clean; 146 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 9.4 KiB/s wr, 3 op/s
Dec 02 10:08:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:50.731 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v327: 177 pgs: 177 active+clean; 146 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 9.4 KiB/s wr, 3 op/s
Dec 02 10:08:51 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch
Dec 02 10:08:51 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:51.457 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:08:51 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:51.460 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated
Dec 02 10:08:51 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:51.462 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:08:51 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:08:51.463 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[db27c9c6-a71b-45f5-8752-bbd1d1e56256]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:08:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:08:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:08:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:08:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch
Dec 02 10:08:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, vol_name:cephfs) < ""
Dec 02 10:08:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, vol_name:cephfs) < ""
Dec 02 10:08:52 np0005541914.localdomain podman[316236]: 2025-12-02 10:08:52.075110537 +0000 UTC m=+0.081698611 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:08:52 np0005541914.localdomain podman[316236]: 2025-12-02 10:08:52.113105714 +0000 UTC m=+0.119693788 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:08:52 np0005541914.localdomain systemd[1]: tmp-crun.amLQFU.mount: Deactivated successfully.
Dec 02 10:08:52 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:08:52 np0005541914.localdomain podman[316237]: 2025-12-02 10:08:52.136648118 +0000 UTC m=+0.139109226 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, version=9.6, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Dec 02 10:08:52 np0005541914.localdomain podman[316237]: 2025-12-02 10:08:52.152996149 +0000 UTC m=+0.155457257 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Dec 02 10:08:52 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:08:52 np0005541914.localdomain ceph-mon[301710]: pgmap v327: 177 pgs: 177 active+clean; 146 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 9.4 KiB/s wr, 3 op/s
Dec 02 10:08:52 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:08:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:08:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:52.722 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v328: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 517 B/s rd, 8.7 KiB/s wr, 3 op/s
Dec 02 10:08:53 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch
Dec 02 10:08:53 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:53.584 2 INFO neutron.agent.securitygroups_rpc [None req-e08d9635-b9e8-48c9-978b-72b4270a2462 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:08:54 np0005541914.localdomain ceph-mon[301710]: pgmap v328: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 517 B/s rd, 8.7 KiB/s wr, 3 op/s
Dec 02 10:08:54 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:08:54 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:54 np0005541914.localdomain podman[316296]: 2025-12-02 10:08:54.480094623 +0000 UTC m=+0.057353803 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 02 10:08:54 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:54 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:54.611 2 INFO neutron.agent.securitygroups_rpc [None req-3c6b2ae8-8852-4b91-8b57-db72052e455d 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']
Dec 02 10:08:55 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:55.004 2 INFO neutron.agent.securitygroups_rpc [None req-c6b91e56-d8d3-49df-8fb0-b7b5f6e00308 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:08:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v329: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 2.7 KiB/s wr, 1 op/s
Dec 02 10:08:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:55.732 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:55 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:55.871 2 INFO neutron.agent.securitygroups_rpc [None req-62ccdf95-04fd-49bc-8e08-6a4afcc10f44 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']
Dec 02 10:08:56 np0005541914.localdomain ceph-mon[301710]: pgmap v329: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 2.7 KiB/s wr, 1 op/s
Dec 02 10:08:56 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:56.443 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:08:55Z, description=, device_id=e151137c-6a3f-4ce9-9721-f5df26cfefd0, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c7aee0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f40355173a0>], id=fbfa2337-e476-4b17-b43e-e595dbb78ebf, ip_allocation=immediate, mac_address=fa:16:3e:12:b0:ca, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2083, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:08:55Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:08:56 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:56.645 2 INFO neutron.agent.securitygroups_rpc [None req-2779f3cb-e43d-46a6-b42c-1a159a69c67f b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']
Dec 02 10:08:56 np0005541914.localdomain systemd[1]: tmp-crun.CfQFpp.mount: Deactivated successfully.
Dec 02 10:08:56 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 02 10:08:56 np0005541914.localdomain podman[316334]: 2025-12-02 10:08:56.710209726 +0000 UTC m=+0.057881899 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:08:56 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:08:56 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:08:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v330: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 2.7 KiB/s wr, 1 op/s
Dec 02 10:08:57 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:08:57.206 262347 INFO neutron.agent.dhcp.agent [None req-30cf4cb4-3e3a-45a2-982a-3610b3cbd3c9 - - - - - -] DHCP configuration for ports {'fbfa2337-e476-4b17-b43e-e595dbb78ebf'} is completed
Dec 02 10:08:57 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:08:57.460 2 INFO neutron.agent.securitygroups_rpc [None req-675bc92b-ef71-4946-a4ea-2f67c0d27bea 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']
Dec 02 10:08:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:08:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch
Dec 02 10:08:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:08:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:08:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "force": true, "format": "json"}]: dispatch
Dec 02 10:08:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, vol_name:cephfs) < ""
Dec 02 10:08:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/dd0a2a5c-154a-4d2f-9d8b-fce17b535945'' moved to trashcan
Dec 02 10:08:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:08:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:dd0a2a5c-154a-4d2f-9d8b-fce17b535945, vol_name:cephfs) < ""
Dec 02 10:08:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:57.773 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:08:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:08:58 np0005541914.localdomain podman[316354]: 2025-12-02 10:08:58.072615768 +0000 UTC m=+0.080634009 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:08:58 np0005541914.localdomain podman[316354]: 2025-12-02 10:08:58.109249704 +0000 UTC m=+0.117267925 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 10:08:58 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:08:58 np0005541914.localdomain ceph-mon[301710]: pgmap v330: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 2.7 KiB/s wr, 1 op/s
Dec 02 10:08:58 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "format": "json"}]: dispatch
Dec 02 10:08:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:08:58.993 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:08:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v331: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 4.6 KiB/s wr, 1 op/s
Dec 02 10:08:59 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "dd0a2a5c-154a-4d2f-9d8b-fce17b535945", "force": true, "format": "json"}]: dispatch
Dec 02 10:09:00 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:00.328 2 INFO neutron.agent.securitygroups_rpc [None req-87a4be36-2a80-4d5d-bf3f-f65722b03fc3 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']
Dec 02 10:09:00 np0005541914.localdomain ceph-mon[301710]: pgmap v331: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 4.6 KiB/s wr, 1 op/s
Dec 02 10:09:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:00.547 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:09:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:00.734 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:00 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:00.769 2 INFO neutron.agent.securitygroups_rpc [None req-a8153026-65f9-484e-923d-c2362124502e 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']
Dec 02 10:09:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "snap_name": "9664f206-7cb4-4a1a-b619-2a201c7ebe10_4a876e35-722e-46c7-9bcc-640ed22bd047", "force": true, "format": "json"}]: dispatch
Dec 02 10:09:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10_4a876e35-722e-46c7-9bcc-640ed22bd047, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 02 10:09:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp'
Dec 02 10:09:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta'
Dec 02 10:09:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10_4a876e35-722e-46c7-9bcc-640ed22bd047, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 02 10:09:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "snap_name": "9664f206-7cb4-4a1a-b619-2a201c7ebe10", "force": true, "format": "json"}]: dispatch
Dec 02 10:09:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 02 10:09:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp'
Dec 02 10:09:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta.tmp' to config b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6/.meta'
Dec 02 10:09:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9664f206-7cb4-4a1a-b619-2a201c7ebe10, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 02 10:09:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v332: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 2.6 KiB/s wr, 0 op/s
Dec 02 10:09:01 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:01.840 2 INFO neutron.agent.securitygroups_rpc [None req-a5be6bf0-ce48-4e65-99a4-3416f730b3a2 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']
Dec 02 10:09:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "snap_name": "9664f206-7cb4-4a1a-b619-2a201c7ebe10_4a876e35-722e-46c7-9bcc-640ed22bd047", "force": true, "format": "json"}]: dispatch
Dec 02 10:09:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "snap_name": "9664f206-7cb4-4a1a-b619-2a201c7ebe10", "force": true, "format": "json"}]: dispatch
Dec 02 10:09:02 np0005541914.localdomain ceph-mon[301710]: pgmap v332: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 2.6 KiB/s wr, 0 op/s
Dec 02 10:09:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2476910833' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:09:02 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:09:02 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:09:02 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:09:02 np0005541914.localdomain podman[316392]: 2025-12-02 10:09:02.389488312 +0000 UTC m=+0.057434697 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0)
Dec 02 10:09:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:09:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:02.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:09:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:02.814 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v333: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 7.9 KiB/s wr, 2 op/s
Dec 02 10:09:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:03.178 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:09:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:03.179 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:09:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:03.179 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:09:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2652316114' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:09:03 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:03.315 2 INFO neutron.agent.securitygroups_rpc [None req-11b93b80-ea7f-4df4-8312-05f04742e794 74c5eb8a019a4e62a5eaf3b3d37efc2b 013c3f934ab54b1a83f18d3dcf154dd0 - - default default] Security group member updated ['b78815c8-0800-4df2-8d06-dc1b5176ba24']
Dec 02 10:09:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:03.412 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 10.100.0.2 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:09:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:03.414 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated
Dec 02 10:09:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:03.417 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:09:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:03.418 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad65aaf-7c38-477d-96d6-ec4d54696013]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:09:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:03.421 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:03Z, description=, device_id=011c4c6c-4d4d-4af9-a901-565e82aa7620, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c31910>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c31190>], id=1952c163-508d-4f30-a86f-38b1231d5724, ip_allocation=immediate, mac_address=fa:16:3e:bb:fb:07, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2109, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:09:03Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:09:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:03.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:09:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:03.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:09:03 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 02 10:09:03 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:09:03 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:09:03 np0005541914.localdomain podman[316429]: 2025-12-02 10:09:03.599497901 +0000 UTC m=+0.041113434 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:09:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:09:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:09:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:09:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:09:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:09:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19213 "" "Go-http-client/1.1"
Dec 02 10:09:04 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:04.053 262347 INFO neutron.agent.dhcp.agent [None req-bce2d02b-99cb-4ef0-89f9-a6fca7912bbb - - - - - -] DHCP configuration for ports {'1952c163-508d-4f30-a86f-38b1231d5724'} is completed
Dec 02 10:09:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "format": "json"}]: dispatch
Dec 02 10:09:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:09:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:09:04 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6' of type subvolume
Dec 02 10:09:04 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:09:04.201+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6' of type subvolume
Dec 02 10:09:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "force": true, "format": "json"}]: dispatch
Dec 02 10:09:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 02 10:09:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6'' moved to trashcan
Dec 02 10:09:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:09:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6, vol_name:cephfs) < ""
Dec 02 10:09:04 np0005541914.localdomain ceph-mon[301710]: pgmap v333: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 7.9 KiB/s wr, 2 op/s
Dec 02 10:09:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:09:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/251576263' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:09:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:09:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/251576263' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:09:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:04.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:09:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:04.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:09:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:04.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:09:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:04.552 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:09:05 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:09:05 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:09:05 np0005541914.localdomain podman[316466]: 2025-12-02 10:09:05.065847628 +0000 UTC m=+0.059318374 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:09:05 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:09:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v334: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 7.2 KiB/s wr, 1 op/s
Dec 02 10:09:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "format": "json"}]: dispatch
Dec 02 10:09:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9dcfd3db-4efe-40f2-a49d-5d58c6cc71e6", "force": true, "format": "json"}]: dispatch
Dec 02 10:09:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/251576263' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:09:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/251576263' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:09:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:05.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:09:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:05.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:09:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:05.546 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:09:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:05.547 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:09:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:05.547 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:09:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:05.547 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:09:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:05.548 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:09:05 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:05.572 2 INFO neutron.agent.securitygroups_rpc [None req-d150629a-bcce-4a38-b00c-70964b564cd8 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:05.735 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:05 np0005541914.localdomain systemd[1]: tmp-crun.ZguiCm.mount: Deactivated successfully.
Dec 02 10:09:06 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:09:06 np0005541914.localdomain podman[316525]: 2025-12-02 10:09:06.002313721 +0000 UTC m=+0.069605819 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:09:06 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:09:06 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:09:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:09:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4130488056' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:09:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:06.050 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:09:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:06.225 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:09:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:06.226 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11531MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:09:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:06.227 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:09:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:06.227 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:09:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:06.307 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:09:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:06.307 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:09:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:06.346 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:09:06 np0005541914.localdomain ceph-mon[301710]: pgmap v334: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 7.2 KiB/s wr, 1 op/s
Dec 02 10:09:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/4130488056' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:09:06 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:06.684 2 INFO neutron.agent.securitygroups_rpc [None req-eff2d2a9-9509-4ec3-933e-196163edb064 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:09:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1335571893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:09:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:06.807 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:09:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:06.813 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:09:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:06.831 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:09:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:06.834 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:09:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:06.835 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:09:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:09:06
Dec 02 10:09:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:09:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:09:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['vms', 'backups', '.mgr', 'volumes', 'manila_metadata', 'manila_data', 'images']
Dec 02 10:09:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:09:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:09:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:09:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:09:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:09:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:09:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v335: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 7.2 KiB/s wr, 1 op/s
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32)
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32)
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 4.307562116136237e-05 of space, bias 4.0, pg target 0.03428819444444444 quantized to 16 (current 16)
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:09:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:09:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:09:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e154 e154: 6 total, 6 up, 6 in
Dec 02 10:09:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1335571893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:09:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/499415689' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:09:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:07.818 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:07.835 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:09:08 np0005541914.localdomain ceph-mon[301710]: pgmap v335: 177 pgs: 177 active+clean; 146 MiB data, 779 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 7.2 KiB/s wr, 1 op/s
Dec 02 10:09:08 np0005541914.localdomain ceph-mon[301710]: osdmap e154: 6 total, 6 up, 6 in
Dec 02 10:09:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3559245647' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:09:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v337: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 14 KiB/s wr, 4 op/s
Dec 02 10:09:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:09.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:09:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:09.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:09:10 np0005541914.localdomain ceph-mon[301710]: pgmap v337: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 14 KiB/s wr, 4 op/s
Dec 02 10:09:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:10.738 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v338: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 14 KiB/s wr, 4 op/s
Dec 02 10:09:11 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:11.261 2 INFO neutron.agent.securitygroups_rpc [None req-d69c9fb8-aada-452c-807d-ffbf23ad4dde b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']
Dec 02 10:09:11 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:11.330 2 INFO neutron.agent.securitygroups_rpc [None req-158ba6e6-ae47-4633-afb7-8fe1fff090db 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:09:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:09:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:09:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:09:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:09:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:09:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:09:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:09:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:09:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:09:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:09:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:09:12 np0005541914.localdomain ceph-mon[301710]: pgmap v338: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 14 KiB/s wr, 4 op/s
Dec 02 10:09:12 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:12.331 2 INFO neutron.agent.securitygroups_rpc [None req-9fd8a609-568a-4b57-8025-f518255ff815 b9c801fe16fd46b78d8c4d5c23cd99c7 50b20ebe68c9494a933fabe997d62528 - - default default] Security group member updated ['0990385a-b99f-41bd-8d17-8e7fb5ec4794']
Dec 02 10:09:12 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:12.452 2 INFO neutron.agent.securitygroups_rpc [None req-4d3ff4f1-7788-4535-9205-e4647a2c3ad1 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:09:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:12.821 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v339: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 8.9 KiB/s wr, 5 op/s
Dec 02 10:09:14 np0005541914.localdomain ceph-mon[301710]: pgmap v339: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 8.9 KiB/s wr, 5 op/s
Dec 02 10:09:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:09:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:09:14 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:09:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:09:15 np0005541914.localdomain podman[316579]: 2025-12-02 10:09:15.085556591 +0000 UTC m=+0.074596974 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:09:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v340: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 8.9 KiB/s wr, 5 op/s
Dec 02 10:09:15 np0005541914.localdomain podman[316579]: 2025-12-02 10:09:15.125117966 +0000 UTC m=+0.114158259 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:09:15 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:09:15 np0005541914.localdomain podman[316573]: 2025-12-02 10:09:15.141552511 +0000 UTC m=+0.132033139 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 10:09:15 np0005541914.localdomain podman[316573]: 2025-12-02 10:09:15.156936063 +0000 UTC m=+0.147416671 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 02 10:09:15 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:09:15 np0005541914.localdomain podman[316572]: 2025-12-02 10:09:15.253932124 +0000 UTC m=+0.248687272 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:09:15 np0005541914.localdomain podman[316572]: 2025-12-02 10:09:15.287064782 +0000 UTC m=+0.281819930 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:09:15 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:09:15 np0005541914.localdomain podman[316571]: 2025-12-02 10:09:15.301889618 +0000 UTC m=+0.300539396 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 02 10:09:15 np0005541914.localdomain podman[316571]: 2025-12-02 10:09:15.309986356 +0000 UTC m=+0.308636164 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 02 10:09:15 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:09:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:15.791 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 e155: 6 total, 6 up, 6 in
Dec 02 10:09:16 np0005541914.localdomain ceph-mon[301710]: pgmap v340: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 8.9 KiB/s wr, 5 op/s
Dec 02 10:09:16 np0005541914.localdomain ceph-mon[301710]: osdmap e155: 6 total, 6 up, 6 in
Dec 02 10:09:16 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:16.738 262347 INFO neutron.agent.linux.ip_lib [None req-79eb6d67-53bd-4790-9715-8c98e8b30979 - - - - - -] Device tap8d7aba05-5e cannot be used as it has no MAC address
Dec 02 10:09:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:16.760 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:16 np0005541914.localdomain kernel: device tap8d7aba05-5e entered promiscuous mode
Dec 02 10:09:16 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670156.7698] manager: (tap8d7aba05-5e): new Generic device (/org/freedesktop/NetworkManager/Devices/31)
Dec 02 10:09:16 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:16Z|00142|binding|INFO|Claiming lport 8d7aba05-5eab-44a1-aacc-c2b62f525db1 for this chassis.
Dec 02 10:09:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:16.769 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:16 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:16Z|00143|binding|INFO|8d7aba05-5eab-44a1-aacc-c2b62f525db1: Claiming unknown
Dec 02 10:09:16 np0005541914.localdomain systemd-udevd[316666]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:09:16 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:16.782 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-669f1b01-3857-4ea6-8083-25e0b2ce70bc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-669f1b01-3857-4ea6-8083-25e0b2ce70bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '873db74a4a7a4aad823d1b7e8b2d6c26', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d91ccfa3-a134-4f2e-be7a-020d064cc147, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=8d7aba05-5eab-44a1-aacc-c2b62f525db1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:09:16 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:16.784 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 8d7aba05-5eab-44a1-aacc-c2b62f525db1 in datapath 669f1b01-3857-4ea6-8083-25e0b2ce70bc bound to our chassis
Dec 02 10:09:16 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:16.785 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 669f1b01-3857-4ea6-8083-25e0b2ce70bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:09:16 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:16.786 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[77643317-c545-47b8-970d-72a95d765161]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:09:16 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:16Z|00144|binding|INFO|Setting lport 8d7aba05-5eab-44a1-aacc-c2b62f525db1 ovn-installed in OVS
Dec 02 10:09:16 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:16Z|00145|binding|INFO|Setting lport 8d7aba05-5eab-44a1-aacc-c2b62f525db1 up in Southbound
Dec 02 10:09:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:16.808 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:16 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 02 10:09:16 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 02 10:09:16 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 02 10:09:16 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 02 10:09:16 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 02 10:09:16 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 02 10:09:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:16.844 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:16 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 02 10:09:16 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap8d7aba05-5e: No such device
Dec 02 10:09:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:16.875 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:17 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:17.031 262347 INFO neutron.agent.linux.ip_lib [None req-a65c67f7-0dca-4ef5-9c28-9edf2bb2ef9b - - - - - -] Device tap21d38e5b-83 cannot be used as it has no MAC address
Dec 02 10:09:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:17.056 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:17 np0005541914.localdomain kernel: device tap21d38e5b-83 entered promiscuous mode
Dec 02 10:09:17 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670157.0616] manager: (tap21d38e5b-83): new Generic device (/org/freedesktop/NetworkManager/Devices/32)
Dec 02 10:09:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:17.061 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:17 np0005541914.localdomain systemd-udevd[316668]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:09:17 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:17Z|00146|binding|INFO|Claiming lport 21d38e5b-83d6-443b-a9e2-4f6016ed9773 for this chassis.
Dec 02 10:09:17 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:17Z|00147|binding|INFO|21d38e5b-83d6-443b-a9e2-4f6016ed9773: Claiming unknown
Dec 02 10:09:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:17.081 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:17 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:17Z|00148|binding|INFO|Setting lport 21d38e5b-83d6-443b-a9e2-4f6016ed9773 ovn-installed in OVS
Dec 02 10:09:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:17.084 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v342: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s rd, 9.4 KiB/s wr, 5 op/s
Dec 02 10:09:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:17.178 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:17.218 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:17.245 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:09:17 np0005541914.localdomain podman[316775]: 
Dec 02 10:09:17 np0005541914.localdomain podman[316775]: 2025-12-02 10:09:17.697168056 +0000 UTC m=+0.053744622 container create 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 02 10:09:17 np0005541914.localdomain systemd[1]: Started libpod-conmon-093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed.scope.
Dec 02 10:09:17 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:09:17 np0005541914.localdomain podman[316775]: 2025-12-02 10:09:17.673999095 +0000 UTC m=+0.030575681 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:09:17 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a3d1465d9dc770bb3a503daf7256c2208118b497345b7f792f0cc2082142ec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:09:17 np0005541914.localdomain podman[316775]: 2025-12-02 10:09:17.785158819 +0000 UTC m=+0.141735395 container init 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 02 10:09:17 np0005541914.localdomain podman[316775]: 2025-12-02 10:09:17.794223688 +0000 UTC m=+0.150800264 container start 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 02 10:09:17 np0005541914.localdomain dnsmasq[316793]: started, version 2.85 cachesize 150
Dec 02 10:09:17 np0005541914.localdomain dnsmasq[316793]: DNS service limited to local subnets
Dec 02 10:09:17 np0005541914.localdomain dnsmasq[316793]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:09:17 np0005541914.localdomain dnsmasq[316793]: warning: no upstream servers configured
Dec 02 10:09:17 np0005541914.localdomain dnsmasq-dhcp[316793]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 02 10:09:17 np0005541914.localdomain dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 0 addresses
Dec 02 10:09:17 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 02 10:09:17 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 02 10:09:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:17.824 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:18 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:18.014 2 INFO neutron.agent.securitygroups_rpc [None req-1b7dd085-a5c1-4a81-bd02-4cabc7845a6f f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']
Dec 02 10:09:18 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:18Z|00149|binding|INFO|Setting lport 21d38e5b-83d6-443b-a9e2-4f6016ed9773 up in Southbound
Dec 02 10:09:18 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:18.263 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-1d564796-ac76-41ff-8a1e-6fbd19d356c5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d564796-ac76-41ff-8a1e-6fbd19d356c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '204a1137a20e40c995bb9cd512e75a5c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45beaf16-b07c-44e8-bec5-71e8573d4df7, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=21d38e5b-83d6-443b-a9e2-4f6016ed9773) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:09:18 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:18.265 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 21d38e5b-83d6-443b-a9e2-4f6016ed9773 in datapath 1d564796-ac76-41ff-8a1e-6fbd19d356c5 bound to our chassis
Dec 02 10:09:18 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:18.267 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 1d564796-ac76-41ff-8a1e-6fbd19d356c5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:09:18 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:18.267 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[67afd2e2-135e-4642-9917-d3dbea130796]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:09:18 np0005541914.localdomain ceph-mon[301710]: pgmap v342: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.5 KiB/s rd, 9.4 KiB/s wr, 5 op/s
Dec 02 10:09:18 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:18.985 262347 INFO neutron.agent.dhcp.agent [None req-0d750567-274c-4482-bf1b-de6317782f28 - - - - - -] DHCP configuration for ports {'bc29dfa6-ee29-4c8e-8916-3c71b290ac02'} is completed
Dec 02 10:09:18 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:18.987 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:16Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034af74f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034aec460>], id=df3b3a5c-94ef-40df-81fb-64a96ab7af31, ip_allocation=immediate, mac_address=fa:16:3e:ad:98:42, name=tempest-AllowedAddressPairIpV6TestJSON-2067363510, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:13Z, description=, dns_domain=, id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1035488035, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12314, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2159, status=ACTIVE, subnets=['c6868224-de5b-425b-bf13-d943ff06e669'], tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6'], standard_attr_id=2177, status=DOWN, tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:17Z on network 669f1b01-3857-4ea6-8083-25e0b2ce70bc
Dec 02 10:09:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v343: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 2.6 KiB/s wr, 37 op/s
Dec 02 10:09:19 np0005541914.localdomain systemd[1]: tmp-crun.v2soeA.mount: Deactivated successfully.
Dec 02 10:09:19 np0005541914.localdomain dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 1 addresses
Dec 02 10:09:19 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 02 10:09:19 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 02 10:09:19 np0005541914.localdomain podman[316813]: 2025-12-02 10:09:19.180959858 +0000 UTC m=+0.066784753 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 02 10:09:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4204336087' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:09:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4204336087' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:09:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2213891855' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:09:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2213891855' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:09:19 np0005541914.localdomain podman[316857]: 
Dec 02 10:09:19 np0005541914.localdomain podman[316857]: 2025-12-02 10:09:19.400791752 +0000 UTC m=+0.076567793 container create 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 02 10:09:19 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:19.425 2 INFO neutron.agent.securitygroups_rpc [None req-278e88f3-562a-4f7b-8f10-c3e2bfd4ee2e 8b49e5c866794aad866d55bb5f154d67 7dffef2e74844a7ebb6ee68826fb7e57 - - default default] Security group member updated ['32471057-4d02-424a-9e3e-19629ab1677d']
Dec 02 10:09:19 np0005541914.localdomain systemd[1]: Started libpod-conmon-5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83.scope.
Dec 02 10:09:19 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:19.444 262347 INFO neutron.agent.dhcp.agent [None req-2d29b2b2-88a6-434e-bf1f-1144e3e682fc - - - - - -] DHCP configuration for ports {'df3b3a5c-94ef-40df-81fb-64a96ab7af31'} is completed
Dec 02 10:09:19 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:09:19 np0005541914.localdomain podman[316857]: 2025-12-02 10:09:19.360493855 +0000 UTC m=+0.036269896 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:09:19 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de1d49012998774e6961c351ee30dc6327212e570cd42de9146733d20d3604cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:09:19 np0005541914.localdomain podman[316857]: 2025-12-02 10:09:19.471615418 +0000 UTC m=+0.147391459 container init 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:09:19 np0005541914.localdomain podman[316857]: 2025-12-02 10:09:19.481679538 +0000 UTC m=+0.157455579 container start 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:09:19 np0005541914.localdomain dnsmasq[316876]: started, version 2.85 cachesize 150
Dec 02 10:09:19 np0005541914.localdomain dnsmasq[316876]: DNS service limited to local subnets
Dec 02 10:09:19 np0005541914.localdomain dnsmasq[316876]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:09:19 np0005541914.localdomain dnsmasq[316876]: warning: no upstream servers configured
Dec 02 10:09:19 np0005541914.localdomain dnsmasq-dhcp[316876]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 02 10:09:19 np0005541914.localdomain dnsmasq[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/addn_hosts - 0 addresses
Dec 02 10:09:19 np0005541914.localdomain dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/host
Dec 02 10:09:19 np0005541914.localdomain dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/opts
Dec 02 10:09:19 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:19.601 262347 INFO neutron.agent.dhcp.agent [None req-7bdc6919-1204-438f-8683-d92b4f509fd2 - - - - - -] DHCP configuration for ports {'d0ba16e3-7d3e-48b5-87da-d48deb1e0c57'} is completed
Dec 02 10:09:19 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:19.739 2 INFO neutron.agent.securitygroups_rpc [None req-e9fc3440-8683-40fd-946b-446e84f960a4 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:20 np0005541914.localdomain ceph-mon[301710]: pgmap v343: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 2.6 KiB/s wr, 37 op/s
Dec 02 10:09:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:20.792 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v344: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 2.6 KiB/s wr, 37 op/s
Dec 02 10:09:21 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:21.179 2 INFO neutron.agent.securitygroups_rpc [None req-8371118d-5c83-45c5-bfa7-f542b4f1df3f 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:21 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1785879792' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:09:21 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1785879792' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:09:21 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:21.512 2 INFO neutron.agent.securitygroups_rpc [None req-10f867de-2584-4ed7-a0e8-fb9276ac33a8 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']
Dec 02 10:09:21 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:09:21 np0005541914.localdomain podman[316894]: 2025-12-02 10:09:21.544733308 +0000 UTC m=+0.052680280 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:09:21 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:09:21 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:09:21 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:21.596 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c4af70>], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:20Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c4ac70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c4a6d0>], id=f24d2970-a1cd-433b-82f0-ff8977158681, ip_allocation=immediate, mac_address=fa:16:3e:26:7b:6b, name=tempest-AllowedAddressPairIpV6TestJSON-328736239, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:13Z, description=, dns_domain=, id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1035488035, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12314, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2159, status=ACTIVE, subnets=['c6868224-de5b-425b-bf13-d943ff06e669'], tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6'], standard_attr_id=2182, status=DOWN, tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:20Z on network 669f1b01-3857-4ea6-8083-25e0b2ce70bc
Dec 02 10:09:21 np0005541914.localdomain podman[316932]: 2025-12-02 10:09:21.753543804 +0000 UTC m=+0.043070795 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 02 10:09:21 np0005541914.localdomain dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 2 addresses
Dec 02 10:09:21 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 02 10:09:21 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 02 10:09:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:09:21 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/103747809' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:09:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:09:21 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/103747809' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:09:22 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:22.031 262347 INFO neutron.agent.dhcp.agent [None req-a364e023-3b56-4b4f-94dd-a490e36fe97e - - - - - -] DHCP configuration for ports {'f24d2970-a1cd-433b-82f0-ff8977158681'} is completed
Dec 02 10:09:22 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:22.202 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:21Z, description=, device_id=f27b9429-6ac7-4a48-8a17-a61fa778ae6e, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c31c70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c31250>], id=65e2923a-c07d-4b60-8e34-1e64cbdb4494, ip_allocation=immediate, mac_address=fa:16:3e:7f:f9:37, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:12Z, description=, dns_domain=, id=1d564796-ac76-41ff-8a1e-6fbd19d356c5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1535011901, port_security_enabled=True, project_id=204a1137a20e40c995bb9cd512e75a5c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16351, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2149, status=ACTIVE, subnets=['29255b27-c554-4680-ad67-e29997db9d5a'], tags=[], tenant_id=204a1137a20e40c995bb9cd512e75a5c, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=1d564796-ac76-41ff-8a1e-6fbd19d356c5, port_security_enabled=False, project_id=204a1137a20e40c995bb9cd512e75a5c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2191, status=DOWN, tags=[], tenant_id=204a1137a20e40c995bb9cd512e75a5c, updated_at=2025-12-02T10:09:21Z on network 1d564796-ac76-41ff-8a1e-6fbd19d356c5
Dec 02 10:09:22 np0005541914.localdomain ceph-mon[301710]: pgmap v344: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 2.6 KiB/s wr, 37 op/s
Dec 02 10:09:22 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/103747809' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:09:22 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/103747809' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:09:22 np0005541914.localdomain dnsmasq[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/addn_hosts - 1 addresses
Dec 02 10:09:22 np0005541914.localdomain dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/host
Dec 02 10:09:22 np0005541914.localdomain dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/opts
Dec 02 10:09:22 np0005541914.localdomain podman[316970]: 2025-12-02 10:09:22.444602759 +0000 UTC m=+0.073051247 container kill 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 02 10:09:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:09:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:09:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:09:22 np0005541914.localdomain podman[316986]: 2025-12-02 10:09:22.539687459 +0000 UTC m=+0.068461004 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc.)
Dec 02 10:09:22 np0005541914.localdomain systemd[1]: tmp-crun.phhLTX.mount: Deactivated successfully.
Dec 02 10:09:22 np0005541914.localdomain podman[316985]: 2025-12-02 10:09:22.585819407 +0000 UTC m=+0.114745746 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:09:22 np0005541914.localdomain podman[316985]: 2025-12-02 10:09:22.599746495 +0000 UTC m=+0.128672854 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:09:22 np0005541914.localdomain podman[316986]: 2025-12-02 10:09:22.610812175 +0000 UTC m=+0.139585680 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6)
Dec 02 10:09:22 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:09:22 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
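[annotation] The transient *.service units above are container healthchecks: systemd invokes "podman healthcheck run <container-id>", podman records health_status=healthy and an exec_died event for the probe process, and the unit then deactivates. A minimal sketch for repeating the check by hand (container ID taken from the log; the inspect field name varies between podman releases, so treat it as an assumption):

    podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6
    podman inspect --format '{{.State.Health.Status}}' 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6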
Dec 02 10:09:22 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:22.679 262347 INFO neutron.agent.dhcp.agent [None req-3439f936-72de-446d-b723-e4be82544543 - - - - - -] DHCP configuration for ports {'65e2923a-c07d-4b60-8e34-1e64cbdb4494'} is completed
Dec 02 10:09:22 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:22.827 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:23 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:23.060 2 INFO neutron.agent.securitygroups_rpc [None req-7bc34f63-f96a-4396-b70c-07601d07dee2 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']
Dec 02 10:09:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v345: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 2.0 KiB/s wr, 66 op/s
Dec 02 10:09:23 np0005541914.localdomain systemd[1]: tmp-crun.IVs1lO.mount: Deactivated successfully.
Dec 02 10:09:23 np0005541914.localdomain dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 1 addresses
Dec 02 10:09:23 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 02 10:09:23 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 02 10:09:23 np0005541914.localdomain podman[317050]: 2025-12-02 10:09:23.30057538 +0000 UTC m=+0.045415527 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:09:23 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:23.773 2 INFO neutron.agent.securitygroups_rpc [None req-ef3a9568-c379-4b2a-a06d-b347ad68d0c7 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:23 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:23.834 2 INFO neutron.agent.securitygroups_rpc [None req-f87c0fb9-70c4-4316-8fe5-2d1d482ef952 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']
Dec 02 10:09:23 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:23.876 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:23Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c17970>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c17dc0>], id=e381631f-20a2-4180-bd13-c2a6a1575aa1, ip_allocation=immediate, mac_address=fa:16:3e:9c:77:0d, name=tempest-AllowedAddressPairIpV6TestJSON-1257736356, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:13Z, description=, dns_domain=, id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1035488035, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12314, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2159, status=ACTIVE, subnets=['c6868224-de5b-425b-bf13-d943ff06e669'], tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6'], standard_attr_id=2202, status=DOWN, tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:23Z on network 669f1b01-3857-4ea6-8083-25e0b2ce70bc
Dec 02 10:09:24 np0005541914.localdomain podman[317090]: 2025-12-02 10:09:24.039124313 +0000 UTC m=+0.053422262 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:09:24 np0005541914.localdomain dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 2 addresses
Dec 02 10:09:24 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 02 10:09:24 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
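[annotation] The podman "container kill" events in this section do not stop the dnsmasq containers: the DHCP agent rewrites the host/addn_hosts/opts files and then signals the running dnsmasq so it re-reads them, which is why each kill event is immediately followed by a fresh set of "read /var/lib/neutron/dhcp/..." lines. A minimal sketch of the same signal sent by hand (container name from the log; the exact signal is not shown in the log, but SIGHUP is what dnsmasq documents for re-reading --dhcp-hostsfile, --addn-hosts and --dhcp-optsfile):

    podman kill --signal HUP neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc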
Dec 02 10:09:24 np0005541914.localdomain systemd[1]: tmp-crun.TsgDIv.mount: Deactivated successfully.
Dec 02 10:09:24 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:24.259 262347 INFO neutron.agent.dhcp.agent [None req-42c91359-43e3-4299-b62e-ab632e3bddac - - - - - -] DHCP configuration for ports {'e381631f-20a2-4180-bd13-c2a6a1575aa1'} is completed
Dec 02 10:09:24 np0005541914.localdomain ceph-mon[301710]: pgmap v345: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 2.0 KiB/s wr, 66 op/s
Dec 02 10:09:24 np0005541914.localdomain sudo[317110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:09:24 np0005541914.localdomain sudo[317110]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:09:24 np0005541914.localdomain sudo[317110]: pam_unix(sudo:session): session closed for user root
Dec 02 10:09:24 np0005541914.localdomain sudo[317128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:09:24 np0005541914.localdomain sudo[317128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:09:24 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:24.982 2 INFO neutron.agent.securitygroups_rpc [None req-b3aa5b43-46a5-4652-aa07-2f62355aecf1 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:25 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:25.013 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:21Z, description=, device_id=f27b9429-6ac7-4a48-8a17-a61fa778ae6e, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c3a520>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c3a310>], id=65e2923a-c07d-4b60-8e34-1e64cbdb4494, ip_allocation=immediate, mac_address=fa:16:3e:7f:f9:37, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:12Z, description=, dns_domain=, id=1d564796-ac76-41ff-8a1e-6fbd19d356c5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1535011901, port_security_enabled=True, project_id=204a1137a20e40c995bb9cd512e75a5c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16351, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2149, status=ACTIVE, subnets=['29255b27-c554-4680-ad67-e29997db9d5a'], tags=[], tenant_id=204a1137a20e40c995bb9cd512e75a5c, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=1d564796-ac76-41ff-8a1e-6fbd19d356c5, port_security_enabled=False, project_id=204a1137a20e40c995bb9cd512e75a5c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2191, status=DOWN, tags=[], tenant_id=204a1137a20e40c995bb9cd512e75a5c, updated_at=2025-12-02T10:09:21Z on network 1d564796-ac76-41ff-8a1e-6fbd19d356c5
Dec 02 10:09:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v346: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 2.0 KiB/s wr, 66 op/s
Dec 02 10:09:25 np0005541914.localdomain sudo[317128]: pam_unix(sudo:session): session closed for user root
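[annotation] The sudo sessions above are the cephadm orchestrator, logged in as ceph-admin, running its periodic host inventory; the wrapped binary is the cephadm copy cached under /var/lib/ceph/<fsid>/. A minimal sketch of the same check run locally (the subcommand is standard cephadm; the hashed filename in the log is specific to this deployment):

    sudo cephadm gather-facts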
Dec 02 10:09:25 np0005541914.localdomain systemd[1]: tmp-crun.bqGfNW.mount: Deactivated successfully.
Dec 02 10:09:25 np0005541914.localdomain dnsmasq[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/addn_hosts - 1 addresses
Dec 02 10:09:25 np0005541914.localdomain dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/host
Dec 02 10:09:25 np0005541914.localdomain dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/opts
Dec 02 10:09:25 np0005541914.localdomain podman[317192]: 2025-12-02 10:09:25.259254473 +0000 UTC m=+0.063719738 container kill 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:09:25 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:09:25 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:09:25 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:09:25 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:09:25 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:09:25 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 6ff752fe-a7eb-4423-9e3b-217b2b5a72b8 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:09:25 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 6ff752fe-a7eb-4423-9e3b-217b2b5a72b8 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:09:25 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 6ff752fe-a7eb-4423-9e3b-217b2b5a72b8 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:09:25 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:09:25 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:09:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:09:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:09:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:09:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:09:25 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:25.511 262347 INFO neutron.agent.dhcp.agent [None req-320a362b-79a2-4930-bdff-79a7dbd98438 - - - - - -] DHCP configuration for ports {'65e2923a-c07d-4b60-8e34-1e64cbdb4494'} is completed
Dec 02 10:09:25 np0005541914.localdomain sudo[317213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:09:25 np0005541914.localdomain sudo[317213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:09:25 np0005541914.localdomain sudo[317213]: pam_unix(sudo:session): session closed for user root
Dec 02 10:09:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:25.794 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:09:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:26 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:26.131 2 INFO neutron.agent.securitygroups_rpc [None req-28c3366c-1c91-49f5-b694-3c934cb049e5 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']
Dec 02 10:09:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/.meta.tmp'
Dec 02 10:09:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/.meta.tmp' to config b'/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/.meta'
Dec 02 10:09:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "format": "json"}]: dispatch
Dec 02 10:09:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
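[annotation] The volumes-module calls above are an OpenStack Manila share being created on CephFS: client.openstack asks the mgr for a new subvolume and then for its export path. A minimal sketch of the equivalent CLI, with names and size taken from the log:

    ceph fs subvolume create cephfs 4951f94c-f3a4-4170-9869-8238a9dc7b72 --size 1073741824 --namespace-isolated --mode 0755
    ceph fs subvolume getpath cephfs 4951f94c-f3a4-4170-9869-8238a9dc7b72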
Dec 02 10:09:26 np0005541914.localdomain dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 1 addresses
Dec 02 10:09:26 np0005541914.localdomain podman[317249]: 2025-12-02 10:09:26.344375586 +0000 UTC m=+0.048719028 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:09:26 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 02 10:09:26 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 02 10:09:26 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:26.379 2 INFO neutron.agent.securitygroups_rpc [None req-ec4a6adc-d68b-4418-9c41-c326e9a3fc34 49e91c7702d54b1ab47e5f6dec5e0208 204a1137a20e40c995bb9cd512e75a5c - - default default] Security group member updated ['53fe5435-6101-4ff1-81ad-b53da833172b']
Dec 02 10:09:26 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:26.414 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:25Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034b05f40>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034b05730>], id=c8669427-008b-4e2f-86c2-feca028efb63, ip_allocation=immediate, mac_address=fa:16:3e:cf:64:6b, name=tempest-FloatingIPNegativeTestJSON-119382670, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:12Z, description=, dns_domain=, id=1d564796-ac76-41ff-8a1e-6fbd19d356c5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1535011901, port_security_enabled=True, project_id=204a1137a20e40c995bb9cd512e75a5c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=16351, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2149, status=ACTIVE, subnets=['29255b27-c554-4680-ad67-e29997db9d5a'], tags=[], tenant_id=204a1137a20e40c995bb9cd512e75a5c, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=1d564796-ac76-41ff-8a1e-6fbd19d356c5, port_security_enabled=True, project_id=204a1137a20e40c995bb9cd512e75a5c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['53fe5435-6101-4ff1-81ad-b53da833172b'], standard_attr_id=2208, status=DOWN, tags=[], tenant_id=204a1137a20e40c995bb9cd512e75a5c, updated_at=2025-12-02T10:09:26Z on network 1d564796-ac76-41ff-8a1e-6fbd19d356c5
Dec 02 10:09:26 np0005541914.localdomain ceph-mon[301710]: pgmap v346: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 2.0 KiB/s wr, 66 op/s
Dec 02 10:09:26 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:09:26 np0005541914.localdomain podman[317287]: 2025-12-02 10:09:26.647355026 +0000 UTC m=+0.059505360 container kill 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 02 10:09:26 np0005541914.localdomain dnsmasq[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/addn_hosts - 2 addresses
Dec 02 10:09:26 np0005541914.localdomain dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/host
Dec 02 10:09:26 np0005541914.localdomain dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/opts
Dec 02 10:09:27 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:27.038 262347 INFO neutron.agent.dhcp.agent [None req-6062ca53-8e6b-4d90-9e74-cc59a3f0a59c - - - - - -] DHCP configuration for ports {'c8669427-008b-4e2f-86c2-feca028efb63'} is completed
Dec 02 10:09:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v347: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 1.8 KiB/s wr, 61 op/s
Dec 02 10:09:27 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:09:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:09:27 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:27.367 2 INFO neutron.agent.securitygroups_rpc [None req-a7f54859-efcd-4ecf-b40a-33f0bd3f4545 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']
Dec 02 10:09:27 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:09:27 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "format": "json"}]: dispatch
Dec 02 10:09:27 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:09:27 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:27.443 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:26Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034afe250>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034afeeb0>], id=dccd0b7e-05df-4bac-ab13-aec6a2f86fc9, ip_allocation=immediate, mac_address=fa:16:3e:db:d1:63, name=tempest-AllowedAddressPairIpV6TestJSON-1534878287, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:13Z, description=, dns_domain=, id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1035488035, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12314, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2159, status=ACTIVE, subnets=['c6868224-de5b-425b-bf13-d943ff06e669'], tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6'], standard_attr_id=2215, status=DOWN, tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:26Z on network 669f1b01-3857-4ea6-8083-25e0b2ce70bc
Dec 02 10:09:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:09:27 np0005541914.localdomain dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 2 addresses
Dec 02 10:09:27 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 02 10:09:27 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:27.616 2 INFO neutron.agent.securitygroups_rpc [None req-d5002157-4534-49a0-a135-4c64a8485ed7 8b49e5c866794aad866d55bb5f154d67 7dffef2e74844a7ebb6ee68826fb7e57 - - default default] Security group member updated ['32471057-4d02-424a-9e3e-19629ab1677d']
Dec 02 10:09:27 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 02 10:09:27 np0005541914.localdomain podman[317326]: 2025-12-02 10:09:27.616483284 +0000 UTC m=+0.050576925 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:09:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:27.829 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:27 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:27.896 262347 INFO neutron.agent.dhcp.agent [None req-0e50c9c6-ba00-4350-93a5-583ee1d66e85 - - - - - -] DHCP configuration for ports {'dccd0b7e-05df-4bac-ab13-aec6a2f86fc9'} is completed
Dec 02 10:09:28 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:28.196 2 INFO neutron.agent.securitygroups_rpc [None req-31d4af97-7fb6-4706-a5b2-299b30ee98fa 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v348: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 4.7 KiB/s wr, 56 op/s
Dec 02 10:09:29 np0005541914.localdomain ceph-mon[301710]: pgmap v347: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 1.8 KiB/s wr, 61 op/s
Dec 02 10:09:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:09:29 np0005541914.localdomain podman[317348]: 2025-12-02 10:09:29.359501371 +0000 UTC m=+0.079707511 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:09:29 np0005541914.localdomain podman[317348]: 2025-12-02 10:09:29.374921524 +0000 UTC m=+0.095127674 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 02 10:09:29 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:09:30 np0005541914.localdomain ceph-mon[301710]: pgmap v348: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 4.7 KiB/s wr, 56 op/s
Dec 02 10:09:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:09:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:09:30 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:09:30 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:09:30 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:09:30 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:09:30 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:09:30 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:30.732 2 INFO neutron.agent.securitygroups_rpc [None req-b3ef0962-bb50-4849-b9de-83492a397177 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']
Dec 02 10:09:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:30.797 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:30 np0005541914.localdomain dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 1 addresses
Dec 02 10:09:30 np0005541914.localdomain podman[317384]: 2025-12-02 10:09:30.931573076 +0000 UTC m=+0.061173031 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:09:30 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 02 10:09:30 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 02 10:09:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v349: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 3.6 KiB/s wr, 27 op/s
Dec 02 10:09:31 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:31.262 2 INFO neutron.agent.securitygroups_rpc [None req-b62fea3d-778e-4171-9633-628f1b789028 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:31 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:09:31 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:09:31 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:31 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:31 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:09:32 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:32.469 2 INFO neutron.agent.securitygroups_rpc [None req-1d81950f-2cd1-4171-b1b7-8ccf81612998 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']
Dec 02 10:09:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:09:32 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:32.507 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:31Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a8d1f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a8d100>], id=d1cb1123-b6c2-4482-8391-36bb1cb58ab8, ip_allocation=immediate, mac_address=fa:16:3e:18:fb:83, name=tempest-AllowedAddressPairIpV6TestJSON-890555371, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:13Z, description=, dns_domain=, id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1035488035, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12314, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2159, status=ACTIVE, subnets=['c6868224-de5b-425b-bf13-d943ff06e669'], tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6'], standard_attr_id=2232, status=DOWN, tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:31Z on network 669f1b01-3857-4ea6-8083-25e0b2ce70bc
Dec 02 10:09:32 np0005541914.localdomain ceph-mon[301710]: pgmap v349: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 3.6 KiB/s wr, 27 op/s
Dec 02 10:09:32 np0005541914.localdomain dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 2 addresses
Dec 02 10:09:32 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 02 10:09:32 np0005541914.localdomain podman[317424]: 2025-12-02 10:09:32.696144345 +0000 UTC m=+0.054738973 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:09:32 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 02 10:09:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:32.831 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:33 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:33.079 262347 INFO neutron.agent.dhcp.agent [None req-f2ac2ad4-5582-4713-9d09-3251bba1f736 - - - - - -] DHCP configuration for ports {'d1cb1123-b6c2-4482-8391-36bb1cb58ab8'} is completed
Dec 02 10:09:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v350: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 8.7 KiB/s wr, 29 op/s
Dec 02 10:09:33 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:33.506 2 INFO neutron.agent.securitygroups_rpc [None req-2ab86868-457f-4852-a90e-5fcf962a86b2 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:09:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:09:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:09:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:09:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 160387 "" "Go-http-client/1.1"
Dec 02 10:09:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:09:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20166 "" "Go-http-client/1.1"
Dec 02 10:09:33 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:33.706 2 INFO neutron.agent.securitygroups_rpc [None req-4574e29d-6803-42ec-b043-afe3e9e41c81 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']
Dec 02 10:09:33 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:33.748 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:33Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c4ac70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c4a700>], id=ac73148c-331f-4c9c-ac3a-daf3156fc54d, ip_allocation=immediate, mac_address=fa:16:3e:94:0e:a8, name=tempest-AllowedAddressPairIpV6TestJSON-603020251, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:13Z, description=, dns_domain=, id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairIpV6TestJSON-test-network-1035488035, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=12314, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2159, status=ACTIVE, subnets=['c6868224-de5b-425b-bf13-d943ff06e669'], tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:15Z, vlan_transparent=None, network_id=669f1b01-3857-4ea6-8083-25e0b2ce70bc, port_security_enabled=True, project_id=873db74a4a7a4aad823d1b7e8b2d6c26, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6'], standard_attr_id=2236, status=DOWN, tags=[], tenant_id=873db74a4a7a4aad823d1b7e8b2d6c26, updated_at=2025-12-02T10:09:33Z on network 669f1b01-3857-4ea6-8083-25e0b2ce70bc
Dec 02 10:09:33 np0005541914.localdomain dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 3 addresses
Dec 02 10:09:33 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 02 10:09:33 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 02 10:09:33 np0005541914.localdomain podman[317463]: 2025-12-02 10:09:33.912958733 +0000 UTC m=+0.046818918 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 02 10:09:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:09:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:09:34 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:09:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 02 10:09:34 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:09:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:09:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:09:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:09:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:34 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:34.127 262347 INFO neutron.agent.dhcp.agent [None req-0e400471-fa0f-44a4-a4da-ef84dc52ed18 - - - - - -] DHCP configuration for ports {'ac73148c-331f-4c9c-ac3a-daf3156fc54d'} is completed
Dec 02 10:09:34 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:34.167 2 INFO neutron.agent.securitygroups_rpc [None req-808059d3-8bd0-4321-909f-628d45d51793 49e91c7702d54b1ab47e5f6dec5e0208 204a1137a20e40c995bb9cd512e75a5c - - default default] Security group member updated ['53fe5435-6101-4ff1-81ad-b53da833172b']
Dec 02 10:09:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:34.222 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:34.221 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:09:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:34.223 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:09:34 np0005541914.localdomain dnsmasq[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/addn_hosts - 1 addresses
Dec 02 10:09:34 np0005541914.localdomain podman[317504]: 2025-12-02 10:09:34.415638039 +0000 UTC m=+0.043849478 container kill 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 02 10:09:34 np0005541914.localdomain dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/host
Dec 02 10:09:34 np0005541914.localdomain dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/opts
Dec 02 10:09:34 np0005541914.localdomain ceph-mon[301710]: pgmap v350: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 8.7 KiB/s wr, 29 op/s
Dec 02 10:09:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:09:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:09:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:09:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 02 10:09:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v351: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 8.2 KiB/s wr, 2 op/s
Dec 02 10:09:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:35.225 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:09:35 np0005541914.localdomain dnsmasq[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/addn_hosts - 0 addresses
Dec 02 10:09:35 np0005541914.localdomain dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/host
Dec 02 10:09:35 np0005541914.localdomain podman[317542]: 2025-12-02 10:09:35.520548409 +0000 UTC m=+0.048253043 container kill 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:09:35 np0005541914.localdomain dnsmasq-dhcp[316876]: read /var/lib/neutron/dhcp/1d564796-ac76-41ff-8a1e-6fbd19d356c5/opts
Dec 02 10:09:35 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:35.546 2 INFO neutron.agent.securitygroups_rpc [None req-3d05d8f5-1d82-449d-b4e5-f5f672622e53 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']
Dec 02 10:09:35 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:09:35 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:09:35 np0005541914.localdomain ceph-mon[301710]: pgmap v351: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 8.2 KiB/s wr, 2 op/s
Dec 02 10:09:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:35.676 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:35 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:35Z|00150|binding|INFO|Releasing lport 21d38e5b-83d6-443b-a9e2-4f6016ed9773 from this chassis (sb_readonly=0)
Dec 02 10:09:35 np0005541914.localdomain kernel: device tap21d38e5b-83 left promiscuous mode
Dec 02 10:09:35 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:35Z|00151|binding|INFO|Setting lport 21d38e5b-83d6-443b-a9e2-4f6016ed9773 down in Southbound
Dec 02 10:09:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:35.684 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-1d564796-ac76-41ff-8a1e-6fbd19d356c5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1d564796-ac76-41ff-8a1e-6fbd19d356c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '204a1137a20e40c995bb9cd512e75a5c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45beaf16-b07c-44e8-bec5-71e8573d4df7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=21d38e5b-83d6-443b-a9e2-4f6016ed9773) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:09:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:35.686 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 21d38e5b-83d6-443b-a9e2-4f6016ed9773 in datapath 1d564796-ac76-41ff-8a1e-6fbd19d356c5 unbound from our chassis
Dec 02 10:09:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:35.688 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1d564796-ac76-41ff-8a1e-6fbd19d356c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:09:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:35.690 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4dde967a-eae1-4cf4-b142-e3df74087533]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:09:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:35.699 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:35 np0005541914.localdomain dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 2 addresses
Dec 02 10:09:35 np0005541914.localdomain podman[317583]: 2025-12-02 10:09:35.784397876 +0000 UTC m=+0.045274311 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:09:35 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 02 10:09:35 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 02 10:09:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:35.799 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:36 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:36.621 2 INFO neutron.agent.securitygroups_rpc [None req-11576e49-abbc-421e-9ae1-ea6ee8281fd6 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']
Dec 02 10:09:36 np0005541914.localdomain dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 1 addresses
Dec 02 10:09:36 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 02 10:09:36 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 02 10:09:36 np0005541914.localdomain podman[317619]: 2025-12-02 10:09:36.832418468 +0000 UTC m=+0.061310454 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:09:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:09:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:09:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:09:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:09:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:09:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fd3633e1430>)]
Dec 02 10:09:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 02 10:09:37 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:37.115 2 INFO neutron.agent.securitygroups_rpc [None req-841b4da2-cab1-42f7-ac13-ca29294f546a 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:09:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v352: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 8.2 KiB/s wr, 2 op/s
Dec 02 10:09:37 np0005541914.localdomain systemd[1]: tmp-crun.5RmVI1.mount: Deactivated successfully.
Dec 02 10:09:37 np0005541914.localdomain dnsmasq[316876]: exiting on receipt of SIGTERM
Dec 02 10:09:37 np0005541914.localdomain podman[317656]: 2025-12-02 10:09:37.317786343 +0000 UTC m=+0.072088807 container kill 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Dec 02 10:09:37 np0005541914.localdomain systemd[1]: libpod-5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83.scope: Deactivated successfully.
Dec 02 10:09:37 np0005541914.localdomain podman[317672]: 2025-12-02 10:09:37.386931937 +0000 UTC m=+0.049540473 container died 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:09:37 np0005541914.localdomain podman[317672]: 2025-12-02 10:09:37.433944401 +0000 UTC m=+0.096553007 container remove 5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-1d564796-ac76-41ff-8a1e-6fbd19d356c5, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 02 10:09:37 np0005541914.localdomain systemd[1]: libpod-conmon-5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83.scope: Deactivated successfully.
Dec 02 10:09:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:09:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:09:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:09:37 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:09:37 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:09:37 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:37.477 2 INFO neutron.agent.securitygroups_rpc [None req-841b4da2-cab1-42f7-ac13-ca29294f546a 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:09:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:09:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:09:37 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:09:37 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:37.649 2 INFO neutron.agent.securitygroups_rpc [None req-fb787287-e6b7-452a-9552-33fb0c49fb57 f91ea2f3e6064338bfd751b12b56ae7b 873db74a4a7a4aad823d1b7e8b2d6c26 - - default default] Security group member updated ['faece1fb-3d42-4fda-a7a4-ce9b1aa942b6']
Dec 02 10:09:37 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:37.829 2 INFO neutron.agent.securitygroups_rpc [None req-f57a8374-1238-48d5-81d1-d11d5ba885ce 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-de1d49012998774e6961c351ee30dc6327212e570cd42de9146733d20d3604cc-merged.mount: Deactivated successfully.
Dec 02 10:09:37 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5be375d594c5c169c744895ef026a5e71ad3963427d5365c1c05a1ac77a60d83-userdata-shm.mount: Deactivated successfully.
Dec 02 10:09:37 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2d1d564796\x2dac76\x2d41ff\x2d8a1e\x2d6fbd19d356c5.mount: Deactivated successfully.
Dec 02 10:09:37 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:37.874 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:09:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:37.876 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:38 np0005541914.localdomain dnsmasq[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/addn_hosts - 0 addresses
Dec 02 10:09:38 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/host
Dec 02 10:09:38 np0005541914.localdomain podman[317713]: 2025-12-02 10:09:38.031901815 +0000 UTC m=+0.062031228 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:09:38 np0005541914.localdomain dnsmasq-dhcp[316793]: read /var/lib/neutron/dhcp/669f1b01-3857-4ea6-8083-25e0b2ce70bc/opts
Dec 02 10:09:38 np0005541914.localdomain ceph-mon[301710]: pgmap v352: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 8.2 KiB/s wr, 2 op/s
Dec 02 10:09:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:09:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:09:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:09:38 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:38.834 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:09:39 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:39.002 2 INFO neutron.agent.securitygroups_rpc [None req-1ad7ca5c-e344-40e1-8595-888c801ea96b 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:09:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v353: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 18 KiB/s wr, 13 op/s
Dec 02 10:09:39 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:39.263 2 INFO neutron.agent.securitygroups_rpc [None req-cc3286c0-8479-41a4-833f-f53341ebdf18 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:39 np0005541914.localdomain dnsmasq[316793]: exiting on receipt of SIGTERM
Dec 02 10:09:39 np0005541914.localdomain podman[317751]: 2025-12-02 10:09:39.268837181 +0000 UTC m=+0.060452988 container kill 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:09:39 np0005541914.localdomain systemd[1]: libpod-093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed.scope: Deactivated successfully.
Dec 02 10:09:39 np0005541914.localdomain podman[317766]: 2025-12-02 10:09:39.333339263 +0000 UTC m=+0.051537654 container died 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:09:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:39.381 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed-userdata-shm.mount: Deactivated successfully.
Dec 02 10:09:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-39a3d1465d9dc770bb3a503daf7256c2208118b497345b7f792f0cc2082142ec-merged.mount: Deactivated successfully.
Dec 02 10:09:39 np0005541914.localdomain podman[317766]: 2025-12-02 10:09:39.411369051 +0000 UTC m=+0.129567412 container cleanup 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:09:39 np0005541914.localdomain systemd[1]: libpod-conmon-093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed.scope: Deactivated successfully.
Dec 02 10:09:39 np0005541914.localdomain podman[317767]: 2025-12-02 10:09:39.430444757 +0000 UTC m=+0.139147117 container remove 093d2f14f3748acf22237bf7c55458d198a09e9d8aba543ab04eede9ccfb6bed (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-669f1b01-3857-4ea6-8083-25e0b2ce70bc, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:09:39 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:39Z|00152|binding|INFO|Releasing lport 8d7aba05-5eab-44a1-aacc-c2b62f525db1 from this chassis (sb_readonly=0)
Dec 02 10:09:39 np0005541914.localdomain kernel: device tap8d7aba05-5e left promiscuous mode
Dec 02 10:09:39 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:39Z|00153|binding|INFO|Setting lport 8d7aba05-5eab-44a1-aacc-c2b62f525db1 down in Southbound
Dec 02 10:09:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:39.441 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:39 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:39.451 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-669f1b01-3857-4ea6-8083-25e0b2ce70bc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-669f1b01-3857-4ea6-8083-25e0b2ce70bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '873db74a4a7a4aad823d1b7e8b2d6c26', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d91ccfa3-a134-4f2e-be7a-020d064cc147, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=8d7aba05-5eab-44a1-aacc-c2b62f525db1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:09:39 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:39.452 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 8d7aba05-5eab-44a1-aacc-c2b62f525db1 in datapath 669f1b01-3857-4ea6-8083-25e0b2ce70bc unbound from our chassis
Dec 02 10:09:39 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:39.454 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 669f1b01-3857-4ea6-8083-25e0b2ce70bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:09:39 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:39.454 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[109e6ea6-e4f9-4f06-ba54-64be7ec6eb14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:09:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:39.460 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:39 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:39.710 262347 INFO neutron.agent.dhcp.agent [None req-1900b73d-78ff-4d7b-9cc8-db694fa934ea - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:09:40 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:40.221 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:09:40 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:40.226 2 INFO neutron.agent.securitygroups_rpc [None req-d726f52f-c5d0-4b2e-935e-07d00a13737f 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:09:40 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2d669f1b01\x2d3857\x2d4ea6\x2d8083\x2d25e0b2ce70bc.mount: Deactivated successfully.
Dec 02 10:09:40 np0005541914.localdomain ceph-mon[301710]: pgmap v353: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 18 KiB/s wr, 13 op/s
Dec 02 10:09:40 np0005541914.localdomain ceph-mon[301710]: mgrmap e47: np0005541914.lljzmk(active, since 9m), standbys: np0005541912.qwddia, np0005541913.mfesdm
Dec 02 10:09:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:40.801 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:09:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:40 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:09:40 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:09:40 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 02 10:09:40 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:09:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:09:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:09:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:09:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:41 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:41.067 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:09:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v354: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 12 op/s
Dec 02 10:09:41 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:09:41 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:09:41 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:09:41 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 02 10:09:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:41.759 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:09:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:09:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:09:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:09:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:09:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:09:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:09:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:09:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:09:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:09:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:09:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:09:42 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:09:42 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:09:42 np0005541914.localdomain ceph-mon[301710]: pgmap v354: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 12 op/s
Dec 02 10:09:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:09:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:43.003 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v355: 177 pgs: 177 active+clean; 146 MiB data, 870 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 63 op/s
Dec 02 10:09:43 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2055797992' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:09:43 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2055797992' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:09:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:09:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:09:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:09:44 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:09:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:09:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:09:44 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:09:44 np0005541914.localdomain ceph-mon[301710]: pgmap v355: 177 pgs: 177 active+clean; 146 MiB data, 870 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 63 op/s
Dec 02 10:09:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:09:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:44 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:09:44 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:44.616 2 INFO neutron.agent.securitygroups_rpc [None req-fbbd4af2-250f-4ff2-b3ab-e75b109a47fa 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:44 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:44.682 2 INFO neutron.agent.securitygroups_rpc [None req-26916261-820c-405a-8570-4b6047e10a3c 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:09:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v356: 177 pgs: 177 active+clean; 146 MiB data, 870 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 02 10:09:45 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:09:45 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:45.593 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:45Z, description=, device_id=09bd6e36-fd7e-4a01-b1f1-fbe7d4a09c35, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ace3d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034d75310>], id=ea175376-684d-4ede-b30d-777d1d743d12, ip_allocation=immediate, mac_address=fa:16:3e:f4:c5:54, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2291, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:09:45Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:09:45 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:45.752 2 INFO neutron.agent.securitygroups_rpc [None req-5632dc43-e5b5-45de-a516-10b988e48fe8 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:09:45 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:45.798 2 INFO neutron.agent.securitygroups_rpc [None req-50568852-e227-40d9-a94b-d9d972f0134a 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:45 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:09:45 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:09:45 np0005541914.localdomain podman[317812]: 2025-12-02 10:09:45.807146902 +0000 UTC m=+0.062236743 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 02 10:09:45 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:09:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:45.803 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:09:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:09:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:09:45 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:09:45 np0005541914.localdomain podman[317829]: 2025-12-02 10:09:45.938355974 +0000 UTC m=+0.091283056 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 10:09:45 np0005541914.localdomain systemd[1]: tmp-crun.9suV9c.mount: Deactivated successfully.
Dec 02 10:09:45 np0005541914.localdomain podman[317827]: 2025-12-02 10:09:45.958858034 +0000 UTC m=+0.113933211 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 10:09:45 np0005541914.localdomain podman[317827]: 2025-12-02 10:09:45.962081633 +0000 UTC m=+0.117156780 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 02 10:09:45 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:09:46 np0005541914.localdomain podman[317831]: 2025-12-02 10:09:46.002201756 +0000 UTC m=+0.143842791 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 02 10:09:46 np0005541914.localdomain podman[317828]: 2025-12-02 10:09:46.043761423 +0000 UTC m=+0.197766818 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:09:46 np0005541914.localdomain podman[317828]: 2025-12-02 10:09:46.055853325 +0000 UTC m=+0.209858760 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:09:46 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:09:46 np0005541914.localdomain podman[317829]: 2025-12-02 10:09:46.072543827 +0000 UTC m=+0.225470899 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 10:09:46 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:09:46 np0005541914.localdomain podman[317831]: 2025-12-02 10:09:46.097594687 +0000 UTC m=+0.239235772 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:09:46 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:09:46 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:46.157 262347 INFO neutron.agent.dhcp.agent [None req-a7307f93-5a1a-419b-b17d-8ec10364d511 - - - - - -] DHCP configuration for ports {'ea175376-684d-4ede-b30d-777d1d743d12'} is completed
Dec 02 10:09:46 np0005541914.localdomain ceph-mon[301710]: pgmap v356: 177 pgs: 177 active+clean; 146 MiB data, 870 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 02 10:09:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v357: 177 pgs: 177 active+clean; 146 MiB data, 870 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 02 10:09:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:09:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:09:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:09:47 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:09:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 02 10:09:47 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:09:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:09:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:09:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:09:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:48.007 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:48 np0005541914.localdomain ceph-mon[301710]: pgmap v357: 177 pgs: 177 active+clean; 146 MiB data, 870 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 02 10:09:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:09:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:09:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:09:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 02 10:09:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v358: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 65 op/s
Dec 02 10:09:49 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:09:49 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:09:50 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:09:50 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:09:50 np0005541914.localdomain podman[317931]: 2025-12-02 10:09:50.378521686 +0000 UTC m=+0.059133518 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 02 10:09:50 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:09:50 np0005541914.localdomain systemd[1]: tmp-crun.ht4zPZ.mount: Deactivated successfully.
Dec 02 10:09:50 np0005541914.localdomain ceph-mon[301710]: pgmap v358: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 65 op/s
Dec 02 10:09:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:50.809 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:09:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:09:51 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:09:51 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:09:51 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:09:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v359: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Dec 02 10:09:51 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:09:51 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:09:51 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:51.476 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8:0:1:f816:3eff:fee6:1993'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=55679031-13ed-4a23-9c9d-18d3c58230be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a59d5a92-7a77-419d-a87f-fbb46ea78955) old=Port_Binding(mac=['fa:16:3e:e6:19:93 2001:db8::f816:3eff:fee6:1993'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fee6:1993/64', 'neutron:device_id': 'ovnmeta-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d517d9d-ba68-4c0f-b344-6c3be9d614a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '39113116e26e4da3a6194d2f44d952a8', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:09:51 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:51.478 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a59d5a92-7a77-419d-a87f-fbb46ea78955 in datapath 7d517d9d-ba68-4c0f-b344-6c3be9d614a4 updated
Dec 02 10:09:51 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:51.480 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d517d9d-ba68-4c0f-b344-6c3be9d614a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:09:51 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:51.480 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b899ee-0c72-4cff-a99b-2f7f8679961b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:09:51 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:09:51 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:51 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:51 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:09:52 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:52.051 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:51Z, description=, device_id=7059c3fd-a028-4cdb-9894-b6db3dc33369, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c3a280>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c3a250>], id=e0985da3-b049-4018-a9b6-76ca5bf11bab, ip_allocation=immediate, mac_address=fa:16:3e:14:f8:01, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2338, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:09:51Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:09:52 np0005541914.localdomain systemd[1]: tmp-crun.ESNPIX.mount: Deactivated successfully.
Dec 02 10:09:52 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:09:52 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:09:52 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:09:52 np0005541914.localdomain podman[317970]: 2025-12-02 10:09:52.263465844 +0000 UTC m=+0.067128934 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:09:52 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:52.475 262347 INFO neutron.agent.dhcp.agent [None req-c5ced586-d563-45f7-9e01-5939ec796d3f - - - - - -] DHCP configuration for ports {'e0985da3-b049-4018-a9b6-76ca5bf11bab'} is completed
Dec 02 10:09:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:09:52 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:09:52 np0005541914.localdomain ceph-mon[301710]: pgmap v359: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Dec 02 10:09:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cc6380cd-d1fe-41c0-9f77-54a6bc7687ef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:09:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, vol_name:cephfs) < ""
Dec 02 10:09:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/cc6380cd-d1fe-41c0-9f77-54a6bc7687ef/.meta.tmp'
Dec 02 10:09:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/cc6380cd-d1fe-41c0-9f77-54a6bc7687ef/.meta.tmp' to config b'/volumes/_nogroup/cc6380cd-d1fe-41c0-9f77-54a6bc7687ef/.meta'
Dec 02 10:09:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, vol_name:cephfs) < ""
Dec 02 10:09:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cc6380cd-d1fe-41c0-9f77-54a6bc7687ef", "format": "json"}]: dispatch
Dec 02 10:09:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, vol_name:cephfs) < ""
Dec 02 10:09:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, vol_name:cephfs) < ""
Dec 02 10:09:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:09:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:09:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:53.059 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:53 np0005541914.localdomain podman[317992]: 2025-12-02 10:09:53.108132986 +0000 UTC m=+0.114344593 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:09:53 np0005541914.localdomain podman[317992]: 2025-12-02 10:09:53.120973061 +0000 UTC m=+0.127184628 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:09:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v360: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Dec 02 10:09:53 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:09:53 np0005541914.localdomain podman[317993]: 2025-12-02 10:09:53.169876303 +0000 UTC m=+0.175087910 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 02 10:09:53 np0005541914.localdomain podman[317993]: 2025-12-02 10:09:53.207041266 +0000 UTC m=+0.212252863 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, version=9.6)
Dec 02 10:09:53 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:09:53 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:53.244 2 INFO neutron.agent.securitygroups_rpc [None req-2a02d4a7-eedb-47f7-975e-8a697d665d71 6a4701e292e04a82a827d127f0ef5b65 0b7e671d1f944c979f6feba0246d3141 - - default default] Security group member updated ['274309be-bd70-4043-9459-2a1d0784f871']
Dec 02 10:09:53 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:53.303 2 INFO neutron.agent.securitygroups_rpc [None req-384a8cd4-c502-4296-9a0a-cda4da9440fe 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:53 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:09:53 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:53.866 262347 INFO neutron.agent.linux.ip_lib [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Device tapd313ec5a-74 cannot be used as it has no MAC address
Dec 02 10:09:53 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:53.868 2 INFO neutron.agent.securitygroups_rpc [None req-552eb951-c19a-4f29-a133-451809159dee 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:09:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:53.889 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:53 np0005541914.localdomain kernel: device tapd313ec5a-74 entered promiscuous mode
Dec 02 10:09:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:53.897 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:53Z|00154|binding|INFO|Claiming lport d313ec5a-74ee-4c97-a266-afe79cf4d76a for this chassis.
Dec 02 10:09:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:53Z|00155|binding|INFO|d313ec5a-74ee-4c97-a266-afe79cf4d76a: Claiming unknown
Dec 02 10:09:53 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670193.9030] manager: (tapd313ec5a-74): new Generic device (/org/freedesktop/NetworkManager/Devices/33)
Dec 02 10:09:53 np0005541914.localdomain systemd-udevd[318043]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:09:53 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:53.912 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-8de9fa50-7037-4f69-a2b1-5be6f609300b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8de9fa50-7037-4f69-a2b1-5be6f609300b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b7e671d1f944c979f6feba0246d3141', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e595e1b-0a18-488d-bb72-ec4f6317b810, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=d313ec5a-74ee-4c97-a266-afe79cf4d76a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:09:53 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:53.915 159483 INFO neutron.agent.ovn.metadata.agent [-] Port d313ec5a-74ee-4c97-a266-afe79cf4d76a in datapath 8de9fa50-7037-4f69-a2b1-5be6f609300b bound to our chassis
Dec 02 10:09:53 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:53.917 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8de9fa50-7037-4f69-a2b1-5be6f609300b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:09:53 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:53.918 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[044d274a-a839-428e-9f7a-521609f9d94d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:09:53 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapd313ec5a-74: No such device
Dec 02 10:09:53 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapd313ec5a-74: No such device
Dec 02 10:09:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:53.935 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:53.940 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:53 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapd313ec5a-74: No such device
Dec 02 10:09:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:53Z|00156|binding|INFO|Setting lport d313ec5a-74ee-4c97-a266-afe79cf4d76a ovn-installed in OVS
Dec 02 10:09:53 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:53Z|00157|binding|INFO|Setting lport d313ec5a-74ee-4c97-a266-afe79cf4d76a up in Southbound
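The ovn_controller messages 00154-00157 above claim lport d313ec5a-74ee-4c97-a266-afe79cf4d76a, mark it ovn-installed in OVS and set it up in the Southbound DB. A minimal sketch of cross-checking that state from the same host, assuming ovn-sbctl is installed and can reach the Southbound DB (neither is shown in this log):

    import subprocess

    LPORT = "d313ec5a-74ee-4c97-a266-afe79cf4d76a"  # logical port named in the log

    # "find" prints the matching Port_Binding row; the 'up' and 'chassis' columns
    # should reflect the claim logged by ovn_controller at 10:09:53.
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)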
Dec 02 10:09:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:53.942 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:53 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapd313ec5a-74: No such device
Dec 02 10:09:53 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapd313ec5a-74: No such device
Dec 02 10:09:53 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:53.953 2 INFO neutron.agent.securitygroups_rpc [None req-793d7f6f-bcaa-4aba-a1ec-f239eb834fe6 6a4701e292e04a82a827d127f0ef5b65 0b7e671d1f944c979f6feba0246d3141 - - default default] Security group member updated ['274309be-bd70-4043-9459-2a1d0784f871']
Dec 02 10:09:53 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapd313ec5a-74: No such device
Dec 02 10:09:53 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapd313ec5a-74: No such device
Dec 02 10:09:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:53.966 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:53 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapd313ec5a-74: No such device
Dec 02 10:09:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:53.998 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:09:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:54 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:09:54 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:09:54 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 02 10:09:54 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:09:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:09:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:09:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:09:54 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:54.534 2 INFO neutron.agent.securitygroups_rpc [None req-ecc73d10-d9a3-477f-859a-88e3d0a4a336 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:09:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:09:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
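The ceph-mgr audit entries above show the volumes module handling an "fs subvolume deauthorize" followed by an "fs subvolume evict" for auth_id alice_bob on subvolume 4951f94c-f3a4-4170-9869-8238a9dc7b72. A minimal sketch of the equivalent CLI calls, assuming a ceph client with sufficient caps is available on the node (not something this log demonstrates):

    import subprocess

    VOL = "cephfs"
    SUB = "4951f94c-f3a4-4170-9869-8238a9dc7b72"
    AUTH = "alice_bob"

    # Mirrors the two mgr commands dispatched at 10:09:54.
    for verb in ("deauthorize", "evict"):
        subprocess.run(["ceph", "fs", "subvolume", verb, VOL, SUB, AUTH], check=True)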
Dec 02 10:09:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:54.601 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:54 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cc6380cd-d1fe-41c0-9f77-54a6bc7687ef", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:09:54 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cc6380cd-d1fe-41c0-9f77-54a6bc7687ef", "format": "json"}]: dispatch
Dec 02 10:09:54 np0005541914.localdomain ceph-mon[301710]: pgmap v360: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Dec 02 10:09:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:09:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:09:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:09:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 02 10:09:54 np0005541914.localdomain podman[318115]: 2025-12-02 10:09:54.792585994 +0000 UTC m=+0.101802840 container create 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:09:54 np0005541914.localdomain podman[318115]: 2025-12-02 10:09:54.737322825 +0000 UTC m=+0.046539731 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:09:54 np0005541914.localdomain systemd[1]: Started libpod-conmon-860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f.scope.
Dec 02 10:09:54 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:09:54 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a9135aa181504ec852eaaa2d9517fcf20a06ac4cf2b278c7d3159af034da279/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:09:54 np0005541914.localdomain podman[318115]: 2025-12-02 10:09:54.86993741 +0000 UTC m=+0.179154236 container init 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:09:54 np0005541914.localdomain podman[318115]: 2025-12-02 10:09:54.878163213 +0000 UTC m=+0.187380069 container start 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:09:54 np0005541914.localdomain dnsmasq[318133]: started, version 2.85 cachesize 150
Dec 02 10:09:54 np0005541914.localdomain dnsmasq[318133]: DNS service limited to local subnets
Dec 02 10:09:54 np0005541914.localdomain dnsmasq[318133]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:09:54 np0005541914.localdomain dnsmasq[318133]: warning: no upstream servers configured
Dec 02 10:09:54 np0005541914.localdomain dnsmasq-dhcp[318133]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 02 10:09:54 np0005541914.localdomain dnsmasq[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/addn_hosts - 0 addresses
Dec 02 10:09:54 np0005541914.localdomain dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/host
Dec 02 10:09:54 np0005541914.localdomain dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/opts
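The dnsmasq instance started for network 8de9fa50-7037-4f69-a2b1-5be6f609300b keeps re-reading its addn_hosts, host and opts files as the DHCP agent rewrites them. A minimal sketch for inspecting what the agent wrote, using the paths from the log (reading them requires access to the filesystem the agent writes to, which is assumed here):

    from pathlib import Path

    NET = "8de9fa50-7037-4f69-a2b1-5be6f609300b"
    base = Path("/var/lib/neutron/dhcp") / NET

    # Dump the three files dnsmasq reports reading in the entries above.
    for name in ("addn_hosts", "host", "opts"):
        path = base / name
        print(f"--- {path} ---")
        if path.exists():
            print(path.read_text(), end="")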
Dec 02 10:09:54 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:54.944 262347 INFO neutron.agent.dhcp.agent [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:53Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c17fd0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c17fa0>], id=35a195b4-4893-468b-bebd-aa243804f4fb, ip_allocation=immediate, mac_address=fa:16:3e:f0:2a:16, name=tempest-ExtraDHCPOptionsIpV6TestJSON-1912580586, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:51Z, description=, dns_domain=, id=8de9fa50-7037-4f69-a2b1-5be6f609300b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-1209960196, port_security_enabled=True, project_id=0b7e671d1f944c979f6feba0246d3141, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7693, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2340, status=ACTIVE, subnets=['9fa31bd1-0c42-435a-80e7-1590e36d6d8c'], tags=[], tenant_id=0b7e671d1f944c979f6feba0246d3141, updated_at=2025-12-02T10:09:52Z, vlan_transparent=None, network_id=8de9fa50-7037-4f69-a2b1-5be6f609300b, port_security_enabled=True, project_id=0b7e671d1f944c979f6feba0246d3141, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['274309be-bd70-4043-9459-2a1d0784f871'], standard_attr_id=2349, status=DOWN, tags=[], tenant_id=0b7e671d1f944c979f6feba0246d3141, updated_at=2025-12-02T10:09:53Z on network 8de9fa50-7037-4f69-a2b1-5be6f609300b
Dec 02 10:09:55 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:55.008 2 INFO neutron.agent.securitygroups_rpc [None req-c4292fab-d4f2-45ec-8373-3372677610e3 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:09:55 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.041 262347 INFO neutron.agent.dhcp.agent [None req-4b6596ae-0cd4-4590-b49d-985494e1b502 - - - - - -] DHCP configuration for ports {'c0ec0206-98a8-40dc-a55b-75dc959f678e'} is completed
Dec 02 10:09:55 np0005541914.localdomain dnsmasq[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/addn_hosts - 1 addresses
Dec 02 10:09:55 np0005541914.localdomain podman[318151]: 2025-12-02 10:09:55.115871037 +0000 UTC m=+0.043882580 container kill 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:09:55 np0005541914.localdomain dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/host
Dec 02 10:09:55 np0005541914.localdomain dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/opts
Dec 02 10:09:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v361: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 16 KiB/s wr, 5 op/s
Dec 02 10:09:55 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:55.252 2 INFO neutron.agent.securitygroups_rpc [None req-4940f51c-3349-4656-978b-9a0b4cd29cb9 2903ef7b8c704dc09be34f96aeda2cff 6d11f96a2f644a22a82a6af9a2a1e5d2 - - default default] Security group member updated ['2e0224f5-51f6-419e-8240-7e06ddf53ec7']
Dec 02 10:09:55 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.263 262347 INFO neutron.agent.dhcp.agent [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:53Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034b27df0>], dns_domain=, dns_name=, extra_dhcp_opts=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c43d90>, <neutron.agent.linux.dhcp.DictModel object at 0x7f4034b27d30>, <neutron.agent.linux.dhcp.DictModel object at 0x7f4034d700d0>], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c43490>], id=e3969589-82f8-477d-9278-7806f45e965e, ip_allocation=immediate, mac_address=fa:16:3e:73:da:f5, name=tempest-ExtraDHCPOptionsIpV6TestJSON-1381010091, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:09:51Z, description=, dns_domain=, id=8de9fa50-7037-4f69-a2b1-5be6f609300b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsIpV6TestJSON-test-network-1209960196, port_security_enabled=True, project_id=0b7e671d1f944c979f6feba0246d3141, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=7693, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2340, status=ACTIVE, subnets=['9fa31bd1-0c42-435a-80e7-1590e36d6d8c'], tags=[], tenant_id=0b7e671d1f944c979f6feba0246d3141, updated_at=2025-12-02T10:09:52Z, vlan_transparent=None, network_id=8de9fa50-7037-4f69-a2b1-5be6f609300b, port_security_enabled=True, project_id=0b7e671d1f944c979f6feba0246d3141, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['274309be-bd70-4043-9459-2a1d0784f871'], standard_attr_id=2351, status=DOWN, tags=[], tenant_id=0b7e671d1f944c979f6feba0246d3141, updated_at=2025-12-02T10:09:53Z on network 8de9fa50-7037-4f69-a2b1-5be6f609300b
Dec 02 10:09:55 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.280 262347 INFO neutron.agent.linux.dhcp [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Cannot apply dhcp option tftp-server because its ip_version 4 is not in the port's address IP versions
Dec 02 10:09:55 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.281 262347 INFO neutron.agent.linux.dhcp [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Cannot apply dhcp option server-ip-address because its ip_version 4 is not in the port's address IP versions
Dec 02 10:09:55 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.282 262347 INFO neutron.agent.linux.dhcp [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Cannot apply dhcp option bootfile-name because its ip_version 4 is not in the port's address IP versions
Dec 02 10:09:55 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:55.290 2 INFO neutron.agent.securitygroups_rpc [None req-d1fab671-1814-41db-9614-65c239fa9e70 6a4701e292e04a82a827d127f0ef5b65 0b7e671d1f944c979f6feba0246d3141 - - default default] Security group member updated ['274309be-bd70-4043-9459-2a1d0784f871']
Dec 02 10:09:55 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.383 262347 INFO neutron.agent.dhcp.agent [None req-4856648b-e92e-4470-b926-c5749cdbbc49 - - - - - -] DHCP configuration for ports {'35a195b4-4893-468b-bebd-aa243804f4fb'} is completed
Dec 02 10:09:55 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:55.431 2 INFO neutron.agent.securitygroups_rpc [None req-4940f51c-3349-4656-978b-9a0b4cd29cb9 2903ef7b8c704dc09be34f96aeda2cff 6d11f96a2f644a22a82a6af9a2a1e5d2 - - default default] Security group member updated ['2e0224f5-51f6-419e-8240-7e06ddf53ec7']
Dec 02 10:09:55 np0005541914.localdomain dnsmasq[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/addn_hosts - 2 addresses
Dec 02 10:09:55 np0005541914.localdomain dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/host
Dec 02 10:09:55 np0005541914.localdomain dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/opts
Dec 02 10:09:55 np0005541914.localdomain podman[318188]: 2025-12-02 10:09:55.45608618 +0000 UTC m=+0.054286428 container kill 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 10:09:55 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:09:55 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:09:55 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.705 262347 INFO neutron.agent.dhcp.agent [None req-2d882893-811c-4f4d-8273-8c23f848d87b - - - - - -] DHCP configuration for ports {'e3969589-82f8-477d-9278-7806f45e965e'} is completed
Dec 02 10:09:55 np0005541914.localdomain dnsmasq[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/addn_hosts - 1 addresses
Dec 02 10:09:55 np0005541914.localdomain dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/host
Dec 02 10:09:55 np0005541914.localdomain podman[318225]: 2025-12-02 10:09:55.791077844 +0000 UTC m=+0.061169421 container kill 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:09:55 np0005541914.localdomain dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/opts
Dec 02 10:09:55 np0005541914.localdomain systemd[1]: tmp-crun.QQ7El9.mount: Deactivated successfully.
Dec 02 10:09:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:55.811 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:55 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.940 262347 INFO neutron.agent.dhcp.agent [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:09:53Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034aec6a0>], dns_domain=, dns_name=, extra_dhcp_opts=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034aecee0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f4034aec370>, <neutron.agent.linux.dhcp.DictModel object at 0x7f4034aec130>], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034aec310>], id=35a195b4-4893-468b-bebd-aa243804f4fb, ip_allocation=immediate, mac_address=fa:16:3e:f0:2a:16, name=tempest-new-port-name-454689627, network_id=8de9fa50-7037-4f69-a2b1-5be6f609300b, port_security_enabled=True, project_id=0b7e671d1f944c979f6feba0246d3141, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['274309be-bd70-4043-9459-2a1d0784f871'], standard_attr_id=2349, status=DOWN, tags=[], tenant_id=0b7e671d1f944c979f6feba0246d3141, updated_at=2025-12-02T10:09:55Z on network 8de9fa50-7037-4f69-a2b1-5be6f609300b
Dec 02 10:09:55 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.957 262347 INFO neutron.agent.linux.dhcp [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Cannot apply dhcp option server-ip-address because its ip_version 4 is not in the port's address IP versions
Dec 02 10:09:55 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.958 262347 INFO neutron.agent.linux.dhcp [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Cannot apply dhcp option bootfile-name because its ip_version 4 is not in the port's address IP versions
Dec 02 10:09:55 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:55.959 262347 INFO neutron.agent.linux.dhcp [None req-85f18bf6-a1dd-4d9f-87b6-c2b326c4486a - - - - - -] Cannot apply dhcp option tftp-server because its ip_version 4 is not in the port's address IP versions
Dec 02 10:09:56 np0005541914.localdomain dnsmasq[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/addn_hosts - 1 addresses
Dec 02 10:09:56 np0005541914.localdomain podman[318261]: 2025-12-02 10:09:56.132583057 +0000 UTC m=+0.059362205 container kill 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 10:09:56 np0005541914.localdomain dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/host
Dec 02 10:09:56 np0005541914.localdomain dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/opts
Dec 02 10:09:56 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:56.308 2 INFO neutron.agent.securitygroups_rpc [None req-2709b5dd-db11-4508-a989-29103dd3702e 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:09:56 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:56.340 262347 INFO neutron.agent.dhcp.agent [None req-d45bc57f-25ab-431f-8684-fcced6ca090d - - - - - -] DHCP configuration for ports {'35a195b4-4893-468b-bebd-aa243804f4fb'} is completed
Dec 02 10:09:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cc6380cd-d1fe-41c0-9f77-54a6bc7687ef", "format": "json"}]: dispatch
Dec 02 10:09:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:09:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:09:56 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cc6380cd-d1fe-41c0-9f77-54a6bc7687ef' of type subvolume
Dec 02 10:09:56 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:09:56.513+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'cc6380cd-d1fe-41c0-9f77-54a6bc7687ef' of type subvolume
Dec 02 10:09:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cc6380cd-d1fe-41c0-9f77-54a6bc7687ef", "force": true, "format": "json"}]: dispatch
Dec 02 10:09:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, vol_name:cephfs) < ""
Dec 02 10:09:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/cc6380cd-d1fe-41c0-9f77-54a6bc7687ef'' moved to trashcan
Dec 02 10:09:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:09:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:cc6380cd-d1fe-41c0-9f77-54a6bc7687ef, vol_name:cephfs) < ""
Dec 02 10:09:56 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:56.648 2 INFO neutron.agent.securitygroups_rpc [None req-4752b673-ecc5-46d9-8169-f464ead4adc9 6a4701e292e04a82a827d127f0ef5b65 0b7e671d1f944c979f6feba0246d3141 - - default default] Security group member updated ['274309be-bd70-4043-9459-2a1d0784f871']
Dec 02 10:09:56 np0005541914.localdomain ceph-mon[301710]: pgmap v361: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 16 KiB/s wr, 5 op/s
Dec 02 10:09:56 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cc6380cd-d1fe-41c0-9f77-54a6bc7687ef", "format": "json"}]: dispatch
Dec 02 10:09:56 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cc6380cd-d1fe-41c0-9f77-54a6bc7687ef", "force": true, "format": "json"}]: dispatch
Dec 02 10:09:56 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:56.667 2 INFO neutron.agent.securitygroups_rpc [None req-237e1b13-6077-411f-87b4-3c14ff8061ce 2903ef7b8c704dc09be34f96aeda2cff 6d11f96a2f644a22a82a6af9a2a1e5d2 - - default default] Security group member updated ['2e0224f5-51f6-419e-8240-7e06ddf53ec7']
Dec 02 10:09:56 np0005541914.localdomain podman[318298]: 2025-12-02 10:09:56.848459233 +0000 UTC m=+0.060878491 container kill 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 02 10:09:56 np0005541914.localdomain dnsmasq[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/addn_hosts - 0 addresses
Dec 02 10:09:56 np0005541914.localdomain dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/host
Dec 02 10:09:56 np0005541914.localdomain dnsmasq-dhcp[318133]: read /var/lib/neutron/dhcp/8de9fa50-7037-4f69-a2b1-5be6f609300b/opts
Dec 02 10:09:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v362: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 16 KiB/s wr, 5 op/s
Dec 02 10:09:57 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:57.442 2 INFO neutron.agent.securitygroups_rpc [None req-84ee1250-9ce8-4943-9e17-d2eb70522c28 2903ef7b8c704dc09be34f96aeda2cff 6d11f96a2f644a22a82a6af9a2a1e5d2 - - default default] Security group member updated ['2e0224f5-51f6-419e-8240-7e06ddf53ec7']
Dec 02 10:09:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:09:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:09:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:09:57 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:57.651 2 INFO neutron.agent.securitygroups_rpc [None req-f449fe39-4274-4abb-aff3-e3ba219c9fe2 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:09:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:09:57 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:09:57 np0005541914.localdomain ceph-mon[301710]: pgmap v362: 177 pgs: 177 active+clean; 146 MiB data, 815 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 16 KiB/s wr, 5 op/s
Dec 02 10:09:57 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:09:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:09:57 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:57 np0005541914.localdomain systemd[1]: tmp-crun.QqoLvK.mount: Deactivated successfully.
Dec 02 10:09:57 np0005541914.localdomain dnsmasq[318133]: exiting on receipt of SIGTERM
Dec 02 10:09:57 np0005541914.localdomain podman[318335]: 2025-12-02 10:09:57.759747244 +0000 UTC m=+0.059467628 container kill 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 02 10:09:57 np0005541914.localdomain systemd[1]: libpod-860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f.scope: Deactivated successfully.
Dec 02 10:09:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:09:57 np0005541914.localdomain podman[318357]: 2025-12-02 10:09:57.833575172 +0000 UTC m=+0.051143762 container died 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 02 10:09:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f-userdata-shm.mount: Deactivated successfully.
Dec 02 10:09:57 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-2a9135aa181504ec852eaaa2d9517fcf20a06ac4cf2b278c7d3159af034da279-merged.mount: Deactivated successfully.
Dec 02 10:09:57 np0005541914.localdomain podman[318357]: 2025-12-02 10:09:57.934876645 +0000 UTC m=+0.152445245 container remove 860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-8de9fa50-7037-4f69-a2b1-5be6f609300b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 10:09:57 np0005541914.localdomain systemd[1]: libpod-conmon-860a06a7207b7e4dfaaccb8cc4c8ceb53cf9ef437dde0478cc74fa543edf322f.scope: Deactivated successfully.
Dec 02 10:09:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:57.948 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:57 np0005541914.localdomain kernel: device tapd313ec5a-74 left promiscuous mode
Dec 02 10:09:57 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:57Z|00158|binding|INFO|Releasing lport d313ec5a-74ee-4c97-a266-afe79cf4d76a from this chassis (sb_readonly=0)
Dec 02 10:09:57 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:09:57Z|00159|binding|INFO|Setting lport d313ec5a-74ee-4c97-a266-afe79cf4d76a down in Southbound
Dec 02 10:09:57 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:57.964 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-8de9fa50-7037-4f69-a2b1-5be6f609300b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8de9fa50-7037-4f69-a2b1-5be6f609300b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0b7e671d1f944c979f6feba0246d3141', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5e595e1b-0a18-488d-bb72-ec4f6317b810, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=d313ec5a-74ee-4c97-a266-afe79cf4d76a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:09:57 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:57.966 159483 INFO neutron.agent.ovn.metadata.agent [-] Port d313ec5a-74ee-4c97-a266-afe79cf4d76a in datapath 8de9fa50-7037-4f69-a2b1-5be6f609300b unbound from our chassis
Dec 02 10:09:57 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:57.967 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8de9fa50-7037-4f69-a2b1-5be6f609300b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:09:57 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:09:57.968 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[bc27e5ce-4139-4c6c-950e-33e0fbf87067]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:09:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:57.970 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:57 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2d8de9fa50\x2d7037\x2d4f69\x2da2b1\x2d5be6f609300b.mount: Deactivated successfully.
Dec 02 10:09:57 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:57.994 262347 INFO neutron.agent.dhcp.agent [None req-7d4c5c76-9bf3-46e0-8245-b5d6024546a9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:09:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:58.063 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:58 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:09:58.480 2 INFO neutron.agent.securitygroups_rpc [None req-c9840334-22ed-4fcf-9fb8-d440584d45ac 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:09:58 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:09:58.573 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:09:58 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:09:58 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:09:58 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:58 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:09:58 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
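The get-or-create above installs client."alice bob" with the mds/osd/mon caps generated by "fs subvolume authorize". A minimal sketch for confirming what was stored, assuming a working ceph CLI and an admin keyring (both assumptions, not shown in the log):

    import subprocess

    # The entity name contains a space, exactly as dispatched in the log.
    out = subprocess.run(
        ["ceph", "auth", "get", "client.alice bob", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    # Expected caps, per the get-or-create command above:
    #   mds: allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
    #   osd: allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72
    #   mon: allow r
    print(out.stdout)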
Dec 02 10:09:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:09:59.039 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:09:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v363: 177 pgs: 177 active+clean; 147 MiB data, 816 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 30 KiB/s wr, 9 op/s
Dec 02 10:09:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "05ca7661-f391-4234-9c50-a2000ddc14bd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:09:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:05ca7661-f391-4234-9c50-a2000ddc14bd, vol_name:cephfs) < ""
Dec 02 10:09:59 np0005541914.localdomain ceph-mon[301710]: pgmap v363: 177 pgs: 177 active+clean; 147 MiB data, 816 MiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 30 KiB/s wr, 9 op/s
Dec 02 10:09:59 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2398904436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:09:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/05ca7661-f391-4234-9c50-a2000ddc14bd/.meta.tmp'
Dec 02 10:09:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/05ca7661-f391-4234-9c50-a2000ddc14bd/.meta.tmp' to config b'/volumes/_nogroup/05ca7661-f391-4234-9c50-a2000ddc14bd/.meta'
Dec 02 10:09:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:05ca7661-f391-4234-9c50-a2000ddc14bd, vol_name:cephfs) < ""
Dec 02 10:09:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "05ca7661-f391-4234-9c50-a2000ddc14bd", "format": "json"}]: dispatch
Dec 02 10:09:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05ca7661-f391-4234-9c50-a2000ddc14bd, vol_name:cephfs) < ""
Dec 02 10:09:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:05ca7661-f391-4234-9c50-a2000ddc14bd, vol_name:cephfs) < ""
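The create/getpath pair above is the share-creation path driven by client.openstack: a 1 GiB, namespace-isolated subvolume is created and its mount path is looked up. A minimal sketch of the same two operations as CLI calls, with size, mode and isolation taken from the dispatched command (the ceph CLI and credentials are assumed):

    import subprocess

    VOL = "cephfs"
    SUB = "05ca7661-f391-4234-9c50-a2000ddc14bd"

    # Create the subvolume with the parameters shown in the dispatch line above.
    subprocess.run(
        ["ceph", "fs", "subvolume", "create", VOL, SUB,
         "--size", "1073741824", "--namespace-isolated", "--mode", "0755"],
        check=True,
    )
    # Then resolve its path, as the "fs subvolume getpath" command does.
    path = subprocess.run(
        ["ceph", "fs", "subvolume", "getpath", VOL, SUB],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(path)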
Dec 02 10:09:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:10:00 np0005541914.localdomain podman[318379]: 2025-12-02 10:10:00.074704704 +0000 UTC m=+0.081316879 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:10:00 np0005541914.localdomain podman[318379]: 2025-12-02 10:10:00.086970042 +0000 UTC m=+0.093582217 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd)
Dec 02 10:10:00 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
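The transient unit above runs "podman healthcheck run" against the multipathd container and records health_status=healthy. A minimal sketch of running the same check by hand, using the container name from the log (podman on PATH and sufficient privileges are assumed):

    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "multipathd"],
        capture_output=True, text=True,
    )
    # Exit status 0 means the healthcheck command inside the container succeeded,
    # matching the health_status=healthy event logged at 10:10:00.
    print(result.returncode, result.stdout.strip(), result.stderr.strip())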
Dec 02 10:10:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:00.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:10:00 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:00.707 2 INFO neutron.agent.securitygroups_rpc [None req-3e03e95a-f561-4345-b792-65b4ec75916c 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:10:00 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "05ca7661-f391-4234-9c50-a2000ddc14bd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:10:00 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "05ca7661-f391-4234-9c50-a2000ddc14bd", "format": "json"}]: dispatch
Dec 02 10:10:00 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:10:00 np0005541914.localdomain ceph-mon[301710]: overall HEALTH_OK
Dec 02 10:10:00 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/740472713' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:10:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:00.813 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:10:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 02 10:10:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:10:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:10:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v364: 177 pgs: 177 active+clean; 147 MiB data, 816 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 22 KiB/s wr, 6 op/s
Dec 02 10:10:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:10:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:01 np0005541914.localdomain sshd[318399]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:10:01 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:10:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:10:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 02 10:10:01 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:01 np0005541914.localdomain ceph-mon[301710]: pgmap v364: 177 pgs: 177 active+clean; 147 MiB data, 816 MiB used, 41 GiB / 42 GiB avail; 85 B/s rd, 22 KiB/s wr, 6 op/s
Dec 02 10:10:01 np0005541914.localdomain sshd[318399]: Invalid user validator from 193.32.162.146 port 37656
Dec 02 10:10:02 np0005541914.localdomain sshd[318399]: Connection closed by invalid user validator 193.32.162.146 port 37656 [preauth]
Dec 02 10:10:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:10:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:02.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:10:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:03.017 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:03.065 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v365: 177 pgs: 177 active+clean; 147 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 33 KiB/s wr, 9 op/s
Dec 02 10:10:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:03.180 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:10:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:03.180 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:10:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:03.180 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:10:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "05ca7661-f391-4234-9c50-a2000ddc14bd", "format": "json"}]: dispatch
Dec 02 10:10:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:05ca7661-f391-4234-9c50-a2000ddc14bd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:10:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:05ca7661-f391-4234-9c50-a2000ddc14bd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:10:03 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:10:03.213+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05ca7661-f391-4234-9c50-a2000ddc14bd' of type subvolume
Dec 02 10:10:03 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '05ca7661-f391-4234-9c50-a2000ddc14bd' of type subvolume
Dec 02 10:10:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "05ca7661-f391-4234-9c50-a2000ddc14bd", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:05ca7661-f391-4234-9c50-a2000ddc14bd, vol_name:cephfs) < ""
Dec 02 10:10:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/05ca7661-f391-4234-9c50-a2000ddc14bd'' moved to trashcan
Dec 02 10:10:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:10:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:05ca7661-f391-4234-9c50-a2000ddc14bd, vol_name:cephfs) < ""
Dec 02 10:10:03 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:03.549 2 INFO neutron.agent.securitygroups_rpc [None req-7ff6a690-1608-4241-962d-cf0eb5f2eb30 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:10:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:10:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:10:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:10:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:10:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:10:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19209 "" "Go-http-client/1.1"
Dec 02 10:10:04 np0005541914.localdomain ceph-mon[301710]: pgmap v365: 177 pgs: 177 active+clean; 147 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 33 KiB/s wr, 9 op/s
Dec 02 10:10:04 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "05ca7661-f391-4234-9c50-a2000ddc14bd", "format": "json"}]: dispatch
Dec 02 10:10:04 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "05ca7661-f391-4234-9c50-a2000ddc14bd", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:10:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:10:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:04 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:10:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:10:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:04 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:04.408 2 INFO neutron.agent.securitygroups_rpc [None req-90a76f3a-a979-402e-97fd-700e856a8199 7602b6bff04a41118e902187d8f95daa 39113116e26e4da3a6194d2f44d952a8 - - default default] Security group member updated ['062c5d07-6a15-41a5-85bf-27aede3f5276']
Dec 02 10:10:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:04.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:10:04 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:04.927 2 INFO neutron.agent.securitygroups_rpc [None req-c77326cb-1074-4872-8ee0-28f281df7dfe 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:10:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v366: 177 pgs: 177 active+clean; 147 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 24 KiB/s wr, 7 op/s
Dec 02 10:10:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:10:05 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:05 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:05 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:05 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:10:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1389731968' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:10:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1389731968' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:10:05 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:05.447 2 INFO neutron.agent.securitygroups_rpc [None req-9cd60b66-d893-4669-ab17-1eefaaf90d0c 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:10:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:05.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:10:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:05.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:10:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:05.545 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:10:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:05.564 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:10:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:05.564 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:10:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:05.565 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:10:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:05.565 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:10:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:05.566 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:10:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:05.816 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:10:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3363842838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.081 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:10:06 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:06.091 2 INFO neutron.agent.securitygroups_rpc [None req-9685cf0d-187e-491c-a3b3-f6b6113116e3 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.257 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.258 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11513MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.259 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.259 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.320 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.321 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.342 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:10:06 np0005541914.localdomain ceph-mon[301710]: pgmap v366: 177 pgs: 177 active+clean; 147 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 24 KiB/s wr, 7 op/s
Dec 02 10:10:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3363842838' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:10:06 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:06.721 2 INFO neutron.agent.securitygroups_rpc [None req-3503b005-9d9f-48d0-a8b4-a51ac4fa455f 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:10:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:10:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4072896173' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.856 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:10:06 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:06.864 262347 INFO neutron.agent.linux.ip_lib [None req-db5016fe-fe30-43be-9878-19fb858defeb - - - - - -] Device tapdcdae0ab-51 cannot be used as it has no MAC address
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.865 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.884 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:06 np0005541914.localdomain kernel: device tapdcdae0ab-51 entered promiscuous mode
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.891 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:06 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670206.8917] manager: (tapdcdae0ab-51): new Generic device (/org/freedesktop/NetworkManager/Devices/34)
Dec 02 10:10:06 np0005541914.localdomain systemd-udevd[318455]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:10:06 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:06Z|00160|binding|INFO|Claiming lport dcdae0ab-51b5-4def-b7cc-5be762ad32e1 for this chassis.
Dec 02 10:10:06 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:06Z|00161|binding|INFO|dcdae0ab-51b5-4def-b7cc-5be762ad32e1: Claiming unknown
Dec 02 10:10:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:10:06
Dec 02 10:10:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:10:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:10:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['manila_metadata', 'backups', 'volumes', 'manila_data', '.mgr', 'images', 'vms']
Dec 02 10:10:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:10:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.917 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device
Dec 02 10:10:06 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:06Z|00162|binding|INFO|Setting lport dcdae0ab-51b5-4def-b7cc-5be762ad32e1 ovn-installed in OVS
Dec 02 10:10:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.921 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.923 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device
Dec 02 10:10:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device
Dec 02 10:10:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device
Dec 02 10:10:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device
Dec 02 10:10:06 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdcdae0ab-51: No such device
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.946 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:10:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:10:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:06.970 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:10:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:10:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:10:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fd3633f2280>)]
Dec 02 10:10:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 02 10:10:07 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:07Z|00163|binding|INFO|Setting lport dcdae0ab-51b5-4def-b7cc-5be762ad32e1 up in Southbound
Dec 02 10:10:07 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:07.100 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-166c1533-b4e1-407e-b8de-28630b01d9d5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-166c1533-b4e1-407e-b8de-28630b01d9d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d11f96a2f644a22a82a6af9a2a1e5d2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c55ed48-bff3-4947-9ef2-3c1cfe7583ea, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=dcdae0ab-51b5-4def-b7cc-5be762ad32e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:10:07 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:07.102 159483 INFO neutron.agent.ovn.metadata.agent [-] Port dcdae0ab-51b5-4def-b7cc-5be762ad32e1 in datapath 166c1533-b4e1-407e-b8de-28630b01d9d5 bound to our chassis
Dec 02 10:10:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:07.103 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:10:07 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:07.104 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port 659d58ac-5e35-45bd-9fb2-a533ad9914c5 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 02 10:10:07 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:07.105 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 166c1533-b4e1-407e-b8de-28630b01d9d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:10:07 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:07.105 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[5d22b721-f92d-4fde-a40e-68787bd53c6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v367: 177 pgs: 177 active+clean; 147 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 24 KiB/s wr, 7 op/s
Dec 02 10:10:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:07.142 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:10:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:07.143 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.884s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32)
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32)
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 9.187648310999442e-05 of space, bias 4.0, pg target 0.07313368055555555 quantized to 16 (current 16)
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:10:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/4072896173' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:10:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:10:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 02 10:10:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:10:08 np0005541914.localdomain podman[318526]: 2025-12-02 10:10:08.026378304 +0000 UTC m=+0.064946368 container create fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:10:08 np0005541914.localdomain podman[318526]: 2025-12-02 10:10:07.987660134 +0000 UTC m=+0.026228198 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:10:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:08.114 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:08 np0005541914.localdomain systemd[1]: Started libpod-conmon-fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877.scope.
Dec 02 10:10:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:08.125 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:10:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:08.126 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:10:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:08.126 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:10:08 np0005541914.localdomain systemd[1]: tmp-crun.ZnNKlc.mount: Deactivated successfully.
Dec 02 10:10:08 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:10:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:08 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b369f80c954f4690246e93f2906d95212a5da2cdad75d836922a3269ac60754b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:10:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:10:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:10:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:08 np0005541914.localdomain podman[318526]: 2025-12-02 10:10:08.152371755 +0000 UTC m=+0.190939819 container init fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true)
Dec 02 10:10:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:08.159 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:10:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:08.160 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:10:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:08.160 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:10:08 np0005541914.localdomain podman[318526]: 2025-12-02 10:10:08.161639179 +0000 UTC m=+0.200207253 container start fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 02 10:10:08 np0005541914.localdomain dnsmasq[318545]: started, version 2.85 cachesize 150
Dec 02 10:10:08 np0005541914.localdomain dnsmasq[318545]: DNS service limited to local subnets
Dec 02 10:10:08 np0005541914.localdomain dnsmasq[318545]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:10:08 np0005541914.localdomain dnsmasq[318545]: warning: no upstream servers configured
Dec 02 10:10:08 np0005541914.localdomain dnsmasq-dhcp[318545]: DHCP, static leases only on 10.100.0.16, lease time 1d
Dec 02 10:10:08 np0005541914.localdomain dnsmasq[318545]: read /var/lib/neutron/dhcp/166c1533-b4e1-407e-b8de-28630b01d9d5/addn_hosts - 0 addresses
Dec 02 10:10:08 np0005541914.localdomain dnsmasq-dhcp[318545]: read /var/lib/neutron/dhcp/166c1533-b4e1-407e-b8de-28630b01d9d5/host
Dec 02 10:10:08 np0005541914.localdomain dnsmasq-dhcp[318545]: read /var/lib/neutron/dhcp/166c1533-b4e1-407e-b8de-28630b01d9d5/opts
Dec 02 10:10:08 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:08.395 262347 INFO neutron.agent.dhcp.agent [None req-74f0b61f-c27c-4c47-80d9-385cbdcf0d09 - - - - - -] DHCP configuration for ports {'08c7a173-e070-49da-a327-804bad2b36c4'} is completed
Dec 02 10:10:08 np0005541914.localdomain ceph-mon[301710]: pgmap v367: 177 pgs: 177 active+clean; 147 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 24 KiB/s wr, 7 op/s
Dec 02 10:10:08 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:10:08 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:08 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:10:08 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 02 10:10:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:08.495 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 659d58ac-5e35-45bd-9fb2-a533ad9914c5 with type ""
Dec 02 10:10:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:08Z|00164|binding|INFO|Removing iface tapdcdae0ab-51 ovn-installed in OVS
Dec 02 10:10:08 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:08Z|00165|binding|INFO|Removing lport dcdae0ab-51b5-4def-b7cc-5be762ad32e1 ovn-installed in OVS
Dec 02 10:10:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:08.497 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:08.501 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:08.503 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-166c1533-b4e1-407e-b8de-28630b01d9d5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-166c1533-b4e1-407e-b8de-28630b01d9d5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6d11f96a2f644a22a82a6af9a2a1e5d2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c55ed48-bff3-4947-9ef2-3c1cfe7583ea, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=dcdae0ab-51b5-4def-b7cc-5be762ad32e1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:10:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:08.505 159483 INFO neutron.agent.ovn.metadata.agent [-] Port dcdae0ab-51b5-4def-b7cc-5be762ad32e1 in datapath 166c1533-b4e1-407e-b8de-28630b01d9d5 unbound from our chassis
Dec 02 10:10:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:08.507 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 166c1533-b4e1-407e-b8de-28630b01d9d5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:10:08 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:08.507 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e68ea0e5-7ad0-408b-a240-22c033063f68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:10:08 np0005541914.localdomain dnsmasq[318545]: exiting on receipt of SIGTERM
Dec 02 10:10:08 np0005541914.localdomain podman[318563]: 2025-12-02 10:10:08.567510181 +0000 UTC m=+0.049074240 container kill fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 10:10:08 np0005541914.localdomain systemd[1]: libpod-fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877.scope: Deactivated successfully.
Dec 02 10:10:08 np0005541914.localdomain podman[318577]: 2025-12-02 10:10:08.615440143 +0000 UTC m=+0.036736579 container died fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:10:08 np0005541914.localdomain podman[318577]: 2025-12-02 10:10:08.639365518 +0000 UTC m=+0.060661904 container cleanup fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:10:08 np0005541914.localdomain systemd[1]: libpod-conmon-fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877.scope: Deactivated successfully.
Dec 02 10:10:08 np0005541914.localdomain podman[318578]: 2025-12-02 10:10:08.697815464 +0000 UTC m=+0.112485417 container remove fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-166c1533-b4e1-407e-b8de-28630b01d9d5, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:10:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:08.711 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:08 np0005541914.localdomain kernel: device tapdcdae0ab-51 left promiscuous mode
Dec 02 10:10:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:08.727 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:08 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:08.933 262347 INFO neutron.agent.dhcp.agent [None req-ee747dfa-2918-431f-b274-20caf043f838 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:10:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-b369f80c954f4690246e93f2906d95212a5da2cdad75d836922a3269ac60754b-merged.mount: Deactivated successfully.
Dec 02 10:10:09 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fed3a40df6b1907faa8f6b0084b24df69b96e19ee88ae0bb8219c12580739877-userdata-shm.mount: Deactivated successfully.
Dec 02 10:10:09 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2d166c1533\x2db4e1\x2d407e\x2db8de\x2d28630b01d9d5.mount: Deactivated successfully.
Dec 02 10:10:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v368: 177 pgs: 177 active+clean; 163 MiB data, 837 MiB used, 41 GiB / 42 GiB avail; 9.5 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 02 10:10:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:09.148 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:10:09 np0005541914.localdomain ceph-mgr[287188]: [devicehealth INFO root] Check health
Dec 02 10:10:09 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:09 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/24049623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:10:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:09.576 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:10 np0005541914.localdomain ceph-mon[301710]: pgmap v368: 177 pgs: 177 active+clean; 163 MiB data, 837 MiB used, 41 GiB / 42 GiB avail; 9.5 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Dec 02 10:10:10 np0005541914.localdomain ceph-mon[301710]: mgrmap e48: np0005541914.lljzmk(active, since 10m), standbys: np0005541912.qwddia, np0005541913.mfesdm
Dec 02 10:10:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/843803668' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:10:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:10.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:10:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:10.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:10:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:10.819 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v369: 177 pgs: 177 active+clean; 163 MiB data, 837 MiB used, 41 GiB / 42 GiB avail; 9.4 KiB/s rd, 1.4 MiB/s wr, 20 op/s
Dec 02 10:10:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:10:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:10:11 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:10:11 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:10:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:10:11 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:10:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:10:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
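The ceph-mgr and ceph-mon entries at 10:10:11 trace a Manila access grant: client.openstack submits "fs subvolume authorize" for auth_id alice on subvolume 4951f94c-f3a4-4170-9869-8238a9dc7b72, and the mgr volumes module answers by issuing "auth get" and "auth get-or-create" for client.alice with mds caps restricted to the subvolume path and osd caps restricted to the fsvolumens_* namespace of the manila_data pool. A minimal Python sketch of dispatching the same command through the librados binding follows; only the JSON payload is taken from the log, while the conffile path and client name are assumptions.

    import json
    import rados

    # Assumed ceph.conf/keyring location and client name; the payload below mirrors
    # the "fs subvolume authorize" command recorded in the audit entries above.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    cmd = json.dumps({
        "prefix": "fs subvolume authorize",
        "vol_name": "cephfs",
        "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72",
        "auth_id": "alice",
        "tenant_id": "a241a07e4161486091e8de3f95a1d6c6",
        "access_level": "rw",
        "format": "json",
    })
    # The monitor forwards the request to the active mgr, whose volumes module then
    # runs the "auth get-or-create client.alice" command seen in the log.
    ret, outbuf, outs = cluster.mon_command(cmd, b'')
    print(ret, outbuf.decode() if outbuf else '', outs)
    cluster.shutdown()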
Dec 02 10:10:11 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:11.926 262347 INFO neutron.agent.linux.ip_lib [None req-f7e42905-5b42-458e-a385-8ecab4999b16 - - - - - -] Device tapdad34ce7-27 cannot be used as it has no MAC address
Dec 02 10:10:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:11.997 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:12 np0005541914.localdomain kernel: device tapdad34ce7-27 entered promiscuous mode
Dec 02 10:10:12 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670212.0035] manager: (tapdad34ce7-27): new Generic device (/org/freedesktop/NetworkManager/Devices/35)
Dec 02 10:10:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:12.003 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:12 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:12Z|00166|binding|INFO|Claiming lport dad34ce7-27fc-44a3-9b51-6f8e06622f64 for this chassis.
Dec 02 10:10:12 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:12Z|00167|binding|INFO|dad34ce7-27fc-44a3-9b51-6f8e06622f64: Claiming unknown
Dec 02 10:10:12 np0005541914.localdomain systemd-udevd[318616]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:10:12 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:12.014 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=421d26f0-533b-46ec-b4bf-90e9b385208d, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=dad34ce7-27fc-44a3-9b51-6f8e06622f64) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:10:12 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:12.016 159483 INFO neutron.agent.ovn.metadata.agent [-] Port dad34ce7-27fc-44a3-9b51-6f8e06622f64 in datapath 9a4bcf0d-86b2-4c41-a908-20a95f0c63b6 bound to our chassis
Dec 02 10:10:12 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:12Z|00168|binding|INFO|Setting lport dad34ce7-27fc-44a3-9b51-6f8e06622f64 ovn-installed in OVS
Dec 02 10:10:12 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:12Z|00169|binding|INFO|Setting lport dad34ce7-27fc-44a3-9b51-6f8e06622f64 up in Southbound
Dec 02 10:10:12 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:12.017 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port e5695a05-13ac-4749-a287-15f3726545f7 with type ""
Dec 02 10:10:12 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:12Z|00170|binding|INFO|Removing iface tapdad34ce7-27 ovn-installed in OVS
Dec 02 10:10:12 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:12.018 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=421d26f0-533b-46ec-b4bf-90e9b385208d, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=dad34ce7-27fc-44a3-9b51-6f8e06622f64) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:10:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:12.018 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:12 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:12.024 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9a4bcf0d-86b2-4c41-a908-20a95f0c63b6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:10:12 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:12.025 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[c0777391-65a8-4e93-8fa3-17cc89c53ec6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:10:12 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:12.026 159483 INFO neutron.agent.ovn.metadata.agent [-] Port dad34ce7-27fc-44a3-9b51-6f8e06622f64 in datapath 9a4bcf0d-86b2-4c41-a908-20a95f0c63b6 unbound from our chassis
Dec 02 10:10:12 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:12.026 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 9a4bcf0d-86b2-4c41-a908-20a95f0c63b6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:10:12 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:12.027 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[2973217b-c11b-4f21-b2fe-8b456b52d504]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:10:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:12.027 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:12 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 02 10:10:12 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 02 10:10:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:12.036 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:12 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 02 10:10:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:12.039 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:12 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 02 10:10:12 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 02 10:10:12 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 02 10:10:12 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 02 10:10:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:12.066 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:12 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdad34ce7-27: No such device
Dec 02 10:10:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:12.094 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:10:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:10:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:10:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:10:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:10:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:10:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:10:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:10:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:10:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:10:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:10:12 np0005541914.localdomain ceph-mon[301710]: pgmap v369: 177 pgs: 177 active+clean; 163 MiB data, 837 MiB used, 41 GiB / 42 GiB avail; 9.4 KiB/s rd, 1.4 MiB/s wr, 20 op/s
Dec 02 10:10:12 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:10:12 np0005541914.localdomain podman[318688]: 2025-12-02 10:10:12.827719152 +0000 UTC m=+0.036230914 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:10:12 np0005541914.localdomain podman[318688]: 2025-12-02 10:10:12.968402575 +0000 UTC m=+0.176914347 container create 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 02 10:10:13 np0005541914.localdomain systemd[1]: Started libpod-conmon-23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5.scope.
Dec 02 10:10:13 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:10:13 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c974d07664f24ea733e7f13eaeb626065a77246b8e684500f0109ffb11044aca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:10:13 np0005541914.localdomain podman[318688]: 2025-12-02 10:10:13.060224966 +0000 UTC m=+0.268736729 container init 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 02 10:10:13 np0005541914.localdomain dnsmasq[318706]: started, version 2.85 cachesize 150
Dec 02 10:10:13 np0005541914.localdomain dnsmasq[318706]: DNS service limited to local subnets
Dec 02 10:10:13 np0005541914.localdomain dnsmasq[318706]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:10:13 np0005541914.localdomain dnsmasq[318706]: warning: no upstream servers configured
Dec 02 10:10:13 np0005541914.localdomain dnsmasq-dhcp[318706]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 02 10:10:13 np0005541914.localdomain podman[318688]: 2025-12-02 10:10:13.097986896 +0000 UTC m=+0.306498698 container start 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 02 10:10:13 np0005541914.localdomain dnsmasq[318706]: read /var/lib/neutron/dhcp/9a4bcf0d-86b2-4c41-a908-20a95f0c63b6/addn_hosts - 0 addresses
Dec 02 10:10:13 np0005541914.localdomain dnsmasq-dhcp[318706]: read /var/lib/neutron/dhcp/9a4bcf0d-86b2-4c41-a908-20a95f0c63b6/host
Dec 02 10:10:13 np0005541914.localdomain dnsmasq-dhcp[318706]: read /var/lib/neutron/dhcp/9a4bcf0d-86b2-4c41-a908-20a95f0c63b6/opts
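The dnsmasq instance launched in the neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6 container reads its addn_hosts, host and opts files from /var/lib/neutron/dhcp/9a4bcf0d-86b2-4c41-a908-20a95f0c63b6/ and serves static DHCPv6 leases on 2001:db8:: with a one-day lease time. A rough reconstruction of such an invocation, written as a Python argv list, is sketched below; the file paths, subnet and lease time come from the log, while the specific flags are standard dnsmasq options assumed for illustration rather than copied from the Neutron agent.

    import subprocess

    net = '9a4bcf0d-86b2-4c41-a908-20a95f0c63b6'
    conf_dir = f'/var/lib/neutron/dhcp/{net}'

    # Hypothetical dnsmasq command line; the real agent runs it inside the
    # qdhcp-<network_id> namespace (and, here, inside a podman container).
    argv = [
        'dnsmasq',
        '--no-hosts',
        '--no-resolv',                              # matches "warning: no upstream servers configured"
        '--local-service',                          # matches "DNS service limited to local subnets"
        f'--dhcp-hostsfile={conf_dir}/host',
        f'--addn-hosts={conf_dir}/addn_hosts',
        f'--dhcp-optsfile={conf_dir}/opts',
        '--dhcp-range=2001:db8::,static,64,1d',     # "static leases only on 2001:db8::, lease time 1d"
    ]
    subprocess.run(argv, check=True)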
Dec 02 10:10:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v370: 177 pgs: 177 active+clean; 291 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 12 MiB/s wr, 40 op/s
Dec 02 10:10:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:13.193 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:13 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:13.250 262347 INFO neutron.agent.dhcp.agent [None req-b8e8c85c-cd4f-4434-b1ad-3d359266222c - - - - - -] DHCP configuration for ports {'c8067e90-d871-4aed-96dc-03c10e6871ff'} is completed
Dec 02 10:10:13 np0005541914.localdomain dnsmasq[318706]: exiting on receipt of SIGTERM
Dec 02 10:10:13 np0005541914.localdomain podman[318722]: 2025-12-02 10:10:13.396151987 +0000 UTC m=+0.054899907 container kill 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:10:13 np0005541914.localdomain systemd[1]: libpod-23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5.scope: Deactivated successfully.
Dec 02 10:10:13 np0005541914.localdomain podman[318736]: 2025-12-02 10:10:13.448705963 +0000 UTC m=+0.043767976 container died 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 02 10:10:13 np0005541914.localdomain podman[318736]: 2025-12-02 10:10:13.547710935 +0000 UTC m=+0.142772938 container cleanup 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 02 10:10:13 np0005541914.localdomain systemd[1]: libpod-conmon-23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5.scope: Deactivated successfully.
Dec 02 10:10:13 np0005541914.localdomain podman[318743]: 2025-12-02 10:10:13.571091233 +0000 UTC m=+0.152227198 container remove 23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-9a4bcf0d-86b2-4c41-a908-20a95f0c63b6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:10:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:13.583 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:13 np0005541914.localdomain kernel: device tapdad34ce7-27 left promiscuous mode
Dec 02 10:10:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:13.603 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:13 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:13.626 262347 INFO neutron.agent.dhcp.agent [None req-bd26d58b-e335-43f0-aa05-c75946207c8d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:10:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:13.626 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:13 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:13.627 262347 INFO neutron.agent.dhcp.agent [None req-bd26d58b-e335-43f0-aa05-c75946207c8d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:10:13 np0005541914.localdomain ceph-mon[301710]: pgmap v370: 177 pgs: 177 active+clean; 291 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 12 MiB/s wr, 40 op/s
Dec 02 10:10:13 np0005541914.localdomain systemd[1]: tmp-crun.2iLXZE.mount: Deactivated successfully.
Dec 02 10:10:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-c974d07664f24ea733e7f13eaeb626065a77246b8e684500f0109ffb11044aca-merged.mount: Deactivated successfully.
Dec 02 10:10:13 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23572d974e3a416aa0715c7e1309e2b810d1b6f1ee3d0f75f8f899b143f08ad5-userdata-shm.mount: Deactivated successfully.
Dec 02 10:10:13 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2d9a4bcf0d\x2d86b2\x2d4c41\x2da908\x2d20a95f0c63b6.mount: Deactivated successfully.
Dec 02 10:10:14 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:14.384 2 INFO neutron.agent.securitygroups_rpc [None req-aa29d14d-f8b7-4441-acc0-85287ab48c6d 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:10:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:10:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:14 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:10:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:10:14 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:10:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 02 10:10:14 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:10:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:10:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:10:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:10:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:15 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:15.078 2 INFO neutron.agent.securitygroups_rpc [None req-976c47c9-ae22-4daa-9b62-4c3bf838f3e2 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:10:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v371: 177 pgs: 177 active+clean; 291 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 12 MiB/s wr, 37 op/s
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:10:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:10:15 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:15.590 2 INFO neutron.agent.securitygroups_rpc [None req-90e76c3f-20a5-47a4-903d-78b983867e31 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:10:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:15.820 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:10:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:10:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:10:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 02 10:10:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:10:15 np0005541914.localdomain ceph-mon[301710]: pgmap v371: 177 pgs: 177 active+clean; 291 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 12 MiB/s wr, 37 op/s
Dec 02 10:10:15 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:15.916 2 INFO neutron.agent.securitygroups_rpc [None req-5542fe4c-166b-4ffa-972c-029f68af7f10 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:10:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:10:16 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.090 262347 INFO neutron.agent.dhcp.agent [None req-285c8656-c63d-4a62-a012-01c1dac192db - - - - - -] Synchronizing state
Dec 02 10:10:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:10:16 np0005541914.localdomain podman[318765]: 2025-12-02 10:10:16.097870423 +0000 UTC m=+0.089499051 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:10:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:10:16 np0005541914.localdomain podman[318765]: 2025-12-02 10:10:16.132983251 +0000 UTC m=+0.124611889 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Dec 02 10:10:16 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:10:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:10:16 np0005541914.localdomain podman[318783]: 2025-12-02 10:10:16.244445796 +0000 UTC m=+0.139555439 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:10:16 np0005541914.localdomain podman[318783]: 2025-12-02 10:10:16.256059453 +0000 UTC m=+0.151169076 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 10:10:16 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.268 262347 INFO neutron.agent.dhcp.agent [None req-50790067-e1ed-419b-a4c8-808a5373a8ca - - - - - -] All active networks have been fetched through RPC.
Dec 02 10:10:16 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.268 262347 INFO neutron.agent.dhcp.agent [-] Starting network d9f6bbb9-ad7f-4259-9522-4dab6766c81c dhcp configuration
Dec 02 10:10:16 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.269 262347 INFO neutron.agent.dhcp.agent [-] Finished network d9f6bbb9-ad7f-4259-9522-4dab6766c81c dhcp configuration
Dec 02 10:10:16 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.269 262347 INFO neutron.agent.dhcp.agent [None req-50790067-e1ed-419b-a4c8-808a5373a8ca - - - - - -] Synchronizing state complete
Dec 02 10:10:16 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.270 262347 INFO neutron.agent.dhcp.agent [None req-2567fcc3-49f9-4f19-b70e-cd0be5e905e9 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:10:16 np0005541914.localdomain podman[318805]: 2025-12-02 10:10:16.291270555 +0000 UTC m=+0.080722461 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 10:10:16 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:10:16 np0005541914.localdomain podman[318805]: 2025-12-02 10:10:16.356316414 +0000 UTC m=+0.145768290 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 10:10:16 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:10:16 np0005541914.localdomain podman[318784]: 2025-12-02 10:10:16.358575303 +0000 UTC m=+0.244926787 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 02 10:10:16 np0005541914.localdomain podman[318784]: 2025-12-02 10:10:16.438393955 +0000 UTC m=+0.324745479 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 10:10:16 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:10:16 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:16.908 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:10:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v372: 177 pgs: 177 active+clean; 291 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 12 MiB/s wr, 37 op/s
Dec 02 10:10:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:10:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:10:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:10:17 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:10:17 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:10:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:10:17 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:18.228 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:18 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:18.245 2 INFO neutron.agent.securitygroups_rpc [None req-46c6c36b-2a47-445c-9836-d1e79e5b14a9 8d2b383649fa45f2821f6e290127374a 84fd536b8b4d489f944ed3e4bbfaeb5b - - default default] Security group rule updated ['d6dcbb7b-b610-4062-87d4-37eec03c1ecf']
Dec 02 10:10:18 np0005541914.localdomain ceph-mon[301710]: pgmap v372: 177 pgs: 177 active+clean; 291 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 12 MiB/s wr, 37 op/s
Dec 02 10:10:18 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:10:18 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:18 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:18 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:10:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:10:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 02 10:10:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 02 10:10:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "format": "json"}]: dispatch
Dec 02 10:10:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v373: 177 pgs: 177 active+clean; 435 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 29 KiB/s rd, 24 MiB/s wr, 54 op/s
Dec 02 10:10:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:10:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:10:20 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:20.292 2 INFO neutron.agent.securitygroups_rpc [None req-0a3dbc5e-28c6-4790-aae5-682551e66674 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:10:20 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:10:20 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "format": "json"}]: dispatch
Dec 02 10:10:20 np0005541914.localdomain ceph-mon[301710]: pgmap v373: 177 pgs: 177 active+clean; 435 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 29 KiB/s rd, 24 MiB/s wr, 54 op/s
Dec 02 10:10:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:20.823 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v374: 177 pgs: 177 active+clean; 435 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 20 KiB/s rd, 23 MiB/s wr, 37 op/s
Dec 02 10:10:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:10:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:10:21 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:10:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 02 10:10:21 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:10:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:10:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:10:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:10:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:22 np0005541914.localdomain ceph-mon[301710]: pgmap v374: 177 pgs: 177 active+clean; 435 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 20 KiB/s rd, 23 MiB/s wr, 37 op/s
Dec 02 10:10:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:10:22 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:10:22 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:10:22 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:10:22 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 02 10:10:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "aacf1e5d-1b53-42f1-b3a7-45f0acb43c13", "format": "json"}]: dispatch
Dec 02 10:10:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aacf1e5d-1b53-42f1-b3a7-45f0acb43c13, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:aacf1e5d-1b53-42f1-b3a7-45f0acb43c13, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:10:23 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:23.032 262347 INFO neutron.agent.linux.ip_lib [None req-56d2e12e-fb73-49f7-ad4d-e7c6f8822ab2 - - - - - -] Device tap8e7a6388-06 cannot be used as it has no MAC address
Dec 02 10:10:23 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:23.083 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:10:22Z, description=, device_id=f7072194-e728-41c3-b399-b4e1ad65ca17, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f403552a970>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034b39550>], id=d41f53db-06e6-4a78-b76c-e87709f28b83, ip_allocation=immediate, mac_address=fa:16:3e:d8:74:e9, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2495, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:10:22Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:10:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:23.133 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:23 np0005541914.localdomain kernel: device tap8e7a6388-06 entered promiscuous mode
Dec 02 10:10:23 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670223.1398] manager: (tap8e7a6388-06): new Generic device (/org/freedesktop/NetworkManager/Devices/36)
Dec 02 10:10:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:23.140 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:23 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:23Z|00171|binding|INFO|Claiming lport 8e7a6388-0616-4036-bc8b-c45817966af9 for this chassis.
Dec 02 10:10:23 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:23Z|00172|binding|INFO|8e7a6388-0616-4036-bc8b-c45817966af9: Claiming unknown
Dec 02 10:10:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v375: 177 pgs: 177 active+clean; 563 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 27 KiB/s rd, 33 MiB/s wr, 54 op/s
Dec 02 10:10:23 np0005541914.localdomain systemd-udevd[318865]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:10:23 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:23.154 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-fc2e8456-8064-45d4-b986-3bd5157209ba', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2e8456-8064-45d4-b986-3bd5157209ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7326c3837b4427191aafcff504110ac', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb1a8e80-528c-4bda-8d5b-06a577344504, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=8e7a6388-0616-4036-bc8b-c45817966af9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:10:23 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:23.156 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 8e7a6388-0616-4036-bc8b-c45817966af9 in datapath fc2e8456-8064-45d4-b986-3bd5157209ba bound to our chassis
Dec 02 10:10:23 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:23.158 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9fe7ca25-963a-4e60-ab5d-063d04cd1fbe IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 02 10:10:23 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:23.158 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2e8456-8064-45d4-b986-3bd5157209ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:10:23 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:23.159 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[50b6e7b6-c835-47a9-bf43-a0e722f49097]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:10:23 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:23Z|00173|binding|INFO|Setting lport 8e7a6388-0616-4036-bc8b-c45817966af9 ovn-installed in OVS
Dec 02 10:10:23 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:23Z|00174|binding|INFO|Setting lport 8e7a6388-0616-4036-bc8b-c45817966af9 up in Southbound
Dec 02 10:10:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:23.160 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:23.161 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:10:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:23.179 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:23.209 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:23.230 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:10:23 np0005541914.localdomain podman[318873]: 2025-12-02 10:10:23.303825628 +0000 UTC m=+0.133193193 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:10:23 np0005541914.localdomain systemd[1]: tmp-crun.W7Dkry.mount: Deactivated successfully.
Dec 02 10:10:23 np0005541914.localdomain podman[318873]: 2025-12-02 10:10:23.315689372 +0000 UTC m=+0.145056937 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:10:23 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:10:23 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:10:23 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "aacf1e5d-1b53-42f1-b3a7-45f0acb43c13", "format": "json"}]: dispatch
Dec 02 10:10:23 np0005541914.localdomain podman[318895]: 2025-12-02 10:10:23.379795792 +0000 UTC m=+0.146409869 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:10:23 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:10:23 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:10:23 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:10:23 np0005541914.localdomain podman[318918]: 2025-12-02 10:10:23.369498977 +0000 UTC m=+0.059638985 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec 02 10:10:23 np0005541914.localdomain podman[318918]: 2025-12-02 10:10:23.47702595 +0000 UTC m=+0.167165958 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-type=git, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=minimal rhel9)
Dec 02 10:10:23 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:10:23 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:23.649 262347 INFO neutron.agent.dhcp.agent [None req-d5c42fa7-b5be-4a22-8211-cbb0209eee28 - - - - - -] DHCP configuration for ports {'d41f53db-06e6-4a78-b76c-e87709f28b83'} is completed
Dec 02 10:10:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:24.038 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:24 np0005541914.localdomain podman[318991]: 
Dec 02 10:10:24 np0005541914.localdomain podman[318991]: 2025-12-02 10:10:24.062634793 +0000 UTC m=+0.095694141 container create bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:10:24 np0005541914.localdomain systemd[1]: Started libpod-conmon-bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348.scope.
Dec 02 10:10:24 np0005541914.localdomain podman[318991]: 2025-12-02 10:10:24.014230166 +0000 UTC m=+0.047289594 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:10:24 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:10:24 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2158c3d2b4f5070bc0f9feccb69eab6266c44d72576ad850b561465969ca2e04/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:10:24 np0005541914.localdomain podman[318991]: 2025-12-02 10:10:24.138720271 +0000 UTC m=+0.171779659 container init bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:10:24 np0005541914.localdomain podman[318991]: 2025-12-02 10:10:24.150028548 +0000 UTC m=+0.183087906 container start bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 02 10:10:24 np0005541914.localdomain dnsmasq[319009]: started, version 2.85 cachesize 150
Dec 02 10:10:24 np0005541914.localdomain dnsmasq[319009]: DNS service limited to local subnets
Dec 02 10:10:24 np0005541914.localdomain dnsmasq[319009]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:10:24 np0005541914.localdomain dnsmasq[319009]: warning: no upstream servers configured
Dec 02 10:10:24 np0005541914.localdomain dnsmasq-dhcp[319009]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 02 10:10:24 np0005541914.localdomain dnsmasq[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/addn_hosts - 0 addresses
Dec 02 10:10:24 np0005541914.localdomain dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/host
Dec 02 10:10:24 np0005541914.localdomain dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/opts
Dec 02 10:10:24 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:24.325 262347 INFO neutron.agent.dhcp.agent [None req-093141b0-8f58-450d-bb4d-3a3097e1d39a - - - - - -] DHCP configuration for ports {'15bd109d-23f2-4220-b6ed-d39b4a974041'} is completed
Dec 02 10:10:24 np0005541914.localdomain ceph-mon[301710]: pgmap v375: 177 pgs: 177 active+clean; 563 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 27 KiB/s rd, 33 MiB/s wr, 54 op/s
Dec 02 10:10:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:10:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:10:24 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:10:24 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:10:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:10:24 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:24 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:24.932 2 INFO neutron.agent.securitygroups_rpc [None req-3ae6da31-b902-4566-8baa-11e094d2ee12 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:10:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v376: 177 pgs: 177 active+clean; 563 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 16 KiB/s rd, 23 MiB/s wr, 33 op/s
Dec 02 10:10:25 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:10:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:10:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:10:25 np0005541914.localdomain sudo[319010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:10:25 np0005541914.localdomain sudo[319010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:10:25 np0005541914.localdomain sudo[319010]: pam_unix(sudo:session): session closed for user root
Dec 02 10:10:25 np0005541914.localdomain sudo[319028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 10:10:25 np0005541914.localdomain sudo[319028]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:10:25 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:25.787 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:10:24Z, description=, device_id=f7072194-e728-41c3-b399-b4e1ad65ca17, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a506d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a50a30>], id=6f85e6fe-549a-4116-a1ab-84bd28711afb, ip_allocation=immediate, mac_address=fa:16:3e:b7:59:68, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:10:18Z, description=, dns_domain=, id=fc2e8456-8064-45d4-b986-3bd5157209ba, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-2082968740-network, port_security_enabled=True, project_id=f7326c3837b4427191aafcff504110ac, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17638, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2467, status=ACTIVE, subnets=['2cf7d7fa-d01f-4f91-a974-bccbb7a5e8f6'], tags=[], tenant_id=f7326c3837b4427191aafcff504110ac, updated_at=2025-12-02T10:10:20Z, vlan_transparent=None, network_id=fc2e8456-8064-45d4-b986-3bd5157209ba, port_security_enabled=False, project_id=f7326c3837b4427191aafcff504110ac, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2506, status=DOWN, tags=[], tenant_id=f7326c3837b4427191aafcff504110ac, updated_at=2025-12-02T10:10:24Z on network fc2e8456-8064-45d4-b986-3bd5157209ba
Dec 02 10:10:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:25.825 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "99d59f6b-cf2c-47ae-b465-7c2965afd103", "format": "json"}]: dispatch
Dec 02 10:10:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:99d59f6b-cf2c-47ae-b465-7c2965afd103, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:26 np0005541914.localdomain dnsmasq[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/addn_hosts - 1 addresses
Dec 02 10:10:26 np0005541914.localdomain dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/host
Dec 02 10:10:26 np0005541914.localdomain dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/opts
Dec 02 10:10:26 np0005541914.localdomain podman[319063]: 2025-12-02 10:10:26.018182099 +0000 UTC m=+0.042340452 container kill bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 02 10:10:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:99d59f6b-cf2c-47ae-b465-7c2965afd103, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:26 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:26.316 262347 INFO neutron.agent.dhcp.agent [None req-2cdfcba8-4d8c-4b77-af18-f2c12137f31c - - - - - -] DHCP configuration for ports {'6f85e6fe-549a-4116-a1ab-84bd28711afb'} is completed
Dec 02 10:10:26 np0005541914.localdomain ceph-mon[301710]: pgmap v376: 177 pgs: 177 active+clean; 563 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 16 KiB/s rd, 23 MiB/s wr, 33 op/s
Dec 02 10:10:26 np0005541914.localdomain podman[319155]: 2025-12-02 10:10:26.591129944 +0000 UTC m=+0.102343336 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, com.redhat.component=rhceph-container, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, version=7, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.41.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 10:10:26 np0005541914.localdomain podman[319155]: 2025-12-02 10:10:26.71985949 +0000 UTC m=+0.231072872 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.buildah.version=1.41.4, distribution-scope=public, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, build-date=2025-11-26T19:44:28Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, name=rhceph, release=1763362218, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_CLEAN=True, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Dec 02 10:10:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v377: 177 pgs: 177 active+clean; 563 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 16 KiB/s rd, 23 MiB/s wr, 33 op/s
Dec 02 10:10:27 np0005541914.localdomain sudo[319028]: pam_unix(sudo:session): session closed for user root
Dec 02 10:10:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 10:10:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 10:10:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 10:10:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 10:10:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 10:10:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 10:10:27 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "99d59f6b-cf2c-47ae-b465-7c2965afd103", "format": "json"}]: dispatch
Dec 02 10:10:27 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:10:27 np0005541914.localdomain sudo[319274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:10:27 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:10:27 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:10:27 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:10:27 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:10:27 np0005541914.localdomain sudo[319274]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:10:27 np0005541914.localdomain sudo[319274]: pam_unix(sudo:session): session closed for user root
Dec 02 10:10:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:10:27 np0005541914.localdomain sudo[319292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:10:27 np0005541914.localdomain sudo[319292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:28.127 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:10:24Z, description=, device_id=f7072194-e728-41c3-b399-b4e1ad65ca17, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cdd8b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034cdd1c0>], id=6f85e6fe-549a-4116-a1ab-84bd28711afb, ip_allocation=immediate, mac_address=fa:16:3e:b7:59:68, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:10:18Z, description=, dns_domain=, id=fc2e8456-8064-45d4-b986-3bd5157209ba, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-2082968740-network, port_security_enabled=True, project_id=f7326c3837b4427191aafcff504110ac, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=17638, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2467, status=ACTIVE, subnets=['2cf7d7fa-d01f-4f91-a974-bccbb7a5e8f6'], tags=[], tenant_id=f7326c3837b4427191aafcff504110ac, updated_at=2025-12-02T10:10:20Z, vlan_transparent=None, network_id=fc2e8456-8064-45d4-b986-3bd5157209ba, port_security_enabled=False, project_id=f7326c3837b4427191aafcff504110ac, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2506, status=DOWN, tags=[], tenant_id=f7326c3837b4427191aafcff504110ac, updated_at=2025-12-02T10:10:24Z on network fc2e8456-8064-45d4-b986-3bd5157209ba
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:28.234 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:28 np0005541914.localdomain sudo[319292]: pam_unix(sudo:session): session closed for user root
Dec 02 10:10:28 np0005541914.localdomain systemd[1]: tmp-crun.mBz26C.mount: Deactivated successfully.
Dec 02 10:10:28 np0005541914.localdomain dnsmasq[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/addn_hosts - 1 addresses
Dec 02 10:10:28 np0005541914.localdomain dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/host
Dec 02 10:10:28 np0005541914.localdomain dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/opts
Dec 02 10:10:28 np0005541914.localdomain podman[319360]: 2025-12-02 10:10:28.368392833 +0000 UTC m=+0.066091482 container kill bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
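[editor's note] The three WRN pairs above come from cephadm's osd_memory_target autotuning: it computed 877246668 bytes per OSD (logged as "836.6M") for each of the three hosts, but the monitor rejects the "config set" because the value is below the enforced osd_memory_target minimum of 939524096 bytes. A minimal Python sketch, using only the numbers that appear in the log, confirms the arithmetic behind the rejection; the mention of the per-OSD autotune switch in the comment is an assumption about the deployment, not something shown in this log.

    # Sanity-check the cephadm autotune warning seen in the log above.
    computed = 877_246_668   # bytes, "Adjusting osd_memory_target ... to 836.6M"
    minimum  = 939_524_096   # bytes, "is below minimum 939524096"

    MiB = 1024 * 1024
    print(f"computed: {computed / MiB:.1f} MiB")   # -> 836.6 MiB
    print(f"minimum:  {minimum / MiB:.1f} MiB")    # -> 896.0 MiB
    print("rejected:", computed < minimum)         # -> True, hence the WRN lines
    # On memory-constrained hosts one would typically either add RAM or disable
    # per-OSD autotuning (e.g. via the osd_memory_target_autotune option) --
    # that option name is an assumption here, not taken from this log.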
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 06e2cfb1-440a-45bd-89b0-fc844d89986b (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 06e2cfb1-440a-45bd-89b0-fc844d89986b (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:10:28 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 06e2cfb1-440a-45bd-89b0-fc844d89986b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: pgmap v377: 177 pgs: 177 active+clean; 563 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 16 KiB/s rd, 23 MiB/s wr, 33 op/s
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:10:28 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:28.615 262347 INFO neutron.agent.dhcp.agent [None req-ba8a10c5-e9de-4c9b-a884-2827556420a0 - - - - - -] DHCP configuration for ports {'6f85e6fe-549a-4116-a1ab-84bd28711afb'} is completed
Dec 02 10:10:28 np0005541914.localdomain sudo[319382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:10:28 np0005541914.localdomain sudo[319382]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:10:28 np0005541914.localdomain sudo[319382]: pam_unix(sudo:session): session closed for user root
Dec 02 10:10:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v378: 177 pgs: 177 active+clean; 675 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 23 KiB/s rd, 32 MiB/s wr, 48 op/s
Dec 02 10:10:29 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:29.408 262347 INFO neutron.agent.linux.ip_lib [None req-21ab1c9e-59d0-40c0-a5fe-fa801326b9f6 - - - - - -] Device tap18876e1d-66 cannot be used as it has no MAC address
Dec 02 10:10:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:29.436 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:29 np0005541914.localdomain kernel: device tap18876e1d-66 entered promiscuous mode
Dec 02 10:10:29 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670229.4468] manager: (tap18876e1d-66): new Generic device (/org/freedesktop/NetworkManager/Devices/37)
Dec 02 10:10:29 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:29Z|00175|binding|INFO|Claiming lport 18876e1d-6618-49f4-b6fb-335739a3ce98 for this chassis.
Dec 02 10:10:29 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:29Z|00176|binding|INFO|18876e1d-6618-49f4-b6fb-335739a3ce98: Claiming unknown
Dec 02 10:10:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:29.450 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:29 np0005541914.localdomain systemd-udevd[319410]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:10:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:29.465 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=192b8184-6f69-46ff-bf8b-6041bbb62345, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=18876e1d-6618-49f4-b6fb-335739a3ce98) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:10:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:29.467 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 18876e1d-6618-49f4-b6fb-335739a3ce98 in datapath b08b53d5-dfe9-4f37-b0e2-3da89dc155a5 bound to our chassis
Dec 02 10:10:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:29.469 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b08b53d5-dfe9-4f37-b0e2-3da89dc155a5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:10:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:29.470 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[01f678a1-b9ad-47fc-bb91-616327acf90e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:10:29 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap18876e1d-66: No such device
Dec 02 10:10:29 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap18876e1d-66: No such device
Dec 02 10:10:29 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:29Z|00177|binding|INFO|Setting lport 18876e1d-6618-49f4-b6fb-335739a3ce98 ovn-installed in OVS
Dec 02 10:10:29 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:29Z|00178|binding|INFO|Setting lport 18876e1d-6618-49f4-b6fb-335739a3ce98 up in Southbound
Dec 02 10:10:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:29.487 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:29 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap18876e1d-66: No such device
Dec 02 10:10:29 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap18876e1d-66: No such device
Dec 02 10:10:29 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap18876e1d-66: No such device
Dec 02 10:10:29 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap18876e1d-66: No such device
Dec 02 10:10:29 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap18876e1d-66: No such device
Dec 02 10:10:29 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap18876e1d-66: No such device
Dec 02 10:10:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:29.526 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:29.557 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:10:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:10:29 np0005541914.localdomain ceph-mon[301710]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 10:10:29 np0005541914.localdomain ceph-mon[301710]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 10:10:29 np0005541914.localdomain ceph-mon[301710]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 10:10:29 np0005541914.localdomain ceph-mon[301710]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:10:29 np0005541914.localdomain ceph-mon[301710]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:10:29 np0005541914.localdomain ceph-mon[301710]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:10:29 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:10:29 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:10:29 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:29Z|00179|binding|INFO|Removing iface tap18876e1d-66 ovn-installed in OVS
Dec 02 10:10:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:29.911 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 624d824d-4d49-4b95-a3a4-9a03de2f337b with type ""
Dec 02 10:10:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:29.913 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=192b8184-6f69-46ff-bf8b-6041bbb62345, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=18876e1d-6618-49f4-b6fb-335739a3ce98) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:10:29 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:29Z|00180|binding|INFO|Removing lport 18876e1d-6618-49f4-b6fb-335739a3ce98 ovn-installed in OVS
Dec 02 10:10:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:29.915 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:29.917 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 18876e1d-6618-49f4-b6fb-335739a3ce98 in datapath b08b53d5-dfe9-4f37-b0e2-3da89dc155a5 unbound from our chassis
Dec 02 10:10:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:29.919 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:29.919 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b08b53d5-dfe9-4f37-b0e2-3da89dc155a5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:10:29 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:29.920 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d2c1b2-7625-4fc2-aa5c-94507d41b4a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:10:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "99d59f6b-cf2c-47ae-b465-7c2965afd103_827df7ca-4b70-438d-9e17-f06019bfe5e4", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:99d59f6b-cf2c-47ae-b465-7c2965afd103_827df7ca-4b70-438d-9e17-f06019bfe5e4, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 02 10:10:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 02 10:10:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:99d59f6b-cf2c-47ae-b465-7c2965afd103_827df7ca-4b70-438d-9e17-f06019bfe5e4, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "99d59f6b-cf2c-47ae-b465-7c2965afd103", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:99d59f6b-cf2c-47ae-b465-7c2965afd103, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 02 10:10:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 02 10:10:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:99d59f6b-cf2c-47ae-b465-7c2965afd103, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:30 np0005541914.localdomain podman[319481]: 2025-12-02 10:10:30.49509863 +0000 UTC m=+0.076999687 container create f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 02 10:10:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:10:30 np0005541914.localdomain systemd[1]: Started libpod-conmon-f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658.scope.
Dec 02 10:10:30 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:10:30 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e445d47f7aa63bc26523d0df8868e3fcbc251684de06ab5b024e5fe07226834/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:10:30 np0005541914.localdomain podman[319481]: 2025-12-02 10:10:30.453333686 +0000 UTC m=+0.035234773 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:10:30 np0005541914.localdomain podman[319481]: 2025-12-02 10:10:30.573348094 +0000 UTC m=+0.155249161 container init f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:10:30 np0005541914.localdomain dnsmasq[319512]: started, version 2.85 cachesize 150
Dec 02 10:10:30 np0005541914.localdomain dnsmasq[319512]: DNS service limited to local subnets
Dec 02 10:10:30 np0005541914.localdomain dnsmasq[319512]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:10:30 np0005541914.localdomain dnsmasq[319512]: warning: no upstream servers configured
Dec 02 10:10:30 np0005541914.localdomain dnsmasq-dhcp[319512]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 02 10:10:30 np0005541914.localdomain dnsmasq[319512]: read /var/lib/neutron/dhcp/b08b53d5-dfe9-4f37-b0e2-3da89dc155a5/addn_hosts - 0 addresses
Dec 02 10:10:30 np0005541914.localdomain dnsmasq-dhcp[319512]: read /var/lib/neutron/dhcp/b08b53d5-dfe9-4f37-b0e2-3da89dc155a5/host
Dec 02 10:10:30 np0005541914.localdomain dnsmasq-dhcp[319512]: read /var/lib/neutron/dhcp/b08b53d5-dfe9-4f37-b0e2-3da89dc155a5/opts
Dec 02 10:10:30 np0005541914.localdomain podman[319495]: 2025-12-02 10:10:30.609690241 +0000 UTC m=+0.076659566 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:10:30 np0005541914.localdomain podman[319495]: 2025-12-02 10:10:30.617895463 +0000 UTC m=+0.084864728 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2)
Dec 02 10:10:30 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:10:30 np0005541914.localdomain podman[319481]: 2025-12-02 10:10:30.636116413 +0000 UTC m=+0.218017470 container start f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:10:30 np0005541914.localdomain ceph-mon[301710]: pgmap v378: 177 pgs: 177 active+clean; 675 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 23 KiB/s rd, 32 MiB/s wr, 48 op/s
Dec 02 10:10:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:30.760 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:30 np0005541914.localdomain kernel: device tap18876e1d-66 left promiscuous mode
Dec 02 10:10:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:30.783 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:30.826 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:30 np0005541914.localdomain dnsmasq[319512]: read /var/lib/neutron/dhcp/b08b53d5-dfe9-4f37-b0e2-3da89dc155a5/addn_hosts - 0 addresses
Dec 02 10:10:30 np0005541914.localdomain dnsmasq-dhcp[319512]: read /var/lib/neutron/dhcp/b08b53d5-dfe9-4f37-b0e2-3da89dc155a5/host
Dec 02 10:10:30 np0005541914.localdomain dnsmasq-dhcp[319512]: read /var/lib/neutron/dhcp/b08b53d5-dfe9-4f37-b0e2-3da89dc155a5/opts
Dec 02 10:10:30 np0005541914.localdomain podman[319538]: 2025-12-02 10:10:30.927132324 +0000 UTC m=+0.047123878 container kill f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent [None req-21ab1c9e-59d0-40c0-a5fe-fa801326b9f6 - - - - - -] Unable to reload_allocations dhcp for b08b53d5-dfe9-4f37-b0e2-3da89dc155a5.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap18876e1d-66 not found in namespace qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5.
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     return fut.result()
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     raise self._exception
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap18876e1d-66 not found in namespace qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5.
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.946 262347 ERROR neutron.agent.dhcp.agent 
Dec 02 10:10:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:30.949 262347 INFO neutron.agent.dhcp.agent [None req-50790067-e1ed-419b-a4c8-808a5373a8ca - - - - - -] Synchronizing state
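[editor's note] The traceback above shows reload_allocations for network b08b53d5-dfe9-4f37-b0e2-3da89dc155a5 failing with NetworkInterfaceNotFound: the tap18876e1d-66 device had already been removed when OVN unbound the port at 10:10:29-10:10:30 ("Removing iface tap18876e1d-66 ovn-installed in OVS", "device tap18876e1d-66 left promiscuous mode"), so the agent falls back to a full state resync ("Synchronizing state"). The following is a hypothetical diagnostic sketch, not part of neutron, that checks whether the expected tap device still exists inside the qdhcp namespace; the namespace and device names are copied from the log, the helper itself is an assumption.

    import subprocess

    def tap_exists(namespace: str, device: str) -> bool:
        # "ip netns exec <ns> ip link show <dev>" exits non-zero if the link is gone.
        result = subprocess.run(
            ["ip", "netns", "exec", namespace, "ip", "link", "show", device],
            capture_output=True,
        )
        return result.returncode == 0

    ns = "qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5"   # from the log above
    dev = "tap18876e1d-66"
    print(tap_exists(ns, dev))   # False here, matching NetworkInterfaceNotFound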
Dec 02 10:10:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v379: 177 pgs: 177 active+clean; 675 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 15 KiB/s rd, 20 MiB/s wr, 31 op/s
Dec 02 10:10:31 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:31.417 262347 INFO neutron.agent.dhcp.agent [None req-174f6df8-7825-4331-b3aa-853f12724825 - - - - - -] DHCP configuration for ports {'a3684361-b660-4235-9667-7e02c2ba4562'} is completed
Dec 02 10:10:31 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "99d59f6b-cf2c-47ae-b465-7c2965afd103_827df7ca-4b70-438d-9e17-f06019bfe5e4", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:31 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "99d59f6b-cf2c-47ae-b465-7c2965afd103", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:31 np0005541914.localdomain ceph-mon[301710]: pgmap v379: 177 pgs: 177 active+clean; 675 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 15 KiB/s rd, 20 MiB/s wr, 31 op/s
Dec 02 10:10:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:10:31 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:10:31 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:10:31 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:10:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:10:31 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:31 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:31.959 262347 INFO neutron.agent.dhcp.agent [None req-8d48d508-f59b-479a-8c33-262a1848fc23 - - - - - -] All active networks have been fetched through RPC.
Dec 02 10:10:31 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:31.960 262347 INFO neutron.agent.dhcp.agent [-] Starting network b08b53d5-dfe9-4f37-b0e2-3da89dc155a5 dhcp configuration
Dec 02 10:10:32 np0005541914.localdomain dnsmasq[319512]: exiting on receipt of SIGTERM
Dec 02 10:10:32 np0005541914.localdomain podman[319570]: 2025-12-02 10:10:32.125833856 +0000 UTC m=+0.054465924 container kill f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:10:32 np0005541914.localdomain systemd[1]: libpod-f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658.scope: Deactivated successfully.
Dec 02 10:10:32 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:32 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:10:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:10:32 np0005541914.localdomain podman[319586]: 2025-12-02 10:10:32.201092538 +0000 UTC m=+0.059017554 container died f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:10:32 np0005541914.localdomain podman[319586]: 2025-12-02 10:10:32.303880797 +0000 UTC m=+0.161805743 container cleanup f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 10:10:32 np0005541914.localdomain systemd[1]: libpod-conmon-f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658.scope: Deactivated successfully.
Dec 02 10:10:32 np0005541914.localdomain podman[319585]: 2025-12-02 10:10:32.346200427 +0000 UTC m=+0.201542473 container remove f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b08b53d5-dfe9-4f37-b0e2-3da89dc155a5, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:10:32 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:32.430 262347 INFO neutron.agent.dhcp.agent [None req-78366714-e941-4a99-b687-b0cee884fc81 - - - - - -] Finished network b08b53d5-dfe9-4f37-b0e2-3da89dc155a5 dhcp configuration
Dec 02 10:10:32 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:32.431 262347 INFO neutron.agent.dhcp.agent [None req-8d48d508-f59b-479a-8c33-262a1848fc23 - - - - - -] Synchronizing state complete
Dec 02 10:10:32 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-5e445d47f7aa63bc26523d0df8868e3fcbc251684de06ab5b024e5fe07226834-merged.mount: Deactivated successfully.
Dec 02 10:10:32 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5e882fe67e5bb052b2b06c8a1c443b48c63346d23d202e90fd575b9ce361658-userdata-shm.mount: Deactivated successfully.
Dec 02 10:10:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:10:32 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2db08b53d5\x2ddfe9\x2d4f37\x2db0e2\x2d3da89dc155a5.mount: Deactivated successfully.
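[editor's note] The sequence above (container kill -> dnsmasq "exiting on receipt of SIGTERM" -> container died/cleanup/remove -> overlay and run-netns mounts deactivated) is the DHCP agent tearing down the per-network dnsmasq container neutron-dnsmasq-qdhcp-b08b53d5-... once the network's DHCP port is gone. A hypothetical Python sketch for listing the remaining per-network dnsmasq containers on this node, matching the naming convention visible in the log; the filter pattern is taken from the container names above.

    import subprocess

    # List neutron-managed dnsmasq containers (one per qdhcp- namespace).
    out = subprocess.run(
        ["podman", "ps", "-a",
         "--filter", "name=neutron-dnsmasq-qdhcp-",
         "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)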
Dec 02 10:10:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:32.646 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:32 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:10:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:10:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:10:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:10:32 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:32.836 2 INFO neutron.agent.securitygroups_rpc [None req-b8b78442-432d-4c51-90c0-6b0763587b8d 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['1794fecb-60a8-41cc-838d-a48dc5474875']
Dec 02 10:10:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e156 e156: 6 total, 6 up, 6 in
Dec 02 10:10:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v381: 177 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 167 active+clean; 787 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 16 KiB/s rd, 22 MiB/s wr, 36 op/s
Dec 02 10:10:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:33.237 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "7a4c64a8-f75e-4eb8-8104-489d2e71f23a", "format": "json"}]: dispatch
Dec 02 10:10:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7a4c64a8-f75e-4eb8-8104-489d2e71f23a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7a4c64a8-f75e-4eb8-8104-489d2e71f23a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:10:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:10:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:10:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158570 "" "Go-http-client/1.1"
Dec 02 10:10:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:10:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19684 "" "Go-http-client/1.1"
Dec 02 10:10:33 np0005541914.localdomain ceph-mon[301710]: osdmap e156: 6 total, 6 up, 6 in
Dec 02 10:10:33 np0005541914.localdomain ceph-mon[301710]: pgmap v381: 177 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 167 active+clean; 787 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 16 KiB/s rd, 22 MiB/s wr, 36 op/s
Dec 02 10:10:33 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "7a4c64a8-f75e-4eb8-8104-489d2e71f23a", "format": "json"}]: dispatch
Dec 02 10:10:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:34.607 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:34.609 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:10:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:34.610 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:10:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v382: 177 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 167 active+clean; 787 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 16 KiB/s rd, 22 MiB/s wr, 36 op/s
Dec 02 10:10:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:10:35 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:35 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:35.371 2 INFO neutron.agent.securitygroups_rpc [None req-d462b9e0-1fd6-4bb8-aa5c-65cdc1c34ce0 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['1bd96bc4-2204-473c-8b88-08bb385e4850', '1794fecb-60a8-41cc-838d-a48dc5474875']
Dec 02 10:10:35 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:10:35 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:10:35 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 02 10:10:35 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:10:35 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:10:35 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:35 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:10:35 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:10:35 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:35.828 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:36 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:36.038 2 INFO neutron.agent.securitygroups_rpc [None req-d3e51cbf-aa8e-4108-aa2d-8a899b386ca8 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['1bd96bc4-2204-473c-8b88-08bb385e4850']
Dec 02 10:10:36 np0005541914.localdomain ceph-mon[301710]: pgmap v382: 177 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 167 active+clean; 787 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 16 KiB/s rd, 22 MiB/s wr, 36 op/s
Dec 02 10:10:36 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:10:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:10:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:10:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:10:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 02 10:10:36 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:10:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:10:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:10:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:10:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:10:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:10:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:10:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v383: 177 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 167 active+clean; 787 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 16 KiB/s rd, 22 MiB/s wr, 36 op/s
Dec 02 10:10:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:10:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "7a4c64a8-f75e-4eb8-8104-489d2e71f23a_bcca2fdb-3954-4db7-bf24-5e38c29448c7", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a4c64a8-f75e-4eb8-8104-489d2e71f23a_bcca2fdb-3954-4db7-bf24-5e38c29448c7, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 02 10:10:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 02 10:10:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a4c64a8-f75e-4eb8-8104-489d2e71f23a_bcca2fdb-3954-4db7-bf24-5e38c29448c7, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "7a4c64a8-f75e-4eb8-8104-489d2e71f23a", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a4c64a8-f75e-4eb8-8104-489d2e71f23a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 02 10:10:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 02 10:10:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7a4c64a8-f75e-4eb8-8104-489d2e71f23a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:38.239 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:38 np0005541914.localdomain ceph-mon[301710]: pgmap v383: 177 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 167 active+clean; 787 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 16 KiB/s rd, 22 MiB/s wr, 36 op/s
Dec 02 10:10:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "7a4c64a8-f75e-4eb8-8104-489d2e71f23a_bcca2fdb-3954-4db7-bf24-5e38c29448c7", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "7a4c64a8-f75e-4eb8-8104-489d2e71f23a", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:10:38 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:38 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:10:38 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:38 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:10:38 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:10:38 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v384: 177 pgs: 177 active+clean; 907 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 15 KiB/s rd, 23 MiB/s wr, 34 op/s
Dec 02 10:10:39 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:39.555 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:10:39Z, description=, device_id=d7b55be3-df8b-4a7a-a053-0a870505d24f, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a55160>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a55310>], id=9aec39bb-d45c-46dd-bcbf-fbaa84c48978, ip_allocation=immediate, mac_address=fa:16:3e:7a:f7:0b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2570, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:10:39Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:10:39 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:39.562 2 INFO neutron.agent.securitygroups_rpc [None req-b887e192-fbe8-4997-9fd8-8fe0e62f2ad3 ffc28dac62f4495c9452fce17050d09a 16ae7f5f159c4b10a1539c2d9b52fce5 - - default default] Security group rule updated ['2409236f-431b-4039-840f-bb40e7858355']
Dec 02 10:10:39 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:39 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:39 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:39 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:10:39 np0005541914.localdomain podman[319633]: 2025-12-02 10:10:39.852725025 +0000 UTC m=+0.061381067 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Dec 02 10:10:39 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 02 10:10:39 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:10:39 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:10:40 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:40.170 262347 INFO neutron.agent.dhcp.agent [None req-4e731b1d-dc36-4987-bfcf-245a6fe2c36a - - - - - -] DHCP configuration for ports {'9aec39bb-d45c-46dd-bcbf-fbaa84c48978'} is completed
Dec 02 10:10:40 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:10:40 np0005541914.localdomain ceph-mon[301710]: pgmap v384: 177 pgs: 177 active+clean; 907 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 15 KiB/s rd, 23 MiB/s wr, 34 op/s
Dec 02 10:10:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:40.829 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "5a8eab8a-1e6c-4298-b827-66849539d417", "format": "json"}]: dispatch
Dec 02 10:10:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5a8eab8a-1e6c-4298-b827-66849539d417, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5a8eab8a-1e6c-4298-b827-66849539d417, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v385: 177 pgs: 177 active+clean; 907 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 15 KiB/s rd, 23 MiB/s wr, 34 op/s
Dec 02 10:10:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e157 e157: 6 total, 6 up, 6 in
Dec 02 10:10:41 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:41.612 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:10:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:10:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:10:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:10:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:10:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:10:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:10:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:10:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:10:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:10:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:10:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:10:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:10:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:42.182 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "5a8eab8a-1e6c-4298-b827-66849539d417", "format": "json"}]: dispatch
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: pgmap v385: 177 pgs: 177 active+clean; 907 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 15 KiB/s rd, 23 MiB/s wr, 34 op/s
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: osdmap e157: 6 total, 6 up, 6 in
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.599321) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242599397, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2422, "num_deletes": 253, "total_data_size": 4172320, "memory_usage": 4353024, "flush_reason": "Manual Compaction"}
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e158 e158: 6 total, 6 up, 6 in
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242622815, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 2731233, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22879, "largest_seqno": 25296, "table_properties": {"data_size": 2721883, "index_size": 5663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 22659, "raw_average_key_size": 21, "raw_value_size": 2701971, "raw_average_value_size": 2610, "num_data_blocks": 241, "num_entries": 1035, "num_filter_entries": 1035, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670111, "oldest_key_time": 1764670111, "file_creation_time": 1764670242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 23547 microseconds, and 8085 cpu microseconds.
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.622872) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 2731233 bytes OK
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.622899) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.624875) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.624899) EVENT_LOG_v1 {"time_micros": 1764670242624892, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.624924) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 4160873, prev total WAL file size 4160914, number of live WAL files 2.
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.625846) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end)
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(2667KB)], [36(14MB)]
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242625881, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 18100064, "oldest_snapshot_seqno": -1}
Dec 02 10:10:42 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:42 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 12951 keys, 16965879 bytes, temperature: kUnknown
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242714649, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 16965879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16893397, "index_size": 39037, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32389, "raw_key_size": 347808, "raw_average_key_size": 26, "raw_value_size": 16674132, "raw_average_value_size": 1287, "num_data_blocks": 1470, "num_entries": 12951, "num_filter_entries": 12951, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670242, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.714925) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 16965879 bytes
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.720355) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.7 rd, 190.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 14.7 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(12.8) write-amplify(6.2) OK, records in: 13491, records dropped: 540 output_compression: NoCompression
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.720376) EVENT_LOG_v1 {"time_micros": 1764670242720367, "job": 20, "event": "compaction_finished", "compaction_time_micros": 88874, "compaction_time_cpu_micros": 31948, "output_level": 6, "num_output_files": 1, "total_output_size": 16965879, "num_input_records": 13491, "num_output_records": 12951, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242720752, "job": 20, "event": "table_file_deletion", "file_number": 38}
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670242722258, "job": 20, "event": "table_file_deletion", "file_number": 36}
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.625809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.722284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.722288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.722290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.722292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:10:42.722294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 02 10:10:42 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:10:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:10:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:10:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v388: 177 pgs: 177 active+clean; 1.0 GiB data, 3.4 GiB used, 39 GiB / 42 GiB avail; 20 KiB/s rd, 31 MiB/s wr, 44 op/s
Dec 02 10:10:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:43.270 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:43 np0005541914.localdomain ceph-mon[301710]: osdmap e158: 6 total, 6 up, 6 in
Dec 02 10:10:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:10:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:10:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 02 10:10:44 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:44 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:44 np0005541914.localdomain ceph-mon[301710]: pgmap v388: 177 pgs: 177 active+clean; 1.0 GiB data, 3.4 GiB used, 39 GiB / 42 GiB avail; 20 KiB/s rd, 31 MiB/s wr, 44 op/s
Dec 02 10:10:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v389: 177 pgs: 177 active+clean; 1.0 GiB data, 3.4 GiB used, 39 GiB / 42 GiB avail; 20 KiB/s rd, 31 MiB/s wr, 44 op/s
Dec 02 10:10:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:45.831 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "5a8eab8a-1e6c-4298-b827-66849539d417_de2c1857-d908-4600-af3d-2ff1f2d9e3dc", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a8eab8a-1e6c-4298-b827-66849539d417_de2c1857-d908-4600-af3d-2ff1f2d9e3dc, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 02 10:10:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 02 10:10:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a8eab8a-1e6c-4298-b827-66849539d417_de2c1857-d908-4600-af3d-2ff1f2d9e3dc, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "5a8eab8a-1e6c-4298-b827-66849539d417", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a8eab8a-1e6c-4298-b827-66849539d417, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 02 10:10:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 02 10:10:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5a8eab8a-1e6c-4298-b827-66849539d417, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:10:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:46 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:10:46 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:46 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:10:46 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:10:46 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:46 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e159 e159: 6 total, 6 up, 6 in
Dec 02 10:10:46 np0005541914.localdomain ceph-mon[301710]: pgmap v389: 177 pgs: 177 active+clean; 1.0 GiB data, 3.4 GiB used, 39 GiB / 42 GiB avail; 20 KiB/s rd, 31 MiB/s wr, 44 op/s
Dec 02 10:10:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:10:46 np0005541914.localdomain ceph-mon[301710]: osdmap e159: 6 total, 6 up, 6 in
Dec 02 10:10:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:10:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:10:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:10:47 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:10:47 np0005541914.localdomain podman[319662]: 2025-12-02 10:10:47.087593388 +0000 UTC m=+0.073653174 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec 02 10:10:47 np0005541914.localdomain podman[319657]: 2025-12-02 10:10:47.144569279 +0000 UTC m=+0.134288367 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:10:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v391: 177 pgs: 177 active+clean; 1.0 GiB data, 3.4 GiB used, 39 GiB / 42 GiB avail; 14 KiB/s rd, 21 MiB/s wr, 31 op/s
Dec 02 10:10:47 np0005541914.localdomain podman[319662]: 2025-12-02 10:10:47.150835882 +0000 UTC m=+0.136895668 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:10:47 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:10:47 np0005541914.localdomain podman[319657]: 2025-12-02 10:10:47.184547268 +0000 UTC m=+0.174266436 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:10:47 np0005541914.localdomain podman[319655]: 2025-12-02 10:10:47.191420879 +0000 UTC m=+0.185510461 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:10:47 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:10:47 np0005541914.localdomain podman[319656]: 2025-12-02 10:10:47.299314774 +0000 UTC m=+0.290406634 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:10:47 np0005541914.localdomain podman[319656]: 2025-12-02 10:10:47.310092276 +0000 UTC m=+0.301184176 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:10:47 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:10:47 np0005541914.localdomain podman[319655]: 2025-12-02 10:10:47.328406858 +0000 UTC m=+0.322496420 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 02 10:10:47 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:10:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:10:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e160 e160: 6 total, 6 up, 6 in
Dec 02 10:10:47 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:47.572 262347 INFO neutron.agent.linux.ip_lib [None req-b2c78541-a2ac-46a9-97d9-a3df1ad26c63 - - - - - -] Device tap837bee5c-e5 cannot be used as it has no MAC address
Dec 02 10:10:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:47.593 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:47 np0005541914.localdomain kernel: device tap837bee5c-e5 entered promiscuous mode
Dec 02 10:10:47 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670247.5994] manager: (tap837bee5c-e5): new Generic device (/org/freedesktop/NetworkManager/Devices/38)
Dec 02 10:10:47 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:47Z|00181|binding|INFO|Claiming lport 837bee5c-e57d-4b1b-85ce-a2e06275b067 for this chassis.
Dec 02 10:10:47 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:47Z|00182|binding|INFO|837bee5c-e57d-4b1b-85ce-a2e06275b067: Claiming unknown
Dec 02 10:10:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:47.601 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:47 np0005541914.localdomain systemd-udevd[319744]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:10:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:47.615 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-fc2b26fc-414c-4c58-85dd-be52b87d6d85', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2b26fc-414c-4c58-85dd-be52b87d6d85', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=967d9288-e578-4b56-bd46-6584d42cca7c, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=837bee5c-e57d-4b1b-85ce-a2e06275b067) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:10:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:47.616 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 837bee5c-e57d-4b1b-85ce-a2e06275b067 in datapath fc2b26fc-414c-4c58-85dd-be52b87d6d85 bound to our chassis
Dec 02 10:10:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:47.618 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fc2b26fc-414c-4c58-85dd-be52b87d6d85 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:10:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:47.619 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[feca2329-f713-4d3b-afb4-acd0bed7fc5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:10:47 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap837bee5c-e5: No such device
Dec 02 10:10:47 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:47Z|00183|binding|INFO|Setting lport 837bee5c-e57d-4b1b-85ce-a2e06275b067 ovn-installed in OVS
Dec 02 10:10:47 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:47Z|00184|binding|INFO|Setting lport 837bee5c-e57d-4b1b-85ce-a2e06275b067 up in Southbound
Dec 02 10:10:47 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap837bee5c-e5: No such device
Dec 02 10:10:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:47.638 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:47 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap837bee5c-e5: No such device
Dec 02 10:10:47 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap837bee5c-e5: No such device
Dec 02 10:10:47 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap837bee5c-e5: No such device
Dec 02 10:10:47 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap837bee5c-e5: No such device
Dec 02 10:10:47 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap837bee5c-e5: No such device
Dec 02 10:10:47 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap837bee5c-e5: No such device
Dec 02 10:10:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "5a8eab8a-1e6c-4298-b827-66849539d417_de2c1857-d908-4600-af3d-2ff1f2d9e3dc", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "5a8eab8a-1e6c-4298-b827-66849539d417", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:10:47 np0005541914.localdomain ceph-mon[301710]: osdmap e160: 6 total, 6 up, 6 in
Dec 02 10:10:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:47.674 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:47.703 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:48.306 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:48 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:48Z|00185|binding|INFO|Removing iface tap837bee5c-e5 ovn-installed in OVS
Dec 02 10:10:48 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:10:48Z|00186|binding|INFO|Removing lport 837bee5c-e57d-4b1b-85ce-a2e06275b067 ovn-installed in OVS
Dec 02 10:10:48 np0005541914.localdomain podman[319813]: 2025-12-02 10:10:48.524551581 +0000 UTC m=+0.100871780 container create 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 02 10:10:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:48.527 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port d3773025-b15a-463e-9639-741039d170e1 with type ""
Dec 02 10:10:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:48.529 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::1/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-fc2b26fc-414c-4c58-85dd-be52b87d6d85', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2b26fc-414c-4c58-85dd-be52b87d6d85', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=967d9288-e578-4b56-bd46-6584d42cca7c, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=837bee5c-e57d-4b1b-85ce-a2e06275b067) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:10:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:48.529 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:48.531 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 837bee5c-e57d-4b1b-85ce-a2e06275b067 in datapath fc2b26fc-414c-4c58-85dd-be52b87d6d85 unbound from our chassis
Dec 02 10:10:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:48.533 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:48.534 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fc2b26fc-414c-4c58-85dd-be52b87d6d85 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:10:48 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:10:48.536 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[57986b43-ca06-414e-9bab-10e6703edcf3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:10:48 np0005541914.localdomain podman[319813]: 2025-12-02 10:10:48.474537025 +0000 UTC m=+0.050857254 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:10:48 np0005541914.localdomain systemd[1]: Started libpod-conmon-8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397.scope.
Dec 02 10:10:48 np0005541914.localdomain systemd[1]: tmp-crun.Qse3Iz.mount: Deactivated successfully.
Dec 02 10:10:48 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:10:48 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dfcbfb63df6d9c5020125c7ab6c6cc4e5a9e3aa6c7fc6ec83f933c15cc5b3a1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:10:48 np0005541914.localdomain podman[319813]: 2025-12-02 10:10:48.629698143 +0000 UTC m=+0.206018352 container init 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 02 10:10:48 np0005541914.localdomain podman[319813]: 2025-12-02 10:10:48.641541536 +0000 UTC m=+0.217861735 container start 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 10:10:48 np0005541914.localdomain dnsmasq[319832]: started, version 2.85 cachesize 150
Dec 02 10:10:48 np0005541914.localdomain dnsmasq[319832]: DNS service limited to local subnets
Dec 02 10:10:48 np0005541914.localdomain dnsmasq[319832]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:10:48 np0005541914.localdomain dnsmasq[319832]: warning: no upstream servers configured
Dec 02 10:10:48 np0005541914.localdomain dnsmasq-dhcp[319832]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 02 10:10:48 np0005541914.localdomain dnsmasq[319832]: read /var/lib/neutron/dhcp/fc2b26fc-414c-4c58-85dd-be52b87d6d85/addn_hosts - 0 addresses
Dec 02 10:10:48 np0005541914.localdomain dnsmasq-dhcp[319832]: read /var/lib/neutron/dhcp/fc2b26fc-414c-4c58-85dd-be52b87d6d85/host
Dec 02 10:10:48 np0005541914.localdomain dnsmasq-dhcp[319832]: read /var/lib/neutron/dhcp/fc2b26fc-414c-4c58-85dd-be52b87d6d85/opts
Dec 02 10:10:48 np0005541914.localdomain ceph-mon[301710]: pgmap v391: 177 pgs: 177 active+clean; 1.0 GiB data, 3.4 GiB used, 39 GiB / 42 GiB avail; 14 KiB/s rd, 21 MiB/s wr, 31 op/s
Dec 02 10:10:48 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e161 e161: 6 total, 6 up, 6 in
Dec 02 10:10:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:48.732 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:48 np0005541914.localdomain kernel: device tap837bee5c-e5 left promiscuous mode
Dec 02 10:10:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:48.753 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:48 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:48.812 262347 INFO neutron.agent.dhcp.agent [None req-34e4fb36-6d31-41b2-9679-c59b21b814e4 - - - - - -] DHCP configuration for ports {'69f43bd0-3572-495c-9546-a66a25fa3e0c'} is completed
Dec 02 10:10:48 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:48.917 2 INFO neutron.agent.securitygroups_rpc [None req-c5737a18-0087-499e-be42-7eb006bdc7a0 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['d54da663-bbdd-4967-b64b-8a9f95f589dd']
Dec 02 10:10:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v394: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 16 KiB/s rd, 23 MiB/s wr, 33 op/s
Dec 02 10:10:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "0971658b-39fb-4b1f-bbaf-63a2efed16bf", "format": "json"}]: dispatch
Dec 02 10:10:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0971658b-39fb-4b1f-bbaf-63a2efed16bf, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:0971658b-39fb-4b1f-bbaf-63a2efed16bf, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:49 np0005541914.localdomain dnsmasq[319832]: read /var/lib/neutron/dhcp/fc2b26fc-414c-4c58-85dd-be52b87d6d85/addn_hosts - 0 addresses
Dec 02 10:10:49 np0005541914.localdomain dnsmasq-dhcp[319832]: read /var/lib/neutron/dhcp/fc2b26fc-414c-4c58-85dd-be52b87d6d85/host
Dec 02 10:10:49 np0005541914.localdomain podman[319852]: 2025-12-02 10:10:49.660560018 +0000 UTC m=+0.069312111 container kill 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:10:49 np0005541914.localdomain dnsmasq-dhcp[319832]: read /var/lib/neutron/dhcp/fc2b26fc-414c-4c58-85dd-be52b87d6d85/opts
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for fc2b26fc-414c-4c58-85dd-be52b87d6d85.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap837bee5c-e5 not found in namespace qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85.
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     return fut.result()
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     raise self._exception
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap837bee5c-e5 not found in namespace qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85.
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.688 262347 ERROR neutron.agent.dhcp.agent 
Dec 02 10:10:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:49.693 262347 INFO neutron.agent.dhcp.agent [None req-8d48d508-f59b-479a-8c33-262a1848fc23 - - - - - -] Synchronizing state
Dec 02 10:10:49 np0005541914.localdomain ceph-mon[301710]: osdmap e161: 6 total, 6 up, 6 in
Dec 02 10:10:49 np0005541914.localdomain ceph-mon[301710]: pgmap v394: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 16 KiB/s rd, 23 MiB/s wr, 33 op/s
Dec 02 10:10:49 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "0971658b-39fb-4b1f-bbaf-63a2efed16bf", "format": "json"}]: dispatch
Dec 02 10:10:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:50 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:10:50 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:50 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 02 10:10:50 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:10:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:10:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:10:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:50 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:50.413 262347 INFO neutron.agent.dhcp.agent [None req-513b0151-6d8e-4d93-9c6d-01ea4bbae47f - - - - - -] All active networks have been fetched through RPC.
Dec 02 10:10:50 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:50.415 262347 INFO neutron.agent.dhcp.agent [-] Starting network fc2b26fc-414c-4c58-85dd-be52b87d6d85 dhcp configuration
Dec 02 10:10:50 np0005541914.localdomain dnsmasq[319832]: exiting on receipt of SIGTERM
Dec 02 10:10:50 np0005541914.localdomain podman[319885]: 2025-12-02 10:10:50.580607368 +0000 UTC m=+0.055060094 container kill 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:10:50 np0005541914.localdomain systemd[1]: tmp-crun.mb3Njp.mount: Deactivated successfully.
Dec 02 10:10:50 np0005541914.localdomain systemd[1]: libpod-8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397.scope: Deactivated successfully.
Dec 02 10:10:50 np0005541914.localdomain podman[319900]: 2025-12-02 10:10:50.647426331 +0000 UTC m=+0.043769656 container died 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 02 10:10:50 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397-userdata-shm.mount: Deactivated successfully.
Dec 02 10:10:50 np0005541914.localdomain podman[319900]: 2025-12-02 10:10:50.693420354 +0000 UTC m=+0.089763599 container remove 8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2b26fc-414c-4c58-85dd-be52b87d6d85, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:10:50 np0005541914.localdomain systemd[1]: libpod-conmon-8149aa688b6708ce5a1a55c5402483b8ed83bf35a0dd366a394f20e186df2397.scope: Deactivated successfully.
Dec 02 10:10:50 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e162 e162: 6 total, 6 up, 6 in
Dec 02 10:10:50 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:50 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:10:50 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:10:50 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:10:50 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 02 10:10:50 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:10:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:50.833 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:51 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:51.092 262347 INFO neutron.agent.dhcp.agent [None req-620cde1a-61d1-40fb-a73c-b9690a19ebc6 - - - - - -] Finished network fc2b26fc-414c-4c58-85dd-be52b87d6d85 dhcp configuration
Dec 02 10:10:51 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:10:51.094 262347 INFO neutron.agent.dhcp.agent [None req-513b0151-6d8e-4d93-9c6d-01ea4bbae47f - - - - - -] Synchronizing state complete
Dec 02 10:10:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v396: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 21 KiB/s rd, 30 MiB/s wr, 44 op/s
Dec 02 10:10:51 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e163 e163: 6 total, 6 up, 6 in
Dec 02 10:10:51 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-8dfcbfb63df6d9c5020125c7ab6c6cc4e5a9e3aa6c7fc6ec83f933c15cc5b3a1-merged.mount: Deactivated successfully.
Dec 02 10:10:51 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2dfc2b26fc\x2d414c\x2d4c58\x2d85dd\x2dbe52b87d6d85.mount: Deactivated successfully.
Dec 02 10:10:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:51.617 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:51 np0005541914.localdomain ceph-mon[301710]: osdmap e162: 6 total, 6 up, 6 in
Dec 02 10:10:51 np0005541914.localdomain ceph-mon[301710]: pgmap v396: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 21 KiB/s rd, 30 MiB/s wr, 44 op/s
Dec 02 10:10:51 np0005541914.localdomain ceph-mon[301710]: osdmap e163: 6 total, 6 up, 6 in
Dec 02 10:10:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:51.952 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:10:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:10:52 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1465263749' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:10:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:10:52 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1465263749' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:10:52 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1465263749' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:10:52 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1465263749' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v398: 177 pgs: 177 active+clean; 148 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 82 KiB/s rd, 24 MiB/s wr, 152 op/s
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:53 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:10:53 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:10:53 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:10:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:53.311 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:53 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:10:53 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:10:53 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:53.457 2 INFO neutron.agent.securitygroups_rpc [None req-f5ab8c31-1c32-4d96-896a-5e2487d3e658 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['d54da663-bbdd-4967-b64b-8a9f95f589dd', '475d5c6b-fba4-44ef-b012-03f922f307d8', 'c4cadb1e-8d38-4a3c-b1f8-f6d93fbe5968']
Dec 02 10:10:53 np0005541914.localdomain ceph-mon[301710]: pgmap v398: 177 pgs: 177 active+clean; 148 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 82 KiB/s rd, 24 MiB/s wr, 152 op/s
Dec 02 10:10:53 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:10:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:10:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:10:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:10:53 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:53.846 2 INFO neutron.agent.securitygroups_rpc [None req-054f4ed0-bf56-489e-9f2e-7d08aad333fe 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['475d5c6b-fba4-44ef-b012-03f922f307d8', 'c4cadb1e-8d38-4a3c-b1f8-f6d93fbe5968']
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "0971658b-39fb-4b1f-bbaf-63a2efed16bf_856cf5e4-324a-4059-b9e0-23839724525d", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0971658b-39fb-4b1f-bbaf-63a2efed16bf_856cf5e4-324a-4059-b9e0-23839724525d, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0971658b-39fb-4b1f-bbaf-63a2efed16bf_856cf5e4-324a-4059-b9e0-23839724525d, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "0971658b-39fb-4b1f-bbaf-63a2efed16bf", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0971658b-39fb-4b1f-bbaf-63a2efed16bf, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 02 10:10:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:10:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:0971658b-39fb-4b1f-bbaf-63a2efed16bf, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:10:54 np0005541914.localdomain podman[319926]: 2025-12-02 10:10:54.065173736 +0000 UTC m=+0.065490714 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Dec 02 10:10:54 np0005541914.localdomain systemd[1]: tmp-crun.dALXYj.mount: Deactivated successfully.
Dec 02 10:10:54 np0005541914.localdomain podman[319925]: 2025-12-02 10:10:54.140052567 +0000 UTC m=+0.140249540 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:10:54 np0005541914.localdomain podman[319925]: 2025-12-02 10:10:54.146374931 +0000 UTC m=+0.146571884 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:10:54 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:10:54 np0005541914.localdomain podman[319926]: 2025-12-02 10:10:54.202679201 +0000 UTC m=+0.202996209 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 10:10:54 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:10:54 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "0971658b-39fb-4b1f-bbaf-63a2efed16bf_856cf5e4-324a-4059-b9e0-23839724525d", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:54 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "0971658b-39fb-4b1f-bbaf-63a2efed16bf", "force": true, "format": "json"}]: dispatch
Dec 02 10:10:54 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1771883914' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:10:54 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1771883914' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:10:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v399: 177 pgs: 177 active+clean; 148 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 56 KiB/s rd, 26 KiB/s wr, 99 op/s
Dec 02 10:10:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:55.834 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:55 np0005541914.localdomain ceph-mon[301710]: pgmap v399: 177 pgs: 177 active+clean; 148 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 56 KiB/s rd, 26 KiB/s wr, 99 op/s
Dec 02 10:10:56 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e164 e164: 6 total, 6 up, 6 in
Dec 02 10:10:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:10:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:10:57 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:10:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 02 10:10:57 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:10:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:10:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:10:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:10:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:10:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v401: 177 pgs: 177 active+clean; 148 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 56 KiB/s rd, 26 KiB/s wr, 100 op/s
Dec 02 10:10:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:10:57 np0005541914.localdomain ceph-mon[301710]: osdmap e164: 6 total, 6 up, 6 in
Dec 02 10:10:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:10:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:10:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:10:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 02 10:10:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e165 e165: 6 total, 6 up, 6 in
Dec 02 10:10:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "299defe1-3a6d-4652-876c-cda1688f998a", "format": "json"}]: dispatch
Dec 02 10:10:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:299defe1-3a6d-4652-876c-cda1688f998a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:299defe1-3a6d-4652-876c-cda1688f998a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:10:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:58.313 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:10:58.540 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:10:58 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:10:58 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:10:58 np0005541914.localdomain ceph-mon[301710]: pgmap v401: 177 pgs: 177 active+clean; 148 MiB data, 949 MiB used, 41 GiB / 42 GiB avail; 56 KiB/s rd, 26 KiB/s wr, 100 op/s
Dec 02 10:10:58 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1722968383' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:10:58 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1722968383' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:10:58 np0005541914.localdomain ceph-mon[301710]: osdmap e165: 6 total, 6 up, 6 in
Dec 02 10:10:58 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:10:58 np0005541914.localdomain podman[319986]: 2025-12-02 10:10:58.598421936 +0000 UTC m=+0.060830951 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:10:58 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:10:58 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:10:58 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:10:58.927 2 INFO neutron.agent.securitygroups_rpc [None req-84a55dcc-5035-483d-9948-fd4c09f198da 57832728fce14260b03b0f06122d5897 aae5e2dae10d49c38d5d63835c7677e3 - - default default] Security group member updated ['e8ea3695-3b79-4d4a-ada7-8279c4be34cf']
Dec 02 10:10:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v403: 177 pgs: 177 active+clean; 148 MiB data, 910 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 61 KiB/s wr, 136 op/s
Dec 02 10:10:59 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "299defe1-3a6d-4652-876c-cda1688f998a", "format": "json"}]: dispatch
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1955612142' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1955612142' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:11:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:11:00 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: pgmap v403: 177 pgs: 177 active+clean; 148 MiB data, 910 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 61 KiB/s wr, 136 op/s
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1955612142' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1955612142' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:00 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:11:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:00.836 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:11:00 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:00.944 262347 INFO neutron.agent.linux.ip_lib [None req-20472392-4cdf-45dc-aaab-c8dc86c333ff - - - - - -] Device tap0caba9ad-8a cannot be used as it has no MAC address
Dec 02 10:11:00 np0005541914.localdomain podman[320009]: 2025-12-02 10:11:00.964418714 +0000 UTC m=+0.089908724 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 10:11:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:00.974 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:00 np0005541914.localdomain kernel: device tap0caba9ad-8a entered promiscuous mode
Dec 02 10:11:00 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670260.9837] manager: (tap0caba9ad-8a): new Generic device (/org/freedesktop/NetworkManager/Devices/39)
Dec 02 10:11:00 np0005541914.localdomain podman[320009]: 2025-12-02 10:11:00.979912609 +0000 UTC m=+0.105402589 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:11:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:00.987 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:00 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:00Z|00187|binding|INFO|Claiming lport 0caba9ad-8ad7-4727-b26a-176b808427fe for this chassis.
Dec 02 10:11:00 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:00Z|00188|binding|INFO|0caba9ad-8ad7-4727-b26a-176b808427fe: Claiming unknown
Dec 02 10:11:00 np0005541914.localdomain systemd-udevd[320034]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:11:01 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:00.997 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-60fad5f6-701c-4b07-9d8c-e83f0c029e7b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60fad5f6-701c-4b07-9d8c-e83f0c029e7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6624a0ad-df63-46d6-9ae9-1be40b885bed, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=0caba9ad-8ad7-4727-b26a-176b808427fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:11:01 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:00.998 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 0caba9ad-8ad7-4727-b26a-176b808427fe in datapath 60fad5f6-701c-4b07-9d8c-e83f0c029e7b bound to our chassis
Dec 02 10:11:01 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:01.000 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 60fad5f6-701c-4b07-9d8c-e83f0c029e7b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:11:01 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:01.001 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[131281b5-67e0-4cb1-ba8b-5185f51d7a00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:11:01 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:11:01 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device
Dec 02 10:11:01 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device
Dec 02 10:11:01 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:01Z|00189|binding|INFO|Setting lport 0caba9ad-8ad7-4727-b26a-176b808427fe ovn-installed in OVS
Dec 02 10:11:01 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:01Z|00190|binding|INFO|Setting lport 0caba9ad-8ad7-4727-b26a-176b808427fe up in Southbound
Dec 02 10:11:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:01.054 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:01 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device
Dec 02 10:11:01 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device
Dec 02 10:11:01 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device
Dec 02 10:11:01 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device
Dec 02 10:11:01 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device
Dec 02 10:11:01 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap0caba9ad-8a: No such device
Dec 02 10:11:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:01.091 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:01.123 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v404: 177 pgs: 177 active+clean; 148 MiB data, 910 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 38 KiB/s wr, 49 op/s
Dec 02 10:11:01 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:11:01 np0005541914.localdomain podman[320106]: 
Dec 02 10:11:01 np0005541914.localdomain podman[320106]: 2025-12-02 10:11:01.866615535 +0000 UTC m=+0.064192913 container create 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 10:11:01 np0005541914.localdomain systemd[1]: Started libpod-conmon-4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417.scope.
Dec 02 10:11:01 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:11:01 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c69ee66c8360ff7a699f7280effe9e3eacf5d9394951387a335cc5b4c018dbf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:11:01 np0005541914.localdomain podman[320106]: 2025-12-02 10:11:01.830251428 +0000 UTC m=+0.027828876 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:11:01 np0005541914.localdomain podman[320106]: 2025-12-02 10:11:01.938670148 +0000 UTC m=+0.136247536 container init 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:11:01 np0005541914.localdomain podman[320106]: 2025-12-02 10:11:01.947217221 +0000 UTC m=+0.144794589 container start 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 02 10:11:01 np0005541914.localdomain dnsmasq[320124]: started, version 2.85 cachesize 150
Dec 02 10:11:01 np0005541914.localdomain dnsmasq[320124]: DNS service limited to local subnets
Dec 02 10:11:01 np0005541914.localdomain dnsmasq[320124]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:11:01 np0005541914.localdomain dnsmasq[320124]: warning: no upstream servers configured
Dec 02 10:11:01 np0005541914.localdomain dnsmasq-dhcp[320124]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 02 10:11:01 np0005541914.localdomain dnsmasq[320124]: read /var/lib/neutron/dhcp/60fad5f6-701c-4b07-9d8c-e83f0c029e7b/addn_hosts - 0 addresses
Dec 02 10:11:01 np0005541914.localdomain dnsmasq-dhcp[320124]: read /var/lib/neutron/dhcp/60fad5f6-701c-4b07-9d8c-e83f0c029e7b/host
Dec 02 10:11:01 np0005541914.localdomain dnsmasq-dhcp[320124]: read /var/lib/neutron/dhcp/60fad5f6-701c-4b07-9d8c-e83f0c029e7b/opts
Dec 02 10:11:02 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 02 10:11:02 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:02.078 262347 INFO neutron.agent.dhcp.agent [None req-95d3c2ac-d093-43c2-86df-2eae6c5b26ce - - - - - -] DHCP configuration for ports {'abf8504f-a4cf-4596-941f-89fffed30317'} is completed
Dec 02 10:11:02 np0005541914.localdomain podman[320142]: 2025-12-02 10:11:02.377532404 +0000 UTC m=+0.064684919 container kill 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:11:02 np0005541914.localdomain dnsmasq[320124]: read /var/lib/neutron/dhcp/60fad5f6-701c-4b07-9d8c-e83f0c029e7b/addn_hosts - 0 addresses
Dec 02 10:11:02 np0005541914.localdomain dnsmasq-dhcp[320124]: read /var/lib/neutron/dhcp/60fad5f6-701c-4b07-9d8c-e83f0c029e7b/host
Dec 02 10:11:02 np0005541914.localdomain dnsmasq-dhcp[320124]: read /var/lib/neutron/dhcp/60fad5f6-701c-4b07-9d8c-e83f0c029e7b/opts
Dec 02 10:11:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:11:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:02.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:11:02 np0005541914.localdomain ceph-mon[301710]: pgmap v404: 177 pgs: 177 active+clean; 148 MiB data, 910 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 38 KiB/s wr, 49 op/s
Dec 02 10:11:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3426892413' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:11:02 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:02.709 262347 INFO neutron.agent.dhcp.agent [None req-1051ce87-9e8e-4de9-ba1b-e4321b43d48b - - - - - -] DHCP configuration for ports {'abf8504f-a4cf-4596-941f-89fffed30317', '0caba9ad-8ad7-4727-b26a-176b808427fe'} is completed
Dec 02 10:11:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "299defe1-3a6d-4652-876c-cda1688f998a_1df72d22-e70e-4f6b-877d-3a1cb60db11a", "force": true, "format": "json"}]: dispatch
Dec 02 10:11:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:299defe1-3a6d-4652-876c-cda1688f998a_1df72d22-e70e-4f6b-877d-3a1cb60db11a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:11:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 02 10:11:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 02 10:11:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:299defe1-3a6d-4652-876c-cda1688f998a_1df72d22-e70e-4f6b-877d-3a1cb60db11a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:11:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "299defe1-3a6d-4652-876c-cda1688f998a", "force": true, "format": "json"}]: dispatch
Dec 02 10:11:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:299defe1-3a6d-4652-876c-cda1688f998a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:11:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 02 10:11:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 02 10:11:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:299defe1-3a6d-4652-876c-cda1688f998a, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:11:02 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:02Z|00191|binding|INFO|Removing iface tap0caba9ad-8a ovn-installed in OVS
Dec 02 10:11:02 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:02Z|00192|binding|INFO|Removing lport 0caba9ad-8ad7-4727-b26a-176b808427fe ovn-installed in OVS
Dec 02 10:11:02 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:02.874 159483 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port a5949880-e32c-4c65-b409-11f4935b996c with type ""
Dec 02 10:11:02 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:02.876 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-60fad5f6-701c-4b07-9d8c-e83f0c029e7b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-60fad5f6-701c-4b07-9d8c-e83f0c029e7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28f4ef6ddb6546fbb800184721e43e93', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6624a0ad-df63-46d6-9ae9-1be40b885bed, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=0caba9ad-8ad7-4727-b26a-176b808427fe) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:11:02 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:02.878 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 0caba9ad-8ad7-4727-b26a-176b808427fe in datapath 60fad5f6-701c-4b07-9d8c-e83f0c029e7b unbound from our chassis
Dec 02 10:11:02 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:02.882 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 60fad5f6-701c-4b07-9d8c-e83f0c029e7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:11:02 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:02.883 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a32f6fa1-d18a-4628-a3c7-b7eb71042bc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:11:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:02.910 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:02.911 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:02 np0005541914.localdomain dnsmasq[320124]: exiting on receipt of SIGTERM
Dec 02 10:11:02 np0005541914.localdomain systemd[1]: libpod-4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417.scope: Deactivated successfully.
Dec 02 10:11:02 np0005541914.localdomain podman[320182]: 2025-12-02 10:11:02.991386945 +0000 UTC m=+0.050803782 container kill 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 02 10:11:03 np0005541914.localdomain podman[320201]: 2025-12-02 10:11:03.058288991 +0000 UTC m=+0.047969155 container died 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 02 10:11:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417-userdata-shm.mount: Deactivated successfully.
Dec 02 10:11:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-7c69ee66c8360ff7a699f7280effe9e3eacf5d9394951387a335cc5b4c018dbf-merged.mount: Deactivated successfully.
Dec 02 10:11:03 np0005541914.localdomain podman[320201]: 2025-12-02 10:11:03.106473571 +0000 UTC m=+0.096153695 container remove 4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-60fad5f6-701c-4b07-9d8c-e83f0c029e7b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 02 10:11:03 np0005541914.localdomain systemd[1]: libpod-conmon-4ab032bd1994b1e90472183ee452785a1f4c4d9dc9af84bc5d49baeb74d7b417.scope: Deactivated successfully.
Dec 02 10:11:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:03.118 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:03 np0005541914.localdomain kernel: device tap0caba9ad-8a left promiscuous mode
Dec 02 10:11:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:03.130 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:03.144 262347 INFO neutron.agent.dhcp.agent [None req-513b0151-6d8e-4d93-9c6d-01ea4bbae47f - - - - - -] Synchronizing state
Dec 02 10:11:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v405: 177 pgs: 177 active+clean; 149 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 78 KiB/s wr, 98 op/s
Dec 02 10:11:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:03.181 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:11:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:03.181 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:11:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:03.182 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:11:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:03.316 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:11:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:11:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:11:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158570 "" "Go-http-client/1.1"
Dec 02 10:11:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/383165306' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/383165306' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2275295108' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:11:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:11:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19696 "" "Go-http-client/1.1"
Dec 02 10:11:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:03.793 262347 INFO neutron.agent.dhcp.agent [None req-dff763d9-1c11-4931-a80d-aa0322eb1ff2 - - - - - -] All active networks have been fetched through RPC.
Dec 02 10:11:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:03.793 262347 INFO neutron.agent.dhcp.agent [-] Starting network 60fad5f6-701c-4b07-9d8c-e83f0c029e7b dhcp configuration
Dec 02 10:11:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:03.870 262347 INFO neutron.agent.dhcp.agent [None req-e7309244-7ad6-4ba5-ba3f-5c3f65a5a309 - - - - - -] Finished network 60fad5f6-701c-4b07-9d8c-e83f0c029e7b dhcp configuration
Dec 02 10:11:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:03.871 262347 INFO neutron.agent.dhcp.agent [None req-dff763d9-1c11-4931-a80d-aa0322eb1ff2 - - - - - -] Synchronizing state complete
Dec 02 10:11:03 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2d60fad5f6\x2d701c\x2d4b07\x2d9d8c\x2de83f0c029e7b.mount: Deactivated successfully.
Dec 02 10:11:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:03.955 262347 INFO neutron.agent.dhcp.agent [None req-c09d1cdd-d903-4cee-ba8c-50839f4dfc3c - - - - - -] DHCP configuration for ports {'abf8504f-a4cf-4596-941f-89fffed30317'} is completed
Dec 02 10:11:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:04.026 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:11:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:11:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:11:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:11:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:11:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/780445321' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/780445321' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:04.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "299defe1-3a6d-4652-876c-cda1688f998a_1df72d22-e70e-4f6b-877d-3a1cb60db11a", "force": true, "format": "json"}]: dispatch
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "299defe1-3a6d-4652-876c-cda1688f998a", "force": true, "format": "json"}]: dispatch
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: pgmap v405: 177 pgs: 177 active+clean; 149 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 78 KiB/s wr, 98 op/s
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/780445321' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/780445321' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v406: 177 pgs: 177 active+clean; 149 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 73 KiB/s wr, 90 op/s
Dec 02 10:11:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:05.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:11:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:05.838 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:06 np0005541914.localdomain ceph-mon[301710]: pgmap v406: 177 pgs: 177 active+clean; 149 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 73 KiB/s wr, 90 op/s
Dec 02 10:11:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:06.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:11:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:06.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:11:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e166 e166: 6 total, 6 up, 6 in
Dec 02 10:11:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:06.624 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:11:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:06.625 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:11:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:06.625 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:11:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:06.625 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:11:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:06.626 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:11:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:11:06
Dec 02 10:11:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:11:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:11:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['backups', 'vms', 'manila_metadata', '.mgr', 'images', 'manila_data', 'volumes']
Dec 02 10:11:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:11:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:11:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:11:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:11:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fd3633f29d0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fd397e9ad90>)]
Dec 02 10:11:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 02 10:11:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:11:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3661005679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:11:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:07.074 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v408: 177 pgs: 177 active+clean; 149 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 65 KiB/s wr, 81 op/s
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.443522589800856e-05 quantized to 32 (current 32)
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.9084135957565606e-06 of space, bias 1.0, pg target 0.00037977430555555556 quantized to 32 (current 32)
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.00022164860762144054 of space, bias 4.0, pg target 0.17643229166666666 quantized to 16 (current 16)
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:11:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:11:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:07.283 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:11:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:07.284 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11506MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:11:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:07.285 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:11:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:07.285 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:11:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:07.499 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:11:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:07.500 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:11:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:11:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:07.536 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:11:07 np0005541914.localdomain ceph-mon[301710]: osdmap e166: 6 total, 6 up, 6 in
Dec 02 10:11:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/690375908' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/690375908' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3661005679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:11:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e167 e167: 6 total, 6 up, 6 in
Dec 02 10:11:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:11:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1405474493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:11:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:08.010 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:11:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:08.018 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:11:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:08.046 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:11:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:08.049 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:11:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:08.050 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:11:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:11:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:11:08 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:11:08 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:11:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:11:08 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:08.318 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:08 np0005541914.localdomain ceph-mon[301710]: pgmap v408: 177 pgs: 177 active+clean; 149 MiB data, 928 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 65 KiB/s wr, 81 op/s
Dec 02 10:11:08 np0005541914.localdomain ceph-mon[301710]: osdmap e167: 6 total, 6 up, 6 in
Dec 02 10:11:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1405474493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:11:08 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:11:08 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:08 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:08 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:11:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4217679944' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:11:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:09.052 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:11:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:09.053 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:11:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:09.053 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:11:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:09.112 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:11:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:09.113 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:11:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:09.113 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:11:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v410: 177 pgs: 177 active+clean; 149 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 59 KiB/s wr, 75 op/s
Dec 02 10:11:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "aacf1e5d-1b53-42f1-b3a7-45f0acb43c13_a93da211-cd1e-4fb4-ab83-001318ab16bf", "force": true, "format": "json"}]: dispatch
Dec 02 10:11:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aacf1e5d-1b53-42f1-b3a7-45f0acb43c13_a93da211-cd1e-4fb4-ab83-001318ab16bf, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:11:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 02 10:11:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 02 10:11:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aacf1e5d-1b53-42f1-b3a7-45f0acb43c13_a93da211-cd1e-4fb4-ab83-001318ab16bf, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:11:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "aacf1e5d-1b53-42f1-b3a7-45f0acb43c13", "force": true, "format": "json"}]: dispatch
Dec 02 10:11:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aacf1e5d-1b53-42f1-b3a7-45f0acb43c13, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:11:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp'
Dec 02 10:11:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta.tmp' to config b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28/.meta'
Dec 02 10:11:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:aacf1e5d-1b53-42f1-b3a7-45f0acb43c13, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:11:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e168 e168: 6 total, 6 up, 6 in
Dec 02 10:11:09 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:11:09 np0005541914.localdomain dnsmasq[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/addn_hosts - 0 addresses
Dec 02 10:11:09 np0005541914.localdomain podman[320282]: 2025-12-02 10:11:09.97341617 +0000 UTC m=+0.062779201 container kill bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:11:09 np0005541914.localdomain dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/host
Dec 02 10:11:09 np0005541914.localdomain dnsmasq-dhcp[319009]: read /var/lib/neutron/dhcp/fc2e8456-8064-45d4-b986-3bd5157209ba/opts
Dec 02 10:11:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:10.253 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:10 np0005541914.localdomain kernel: device tap8e7a6388-06 left promiscuous mode
Dec 02 10:11:10 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:10Z|00193|binding|INFO|Releasing lport 8e7a6388-0616-4036-bc8b-c45817966af9 from this chassis (sb_readonly=0)
Dec 02 10:11:10 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:10Z|00194|binding|INFO|Setting lport 8e7a6388-0616-4036-bc8b-c45817966af9 down in Southbound
Dec 02 10:11:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:10.271 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:10 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:10.283 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-fc2e8456-8064-45d4-b986-3bd5157209ba', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc2e8456-8064-45d4-b986-3bd5157209ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f7326c3837b4427191aafcff504110ac', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eb1a8e80-528c-4bda-8d5b-06a577344504, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=8e7a6388-0616-4036-bc8b-c45817966af9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:11:10 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:10.285 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 8e7a6388-0616-4036-bc8b-c45817966af9 in datapath fc2e8456-8064-45d4-b986-3bd5157209ba unbound from our chassis
Dec 02 10:11:10 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:10.288 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc2e8456-8064-45d4-b986-3bd5157209ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:11:10 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:10.289 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[2571ab22-7a3f-49e3-9445-7b6e95edb2ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:11:10 np0005541914.localdomain ceph-mon[301710]: pgmap v410: 177 pgs: 177 active+clean; 149 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 59 KiB/s wr, 75 op/s
Dec 02 10:11:10 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "aacf1e5d-1b53-42f1-b3a7-45f0acb43c13_a93da211-cd1e-4fb4-ab83-001318ab16bf", "force": true, "format": "json"}]: dispatch
Dec 02 10:11:10 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "snap_name": "aacf1e5d-1b53-42f1-b3a7-45f0acb43c13", "force": true, "format": "json"}]: dispatch
Dec 02 10:11:10 np0005541914.localdomain ceph-mon[301710]: osdmap e168: 6 total, 6 up, 6 in
Dec 02 10:11:10 np0005541914.localdomain ceph-mon[301710]: mgrmap e49: np0005541914.lljzmk(active, since 11m), standbys: np0005541912.qwddia, np0005541913.mfesdm
Dec 02 10:11:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e169 e169: 6 total, 6 up, 6 in
Dec 02 10:11:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:10.839 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v413: 177 pgs: 177 active+clean; 149 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 32 KiB/s wr, 46 op/s
Dec 02 10:11:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:11.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:11:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:11.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:11:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:11:11 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2736016088' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:11:11 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2736016088' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e170 e170: 6 total, 6 up, 6 in
Dec 02 10:11:11 np0005541914.localdomain ceph-mon[301710]: osdmap e169: 6 total, 6 up, 6 in
Dec 02 10:11:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2736016088' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2736016088' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:11:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:11:11 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:11:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 02 10:11:11 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:11:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:11:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:11:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:11:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:11:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:11:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:11:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:11:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:11:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:11:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:11:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:11:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:11:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:11:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:11:12 np0005541914.localdomain ceph-mon[301710]: pgmap v413: 177 pgs: 177 active+clean; 149 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 32 KiB/s wr, 46 op/s
Dec 02 10:11:12 np0005541914.localdomain ceph-mon[301710]: osdmap e170: 6 total, 6 up, 6 in
Dec 02 10:11:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3619384774' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:11:12 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:11:12 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:11:12 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:11:12 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 02 10:11:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2917294181' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:11:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e171 e171: 6 total, 6 up, 6 in
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v416: 177 pgs: 177 active+clean; 149 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 108 KiB/s wr, 99 op/s
Dec 02 10:11:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:13.378 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "format": "json"}]: dispatch
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:45fda55f-a67b-4a03-8e83-17717dd47f28, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:45fda55f-a67b-4a03-8e83-17717dd47f28, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:11:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.445+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '45fda55f-a67b-4a03-8e83-17717dd47f28' of type subvolume
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '45fda55f-a67b-4a03-8e83-17717dd47f28' of type subvolume
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "force": true, "format": "json"}]: dispatch
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/45fda55f-a67b-4a03-8e83-17717dd47f28'' moved to trashcan
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:45fda55f-a67b-4a03-8e83-17717dd47f28, vol_name:cephfs) < ""
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.471+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.471+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.471+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.471+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.471+0000 7fd37fd73640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.514+0000 7fd380d75640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.514+0000 7fd380d75640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.514+0000 7fd380d75640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.514+0000 7fd380d75640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:11:13.514+0000 7fd380d75640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:11:13 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:13.555 262347 INFO neutron.agent.linux.ip_lib [None req-29612d0d-bf4e-4b65-8c8e-1bb291f2f9ee - - - - - -] Device tap24f6853f-6a cannot be used as it has no MAC address
Dec 02 10:11:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:13.574 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:13 np0005541914.localdomain kernel: device tap24f6853f-6a entered promiscuous mode
Dec 02 10:11:13 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670273.5803] manager: (tap24f6853f-6a): new Generic device (/org/freedesktop/NetworkManager/Devices/40)
Dec 02 10:11:13 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:13Z|00195|binding|INFO|Claiming lport 24f6853f-6ab0-449f-846c-b775d5c1b118 for this chassis.
Dec 02 10:11:13 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:13Z|00196|binding|INFO|24f6853f-6ab0-449f-846c-b775d5c1b118: Claiming unknown
Dec 02 10:11:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:13.582 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:13 np0005541914.localdomain systemd-udevd[320339]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:11:13 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:13Z|00197|binding|INFO|Setting lport 24f6853f-6ab0-449f-846c-b775d5c1b118 ovn-installed in OVS
Dec 02 10:11:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:13.615 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:13.641 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:13.667 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:13 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:11:13 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:11:13 np0005541914.localdomain ceph-mon[301710]: osdmap e171: 6 total, 6 up, 6 in
Dec 02 10:11:13 np0005541914.localdomain ceph-mon[301710]: pgmap v416: 177 pgs: 177 active+clean; 149 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 108 KiB/s wr, 99 op/s
Dec 02 10:11:13 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "format": "json"}]: dispatch
Dec 02 10:11:13 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "45fda55f-a67b-4a03-8e83-17717dd47f28", "force": true, "format": "json"}]: dispatch
Dec 02 10:11:13 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:13Z|00198|binding|INFO|Setting lport 24f6853f-6ab0-449f-846c-b775d5c1b118 up in Southbound
Dec 02 10:11:13 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:13.778 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-e639c436-316b-48d5-b04e-92acf5f6e4d6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e639c436-316b-48d5-b04e-92acf5f6e4d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b39d1a8c-5e56-4b96-bd5c-0c0c80df17e2, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=24f6853f-6ab0-449f-846c-b775d5c1b118) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:11:13 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:13.780 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 24f6853f-6ab0-449f-846c-b775d5c1b118 in datapath e639c436-316b-48d5-b04e-92acf5f6e4d6 bound to our chassis
Dec 02 10:11:13 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:13.782 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e639c436-316b-48d5-b04e-92acf5f6e4d6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:11:13 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:13.783 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[3c5b121a-6024-4f08-b39d-f84d2ab0b327]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:11:14 np0005541914.localdomain podman[320394]: 
Dec 02 10:11:14 np0005541914.localdomain podman[320394]: 2025-12-02 10:11:14.485381367 +0000 UTC m=+0.091495783 container create 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS)
Dec 02 10:11:14 np0005541914.localdomain systemd[1]: Started libpod-conmon-4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3.scope.
Dec 02 10:11:14 np0005541914.localdomain podman[320394]: 2025-12-02 10:11:14.4409092 +0000 UTC m=+0.047023666 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:11:14 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:11:14 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dec703e64cf49252cf57f900c5246aed36f5c491b36caec2fbe59693cfdfe47/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:11:14 np0005541914.localdomain podman[320394]: 2025-12-02 10:11:14.56294027 +0000 UTC m=+0.169054696 container init 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:11:14 np0005541914.localdomain podman[320394]: 2025-12-02 10:11:14.570078189 +0000 UTC m=+0.176192605 container start 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 02 10:11:14 np0005541914.localdomain dnsmasq[320412]: started, version 2.85 cachesize 150
Dec 02 10:11:14 np0005541914.localdomain dnsmasq[320412]: DNS service limited to local subnets
Dec 02 10:11:14 np0005541914.localdomain dnsmasq[320412]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:11:14 np0005541914.localdomain dnsmasq[320412]: warning: no upstream servers configured
Dec 02 10:11:14 np0005541914.localdomain dnsmasq-dhcp[320412]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d
Dec 02 10:11:14 np0005541914.localdomain dnsmasq[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/addn_hosts - 0 addresses
Dec 02 10:11:14 np0005541914.localdomain dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/host
Dec 02 10:11:14 np0005541914.localdomain dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/opts
Dec 02 10:11:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e172 e172: 6 total, 6 up, 6 in
Dec 02 10:11:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:11:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:11:15 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:11:15 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:11:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:11:15 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v418: 177 pgs: 177 active+clean; 149 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 95 KiB/s wr, 87 op/s
Dec 02 10:11:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:15 np0005541914.localdomain ceph-mon[301710]: osdmap e172: 6 total, 6 up, 6 in
Dec 02 10:11:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:11:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:11:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:11:15 np0005541914.localdomain ceph-mon[301710]: pgmap v418: 177 pgs: 177 active+clean; 149 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 95 KiB/s wr, 87 op/s
Dec 02 10:11:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:15.842 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:16 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:16.350 262347 INFO neutron.agent.dhcp.agent [None req-e0477b0a-5e14-4156-8406-cbd688063842 - - - - - -] DHCP configuration for ports {'d8028153-5f2c-4429-a73a-6e644730b15a'} is completed
Dec 02 10:11:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e173 e173: 6 total, 6 up, 6 in
Dec 02 10:11:16 np0005541914.localdomain ceph-mon[301710]: mgrmap e50: np0005541914.lljzmk(active, since 11m), standbys: np0005541912.qwddia, np0005541913.mfesdm
Dec 02 10:11:16 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3191412397' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:16 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3191412397' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:16 np0005541914.localdomain ceph-mon[301710]: osdmap e173: 6 total, 6 up, 6 in
Dec 02 10:11:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v420: 177 pgs: 177 active+clean; 149 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 78 KiB/s wr, 72 op/s
Dec 02 10:11:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:11:17 np0005541914.localdomain podman[320430]: 2025-12-02 10:11:17.776826812 +0000 UTC m=+0.053337210 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 02 10:11:17 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:11:17 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:11:17 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:11:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:11:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:11:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:11:17 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:11:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e174 e174: 6 total, 6 up, 6 in
Dec 02 10:11:17 np0005541914.localdomain ceph-mon[301710]: pgmap v420: 177 pgs: 177 active+clean; 149 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 78 KiB/s wr, 72 op/s
Dec 02 10:11:17 np0005541914.localdomain systemd[1]: tmp-crun.uf3WSZ.mount: Deactivated successfully.
Dec 02 10:11:17 np0005541914.localdomain podman[320452]: 2025-12-02 10:11:17.937607542 +0000 UTC m=+0.123080042 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 02 10:11:17 np0005541914.localdomain podman[320445]: 2025-12-02 10:11:17.94370468 +0000 UTC m=+0.141077956 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible)
Dec 02 10:11:17 np0005541914.localdomain podman[320447]: 2025-12-02 10:11:17.902548915 +0000 UTC m=+0.095424873 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec 02 10:11:17 np0005541914.localdomain podman[320452]: 2025-12-02 10:11:17.991908731 +0000 UTC m=+0.177381241 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller)
Dec 02 10:11:17 np0005541914.localdomain podman[320446]: 2025-12-02 10:11:17.998988678 +0000 UTC m=+0.191807314 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:11:18 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:11:18 np0005541914.localdomain podman[320445]: 2025-12-02 10:11:18.023638606 +0000 UTC m=+0.221011842 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:11:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:18.027 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:18 np0005541914.localdomain podman[320447]: 2025-12-02 10:11:18.032019503 +0000 UTC m=+0.224895511 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Dec 02 10:11:18 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:11:18 np0005541914.localdomain podman[320446]: 2025-12-02 10:11:18.034996124 +0000 UTC m=+0.227814700 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:11:18 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:11:18 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:11:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:18.381 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:18 np0005541914.localdomain ceph-mon[301710]: osdmap e174: 6 total, 6 up, 6 in
Dec 02 10:11:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v422: 177 pgs: 177 active+clean; 149 MiB data, 925 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 40 KiB/s wr, 88 op/s
Dec 02 10:11:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2016593782' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2016593782' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:19 np0005541914.localdomain ceph-mon[301710]: pgmap v422: 177 pgs: 177 active+clean; 149 MiB data, 925 MiB used, 41 GiB / 42 GiB avail; 62 KiB/s rd, 40 KiB/s wr, 88 op/s
Dec 02 10:11:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:11:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:20 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:11:20 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:11:20 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 02 10:11:20 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:11:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:11:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:11:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:11:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:20 np0005541914.localdomain dnsmasq[319009]: exiting on receipt of SIGTERM
Dec 02 10:11:20 np0005541914.localdomain podman[320550]: 2025-12-02 10:11:20.481659212 +0000 UTC m=+0.056649002 container kill bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:11:20 np0005541914.localdomain systemd[1]: libpod-bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348.scope: Deactivated successfully.
Dec 02 10:11:20 np0005541914.localdomain podman[320565]: 2025-12-02 10:11:20.54442165 +0000 UTC m=+0.043335072 container died bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:11:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348-userdata-shm.mount: Deactivated successfully.
Dec 02 10:11:20 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-2158c3d2b4f5070bc0f9feccb69eab6266c44d72576ad850b561465969ca2e04-merged.mount: Deactivated successfully.
Dec 02 10:11:20 np0005541914.localdomain podman[320565]: 2025-12-02 10:11:20.587283707 +0000 UTC m=+0.086197089 container remove bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc2e8456-8064-45d4-b986-3bd5157209ba, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true)
Dec 02 10:11:20 np0005541914.localdomain systemd[1]: libpod-conmon-bea18d0bdaf6162507d881dbca43975c98ed9066f5a3433b6d7ec813d16cd348.scope: Deactivated successfully.
Dec 02 10:11:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:20.846 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:20 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:11:20 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:11:20 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:11:20 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:11:20 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 02 10:11:20 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:11:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v423: 177 pgs: 177 active+clean; 149 MiB data, 925 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 38 KiB/s wr, 83 op/s
Dec 02 10:11:21 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2dfc2e8456\x2d8064\x2d45d4\x2db986\x2d3bd5157209ba.mount: Deactivated successfully.
Dec 02 10:11:21 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:21.471 262347 INFO neutron.agent.dhcp.agent [None req-4dda3967-d52f-460b-8e51-36900425d582 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:11:21 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:21.497 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:11:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e175 e175: 6 total, 6 up, 6 in
Dec 02 10:11:21 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:21.910 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:11:21Z, description=, device_id=be2bd9ee-1025-4bde-b6f9-05c48824f4be, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034bdffd0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034bdf610>], id=1eb79f60-066d-4ce1-95d5-094eb8f1c4ab, ip_allocation=immediate, mac_address=fa:16:3e:26:39:5c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:11:05Z, description=, dns_domain=, id=e639c436-316b-48d5-b04e-92acf5f6e4d6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1026036337, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53507, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2636, status=ACTIVE, subnets=['0b6177ed-0d94-40e6-82ff-9b5fca1eea57'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:11:10Z, vlan_transparent=None, network_id=e639c436-316b-48d5-b04e-92acf5f6e4d6, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2669, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:11:21Z on network e639c436-316b-48d5-b04e-92acf5f6e4d6
Dec 02 10:11:22 np0005541914.localdomain systemd[1]: tmp-crun.IuWPVj.mount: Deactivated successfully.
Dec 02 10:11:22 np0005541914.localdomain podman[320609]: 2025-12-02 10:11:22.109934014 +0000 UTC m=+0.066875286 container kill 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:11:22 np0005541914.localdomain dnsmasq[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/addn_hosts - 1 addresses
Dec 02 10:11:22 np0005541914.localdomain dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/host
Dec 02 10:11:22 np0005541914.localdomain dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/opts
Dec 02 10:11:22 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:22.232 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:11:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:11:22 np0005541914.localdomain ceph-mon[301710]: pgmap v423: 177 pgs: 177 active+clean; 149 MiB data, 925 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 38 KiB/s wr, 83 op/s
Dec 02 10:11:22 np0005541914.localdomain ceph-mon[301710]: osdmap e175: 6 total, 6 up, 6 in
Dec 02 10:11:22 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:22.623 262347 INFO neutron.agent.dhcp.agent [None req-0d607488-2f2d-4687-b909-1781d2309cba - - - - - -] DHCP configuration for ports {'1eb79f60-066d-4ce1-95d5-094eb8f1c4ab'} is completed
Dec 02 10:11:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v425: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 72 KiB/s rd, 66 KiB/s wr, 104 op/s
Dec 02 10:11:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:23.419 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:24 np0005541914.localdomain ceph-mon[301710]: pgmap v425: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 72 KiB/s rd, 66 KiB/s wr, 104 op/s
Dec 02 10:11:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:11:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:11:25 np0005541914.localdomain podman[320630]: 2025-12-02 10:11:25.065938202 +0000 UTC m=+0.062417799 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., release=1755695350, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9)
Dec 02 10:11:25 np0005541914.localdomain podman[320630]: 2025-12-02 10:11:25.0788818 +0000 UTC m=+0.075361367 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 10:11:25 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:11:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:11:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:25 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:11:25 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:11:25 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:11:25 np0005541914.localdomain podman[320629]: 2025-12-02 10:11:25.12446048 +0000 UTC m=+0.123734482 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:11:25 np0005541914.localdomain podman[320629]: 2025-12-02 10:11:25.133989463 +0000 UTC m=+0.133263455 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 10:11:25 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:11:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v426: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 54 KiB/s wr, 86 op/s
Dec 02 10:11:25 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:11:25 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:11:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:11:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:25.891 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:26 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e176 e176: 6 total, 6 up, 6 in
Dec 02 10:11:26 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:11:26 np0005541914.localdomain ceph-mon[301710]: pgmap v426: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 54 KiB/s wr, 86 op/s
Dec 02 10:11:26 np0005541914.localdomain ceph-mon[301710]: osdmap e176: 6 total, 6 up, 6 in
Dec 02 10:11:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v428: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 24 KiB/s wr, 19 op/s
Dec 02 10:11:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:11:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:28.494 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:28 np0005541914.localdomain ceph-mon[301710]: pgmap v428: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 24 KiB/s wr, 19 op/s
Dec 02 10:11:28 np0005541914.localdomain sudo[320672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:11:28 np0005541914.localdomain sudo[320672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:11:28 np0005541914.localdomain sudo[320672]: pam_unix(sudo:session): session closed for user root
Dec 02 10:11:29 np0005541914.localdomain sudo[320690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 check-host
Dec 02 10:11:29 np0005541914.localdomain sudo[320690]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:11:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v429: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 41 KiB/s wr, 22 op/s
Dec 02 10:11:29 np0005541914.localdomain sudo[320690]: pam_unix(sudo:session): session closed for user root
Dec 02 10:11:29 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 10:11:29 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 10:11:29 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 10:11:29 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 10:11:29 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 10:11:29 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 10:11:29 np0005541914.localdomain sudo[320729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:11:29 np0005541914.localdomain sudo[320729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:11:29 np0005541914.localdomain sudo[320729]: pam_unix(sudo:session): session closed for user root
Dec 02 10:11:29 np0005541914.localdomain sudo[320747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:11:29 np0005541914.localdomain sudo[320747]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:11:29 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:29.719 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:11:21Z, description=, device_id=be2bd9ee-1025-4bde-b6f9-05c48824f4be, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034b05970>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034b05c40>], id=1eb79f60-066d-4ce1-95d5-094eb8f1c4ab, ip_allocation=immediate, mac_address=fa:16:3e:26:39:5c, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:11:05Z, description=, dns_domain=, id=e639c436-316b-48d5-b04e-92acf5f6e4d6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1026036337, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=53507, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2636, status=ACTIVE, subnets=['0b6177ed-0d94-40e6-82ff-9b5fca1eea57'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:11:10Z, vlan_transparent=None, network_id=e639c436-316b-48d5-b04e-92acf5f6e4d6, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2669, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:11:21Z on network e639c436-316b-48d5-b04e-92acf5f6e4d6
Dec 02 10:11:29 np0005541914.localdomain podman[320783]: 2025-12-02 10:11:29.89912494 +0000 UTC m=+0.059906442 container kill 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:11:29 np0005541914.localdomain dnsmasq[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/addn_hosts - 1 addresses
Dec 02 10:11:29 np0005541914.localdomain dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/host
Dec 02 10:11:29 np0005541914.localdomain dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/opts
Dec 02 10:11:30 np0005541914.localdomain sudo[320747]: pam_unix(sudo:session): session closed for user root
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:11:30 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev d636f5ca-c3f4-4085-8356-84232b9d2592 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:11:30 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev d636f5ca-c3f4-4085-8356-84232b9d2592 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:11:30 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event d636f5ca-c3f4-4085-8356-84232b9d2592 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: pgmap v429: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 41 KiB/s wr, 22 op/s
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:11:30 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:11:30 np0005541914.localdomain sudo[320836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:11:30 np0005541914.localdomain sudo[320836]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:11:30 np0005541914.localdomain sudo[320836]: pam_unix(sudo:session): session closed for user root
Dec 02 10:11:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:30.924 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v430: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 34 KiB/s wr, 18 op/s
Dec 02 10:11:31 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:31.202 262347 INFO neutron.agent.dhcp.agent [None req-10fde4b1-331a-4454-8c30-12f6a466fbaf - - - - - -] DHCP configuration for ports {'1eb79f60-066d-4ce1-95d5-094eb8f1c4ab'} is completed
Dec 02 10:11:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:11:31 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:11:31 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:11:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 02 10:11:31 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:11:31 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:11:31 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:31 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:11:31 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:11:31 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:31 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:11:31 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:11:31 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:11:31 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:11:31 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:11:31 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 02 10:11:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:11:32 np0005541914.localdomain systemd[1]: tmp-crun.OeTcHO.mount: Deactivated successfully.
Dec 02 10:11:32 np0005541914.localdomain podman[320855]: 2025-12-02 10:11:32.091754162 +0000 UTC m=+0.094408432 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 02 10:11:32 np0005541914.localdomain podman[320855]: 2025-12-02 10:11:32.128876563 +0000 UTC m=+0.131530833 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:11:32 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:11:32 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:11:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:11:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:11:32 np0005541914.localdomain ceph-mon[301710]: pgmap v430: 177 pgs: 177 active+clean; 149 MiB data, 926 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 34 KiB/s wr, 18 op/s
Dec 02 10:11:32 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:11:32 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:11:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:11:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v431: 177 pgs: 177 active+clean; 150 MiB data, 911 MiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 32 KiB/s wr, 4 op/s
Dec 02 10:11:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:33.525 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:11:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:11:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:11:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 158565 "" "Go-http-client/1.1"
Dec 02 10:11:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:11:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19675 "" "Go-http-client/1.1"
Dec 02 10:11:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:11:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:11:34 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:11:34 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:11:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:11:34 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:34 np0005541914.localdomain ceph-mon[301710]: pgmap v431: 177 pgs: 177 active+clean; 150 MiB data, 911 MiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 32 KiB/s wr, 4 op/s
Dec 02 10:11:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:11:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:11:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:34.873 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:34.876 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:11:34 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:34.877 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:11:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v432: 177 pgs: 177 active+clean; 150 MiB data, 911 MiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 32 KiB/s wr, 4 op/s
Dec 02 10:11:35 np0005541914.localdomain systemd[1]: tmp-crun.97FvBz.mount: Deactivated successfully.
Dec 02 10:11:35 np0005541914.localdomain dnsmasq[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/addn_hosts - 0 addresses
Dec 02 10:11:35 np0005541914.localdomain dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/host
Dec 02 10:11:35 np0005541914.localdomain dnsmasq-dhcp[320412]: read /var/lib/neutron/dhcp/e639c436-316b-48d5-b04e-92acf5f6e4d6/opts
Dec 02 10:11:35 np0005541914.localdomain podman[320890]: 2025-12-02 10:11:35.490625278 +0000 UTC m=+0.058014063 container kill 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2)
Dec 02 10:11:35 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:11:35 np0005541914.localdomain kernel: device tap24f6853f-6a left promiscuous mode
Dec 02 10:11:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:35.660 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:35 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:35Z|00199|binding|INFO|Releasing lport 24f6853f-6ab0-449f-846c-b775d5c1b118 from this chassis (sb_readonly=0)
Dec 02 10:11:35 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:35Z|00200|binding|INFO|Setting lport 24f6853f-6ab0-449f-846c-b775d5c1b118 down in Southbound
Dec 02 10:11:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:35.678 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-e639c436-316b-48d5-b04e-92acf5f6e4d6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e639c436-316b-48d5-b04e-92acf5f6e4d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b39d1a8c-5e56-4b96-bd5c-0c0c80df17e2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=24f6853f-6ab0-449f-846c-b775d5c1b118) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:11:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:35.680 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 24f6853f-6ab0-449f-846c-b775d5c1b118 in datapath e639c436-316b-48d5-b04e-92acf5f6e4d6 unbound from our chassis
Dec 02 10:11:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:35.681 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network e639c436-316b-48d5-b04e-92acf5f6e4d6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:11:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:35.682 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[41ef2759-381c-40d3-a161-b4fc0f095de7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:11:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:35.685 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:35.686 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:35.965 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:36 np0005541914.localdomain ceph-mon[301710]: pgmap v432: 177 pgs: 177 active+clean; 150 MiB data, 911 MiB used, 41 GiB / 42 GiB avail; 102 B/s rd, 32 KiB/s wr, 4 op/s
Dec 02 10:11:36 np0005541914.localdomain systemd[1]: tmp-crun.cj2gmj.mount: Deactivated successfully.
Dec 02 10:11:36 np0005541914.localdomain podman[320932]: 2025-12-02 10:11:36.879097212 +0000 UTC m=+0.064194664 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 10:11:36 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:11:36 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:11:36 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:11:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:11:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:11:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:11:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:11:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:11:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:11:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:11:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:37.121 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:11:37 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:11:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 02 10:11:37 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:11:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v433: 177 pgs: 177 active+clean; 150 MiB data, 911 MiB used, 41 GiB / 42 GiB avail; 96 B/s rd, 30 KiB/s wr, 4 op/s
Dec 02 10:11:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:11:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:11:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:11:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:11:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:11:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:11:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:11:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 02 10:11:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:38.529 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:11:38 np0005541914.localdomain ceph-mon[301710]: pgmap v433: 177 pgs: 177 active+clean; 150 MiB data, 911 MiB used, 41 GiB / 42 GiB avail; 96 B/s rd, 30 KiB/s wr, 4 op/s
Dec 02 10:11:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:11:38 np0005541914.localdomain dnsmasq[320412]: exiting on receipt of SIGTERM
Dec 02 10:11:38 np0005541914.localdomain podman[320969]: 2025-12-02 10:11:38.958472324 +0000 UTC m=+0.044890091 container kill 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 02 10:11:38 np0005541914.localdomain systemd[1]: libpod-4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3.scope: Deactivated successfully.
Dec 02 10:11:39 np0005541914.localdomain podman[320983]: 2025-12-02 10:11:39.014440173 +0000 UTC m=+0.041812816 container died 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:11:39 np0005541914.localdomain systemd[1]: tmp-crun.k6JWDO.mount: Deactivated successfully.
Dec 02 10:11:39 np0005541914.localdomain podman[320983]: 2025-12-02 10:11:39.044859588 +0000 UTC m=+0.072232211 container cleanup 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:11:39 np0005541914.localdomain systemd[1]: libpod-conmon-4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3.scope: Deactivated successfully.
Dec 02 10:11:39 np0005541914.localdomain podman[320984]: 2025-12-02 10:11:39.077943995 +0000 UTC m=+0.103911864 container remove 4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e639c436-316b-48d5-b04e-92acf5f6e4d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:11:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v434: 177 pgs: 177 active+clean; 150 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 48 KiB/s wr, 30 op/s
Dec 02 10:11:39 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:39.880 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:11:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-3dec703e64cf49252cf57f900c5246aed36f5c491b36caec2fbe59693cfdfe47-merged.mount: Deactivated successfully.
Dec 02 10:11:39 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f7ef6089cb283e879072312f9d621f3cda979e5a7e1ff49621f39530818fff3-userdata-shm.mount: Deactivated successfully.
Dec 02 10:11:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:11:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:40 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:11:40 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:11:40 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:11:40 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:11:40 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:40 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2de639c436\x2d316b\x2d48d5\x2db04e\x2d92acf5f6e4d6.mount: Deactivated successfully.
Dec 02 10:11:40 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:40.625 262347 INFO neutron.agent.dhcp.agent [None req-41b18abb-e5a9-428d-88ca-344e6205bb08 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:11:40 np0005541914.localdomain ceph-mon[301710]: pgmap v434: 177 pgs: 177 active+clean; 150 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 48 KiB/s wr, 30 op/s
Dec 02 10:11:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:11:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:11:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:40.967 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:41 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:41.054 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:11:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v435: 177 pgs: 177 active+clean; 150 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 36 KiB/s wr, 27 op/s
Dec 02 10:11:41 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:11:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e177 e177: 6 total, 6 up, 6 in
Dec 02 10:11:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:11:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:11:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:11:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:11:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:11:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:11:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:11:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:11:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:11:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:11:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:11:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:11:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:11:42 np0005541914.localdomain ceph-mon[301710]: pgmap v435: 177 pgs: 177 active+clean; 150 MiB data, 912 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 36 KiB/s wr, 27 op/s
Dec 02 10:11:42 np0005541914.localdomain ceph-mon[301710]: osdmap e177: 6 total, 6 up, 6 in
Dec 02 10:11:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v437: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 89 op/s
Dec 02 10:11:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:43.565 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:43 np0005541914.localdomain ceph-mon[301710]: pgmap v437: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 89 op/s
Dec 02 10:11:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:11:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:11:44 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:11:44 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1144789831' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:44 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1144789831' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:44 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3521941269' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:11:44 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:11:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 02 10:11:44 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:11:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:11:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:11:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:11:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v438: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 89 op/s
Dec 02 10:11:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:11:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:11:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:11:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 02 10:11:45 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:11:45 np0005541914.localdomain ceph-mon[301710]: pgmap v438: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 89 op/s
Dec 02 10:11:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e178 e178: 6 total, 6 up, 6 in
Dec 02 10:11:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:11:45 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1526106310' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:11:45 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1526106310' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:45.970 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:46 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:46.051 262347 INFO neutron.agent.linux.ip_lib [None req-60aba708-07de-48eb-a1c4-fafbeb26bacf - - - - - -] Device tap60565337-ba cannot be used as it has no MAC address
Dec 02 10:11:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:46.075 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:46 np0005541914.localdomain kernel: device tap60565337-ba entered promiscuous mode
Dec 02 10:11:46 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670306.0808] manager: (tap60565337-ba): new Generic device (/org/freedesktop/NetworkManager/Devices/41)
Dec 02 10:11:46 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:46Z|00201|binding|INFO|Claiming lport 60565337-ba9f-460c-b321-9bed6bae4c6b for this chassis.
Dec 02 10:11:46 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:46Z|00202|binding|INFO|60565337-ba9f-460c-b321-9bed6bae4c6b: Claiming unknown
Dec 02 10:11:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:46.084 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:46 np0005541914.localdomain systemd-udevd[321024]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:11:46 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap60565337-ba: No such device
Dec 02 10:11:46 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap60565337-ba: No such device
Dec 02 10:11:46 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:46Z|00203|binding|INFO|Setting lport 60565337-ba9f-460c-b321-9bed6bae4c6b ovn-installed in OVS
Dec 02 10:11:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:46.119 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:46 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap60565337-ba: No such device
Dec 02 10:11:46 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap60565337-ba: No such device
Dec 02 10:11:46 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap60565337-ba: No such device
Dec 02 10:11:46 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap60565337-ba: No such device
Dec 02 10:11:46 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap60565337-ba: No such device
Dec 02 10:11:46 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap60565337-ba: No such device
Dec 02 10:11:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:46.152 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:46.175 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:46 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:46Z|00204|binding|INFO|Setting lport 60565337-ba9f-460c-b321-9bed6bae4c6b up in Southbound
Dec 02 10:11:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:46.573 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-d7575463-fed8-42a9-b848-634ac68ed078', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7575463-fed8-42a9-b848-634ac68ed078', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '043cc6f66b444d00959c7dcdb078fbe8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3d9cd90-16cc-47f0-86ae-247f0a618c23, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=60565337-ba9f-460c-b321-9bed6bae4c6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:11:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:46.576 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 60565337-ba9f-460c-b321-9bed6bae4c6b in datapath d7575463-fed8-42a9-b848-634ac68ed078 bound to our chassis
Dec 02 10:11:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:46.579 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port 3a9ca436-78c1-4e2f-8261-557907b0f38d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 02 10:11:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:46.580 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7575463-fed8-42a9-b848-634ac68ed078, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:11:46 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:46.581 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[e77a9404-4f75-4802-89f0-8c0888b9d5c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:11:46 np0005541914.localdomain ceph-mon[301710]: osdmap e178: 6 total, 6 up, 6 in
Dec 02 10:11:46 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1526106310' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:46 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1526106310' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:46 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e179 e179: 6 total, 6 up, 6 in
Dec 02 10:11:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v441: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 3.6 MiB/s wr, 97 op/s
Dec 02 10:11:47 np0005541914.localdomain podman[321095]: 2025-12-02 10:11:47.333670156 +0000 UTC m=+0.094944868 container create 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:11:47 np0005541914.localdomain systemd[1]: Started libpod-conmon-31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f.scope.
Dec 02 10:11:47 np0005541914.localdomain podman[321095]: 2025-12-02 10:11:47.287336012 +0000 UTC m=+0.048610674 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:11:47 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:11:47 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a238a32a00c88b302c7631d7402cd3a543c022c2ba3aed4b7050aab5db379c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:11:47 np0005541914.localdomain podman[321095]: 2025-12-02 10:11:47.406434452 +0000 UTC m=+0.167709114 container init 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 02 10:11:47 np0005541914.localdomain podman[321095]: 2025-12-02 10:11:47.415557192 +0000 UTC m=+0.176831854 container start 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Dec 02 10:11:47 np0005541914.localdomain dnsmasq[321113]: started, version 2.85 cachesize 150
Dec 02 10:11:47 np0005541914.localdomain dnsmasq[321113]: DNS service limited to local subnets
Dec 02 10:11:47 np0005541914.localdomain dnsmasq[321113]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:11:47 np0005541914.localdomain dnsmasq[321113]: warning: no upstream servers configured
Dec 02 10:11:47 np0005541914.localdomain dnsmasq-dhcp[321113]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 02 10:11:47 np0005541914.localdomain dnsmasq[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/addn_hosts - 0 addresses
Dec 02 10:11:47 np0005541914.localdomain dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/host
Dec 02 10:11:47 np0005541914.localdomain dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/opts
Dec 02 10:11:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:11:47 np0005541914.localdomain ceph-mon[301710]: osdmap e179: 6 total, 6 up, 6 in
Dec 02 10:11:47 np0005541914.localdomain ceph-mon[301710]: pgmap v441: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 3.6 MiB/s wr, 97 op/s
Dec 02 10:11:48 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:48.144 262347 INFO neutron.agent.dhcp.agent [None req-8e89a2b8-a075-49a3-b365-0b6cd35efe3e - - - - - -] DHCP configuration for ports {'764c6210-436e-4e65-8738-de5d89857e38'} is completed
Dec 02 10:11:48 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:11:48 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:48 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:11:48 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:11:48 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:11:48 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:11:48 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:48 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:48.593 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:48 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:11:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:11:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:11:48 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:48.832 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:11:48Z, description=, device_id=a96fc995-3987-4eed-90c5-508db1a52dc8, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a72070>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a72f40>], id=e37b6248-e731-4bac-b771-73f326b6c55b, ip_allocation=immediate, mac_address=fa:16:3e:5c:a4:da, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2722, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:11:48Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:11:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:11:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:11:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:11:48 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:11:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v442: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 2.9 MiB/s wr, 131 op/s
Dec 02 10:11:49 np0005541914.localdomain podman[321123]: 2025-12-02 10:11:49.125648578 +0000 UTC m=+0.115721987 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
Dec 02 10:11:49 np0005541914.localdomain podman[321117]: 2025-12-02 10:11:49.098549105 +0000 UTC m=+0.094436533 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 02 10:11:49 np0005541914.localdomain podman[321123]: 2025-12-02 10:11:49.206079599 +0000 UTC m=+0.196153018 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 10:11:49 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:11:49 np0005541914.localdomain podman[321205]: 2025-12-02 10:11:49.239726082 +0000 UTC m=+0.055929469 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:11:49 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:11:49 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:11:49 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:11:49 np0005541914.localdomain podman[321116]: 2025-12-02 10:11:49.155501815 +0000 UTC m=+0.154750526 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:11:49 np0005541914.localdomain podman[321116]: 2025-12-02 10:11:49.285596783 +0000 UTC m=+0.284845494 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:11:49 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:11:49 np0005541914.localdomain podman[321117]: 2025-12-02 10:11:49.33595675 +0000 UTC m=+0.331844178 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:11:49 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:11:49 np0005541914.localdomain podman[321115]: 2025-12-02 10:11:49.344124101 +0000 UTC m=+0.346687184 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:11:49 np0005541914.localdomain podman[321115]: 2025-12-02 10:11:49.425932934 +0000 UTC m=+0.428495957 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:11:49 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:11:49 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e180 e180: 6 total, 6 up, 6 in
Dec 02 10:11:49 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:49.639 262347 INFO neutron.agent.dhcp.agent [None req-246bc6ef-4f58-49a3-bc56-0df20ecec196 - - - - - -] DHCP configuration for ports {'e37b6248-e731-4bac-b771-73f326b6c55b'} is completed
Dec 02 10:11:50 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:11:50.333 2 INFO neutron.agent.securitygroups_rpc [None req-c7a943f3-fab0-4b36-9210-2f6cba57e1de defcf0debbf84a5c9ec6342ae3d02928 8eea084241c14c5d9a6cc0d912041a21 - - default default] Security group member updated ['712bb249-1109-4289-a9cf-1e3d3f6e301e']
Dec 02 10:11:50 np0005541914.localdomain ceph-mon[301710]: pgmap v442: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 2.9 MiB/s wr, 131 op/s
Dec 02 10:11:50 np0005541914.localdomain ceph-mon[301710]: osdmap e180: 6 total, 6 up, 6 in
Dec 02 10:11:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:50.973 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v444: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 24 KiB/s wr, 67 op/s
Dec 02 10:11:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:11:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:11:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:52 np0005541914.localdomain ceph-mon[301710]: pgmap v444: 177 pgs: 177 active+clean; 196 MiB data, 982 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 24 KiB/s wr, 67 op/s
Dec 02 10:11:52 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4069706698' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:52 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4069706698' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice", "format": "json"} v 0)
Dec 02 10:11:52 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:11:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice"} v 0)
Dec 02 10:11:52 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:11:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:11:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:11:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:11:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:11:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v445: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 89 KiB/s rd, 55 KiB/s wr, 129 op/s
Dec 02 10:11:53 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:11:53 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/283666617' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:11:53 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:11:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Dec 02 10:11:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:11:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Dec 02 10:11:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Dec 02 10:11:53 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice", "format": "json"}]: dispatch
Dec 02 10:11:53 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/283666617' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:11:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:53.620 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:54 np0005541914.localdomain ceph-mon[301710]: pgmap v445: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 89 KiB/s rd, 55 KiB/s wr, 129 op/s
Dec 02 10:11:54 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2457589825' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:54 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2457589825' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:54 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e181 e181: 6 total, 6 up, 6 in
Dec 02 10:11:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v447: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 82 KiB/s rd, 51 KiB/s wr, 119 op/s
Dec 02 10:11:55 np0005541914.localdomain ceph-mon[301710]: osdmap e181: 6 total, 6 up, 6 in
Dec 02 10:11:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:56.017 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:11:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:11:56 np0005541914.localdomain podman[321233]: 2025-12-02 10:11:56.108319152 +0000 UTC m=+0.073216491 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:11:56 np0005541914.localdomain podman[321234]: 2025-12-02 10:11:56.174216657 +0000 UTC m=+0.135535306 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 02 10:11:56 np0005541914.localdomain podman[321234]: 2025-12-02 10:11:56.185789803 +0000 UTC m=+0.147108452 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal)
Dec 02 10:11:56 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:11:56 np0005541914.localdomain podman[321233]: 2025-12-02 10:11:56.241096591 +0000 UTC m=+0.205993940 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:11:56 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:11:56 np0005541914.localdomain ceph-mon[301710]: pgmap v447: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 82 KiB/s rd, 51 KiB/s wr, 119 op/s
Dec 02 10:11:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:11:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v448: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 33 KiB/s wr, 68 op/s
Dec 02 10:11:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:11:57 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:11:57 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:11:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:11:57 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:11:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 02 10:11:57 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:11:57 np0005541914.localdomain ceph-mon[301710]: pgmap v448: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 33 KiB/s wr, 68 op/s
Dec 02 10:11:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:11:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:11:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:11:58 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:58.417 262347 INFO neutron.agent.linux.ip_lib [None req-8d0c34da-6416-4236-856d-a92b7d7df30e - - - - - -] Device tap5a59ed58-8e cannot be used as it has no MAC address
Dec 02 10:11:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:58.475 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:58 np0005541914.localdomain kernel: device tap5a59ed58-8e entered promiscuous mode
Dec 02 10:11:58 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:58Z|00205|binding|INFO|Claiming lport 5a59ed58-8e8e-4218-a027-de857358efdd for this chassis.
Dec 02 10:11:58 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:58Z|00206|binding|INFO|5a59ed58-8e8e-4218-a027-de857358efdd: Claiming unknown
Dec 02 10:11:58 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670318.4848] manager: (tap5a59ed58-8e): new Generic device (/org/freedesktop/NetworkManager/Devices/42)
Dec 02 10:11:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:58.486 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:58 np0005541914.localdomain systemd-udevd[321285]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:11:58 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:58.495 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '043cc6f66b444d00959c7dcdb078fbe8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=569352f8-5776-4fd7-bf95-e3a12d36086c, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=5a59ed58-8e8e-4218-a027-de857358efdd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:11:58 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:58.498 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 5a59ed58-8e8e-4218-a027-de857358efdd in datapath dcbd2fde-cd87-4087-93b9-a7b43b07dcbf bound to our chassis
Dec 02 10:11:58 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:58.500 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network dcbd2fde-cd87-4087-93b9-a7b43b07dcbf or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:11:58 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:11:58.502 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[ef320c61-0674-483c-a33a-731a808c274f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:11:58 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 02 10:11:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:58.520 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:58 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 02 10:11:58 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:58Z|00207|binding|INFO|Setting lport 5a59ed58-8e8e-4218-a027-de857358efdd ovn-installed in OVS
Dec 02 10:11:58 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:11:58Z|00208|binding|INFO|Setting lport 5a59ed58-8e8e-4218-a027-de857358efdd up in Southbound
Dec 02 10:11:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:58.523 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:58 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 02 10:11:58 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 02 10:11:58 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 02 10:11:58 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 02 10:11:58 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 02 10:11:58 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap5a59ed58-8e: No such device
Dec 02 10:11:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:58.557 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:58.584 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:11:58.622 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:11:58 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e182 e182: 6 total, 6 up, 6 in
Dec 02 10:11:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v450: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 53 KiB/s wr, 127 op/s
Dec 02 10:11:59 np0005541914.localdomain podman[321354]: 2025-12-02 10:11:59.364313567 +0000 UTC m=+0.097787496 container create 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:11:59 np0005541914.localdomain systemd[1]: Started libpod-conmon-86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2.scope.
Dec 02 10:11:59 np0005541914.localdomain podman[321354]: 2025-12-02 10:11:59.317298943 +0000 UTC m=+0.050772902 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:11:59 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:11:59 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcaf42c28c81b21a1deda3e252f2a551239aaaaf7b56461c20fc989a8f045479/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:11:59 np0005541914.localdomain podman[321354]: 2025-12-02 10:11:59.440438057 +0000 UTC m=+0.173911966 container init 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 10:11:59 np0005541914.localdomain podman[321354]: 2025-12-02 10:11:59.446691159 +0000 UTC m=+0.180165068 container start 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:11:59 np0005541914.localdomain dnsmasq[321373]: started, version 2.85 cachesize 150
Dec 02 10:11:59 np0005541914.localdomain dnsmasq[321373]: DNS service limited to local subnets
Dec 02 10:11:59 np0005541914.localdomain dnsmasq[321373]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:11:59 np0005541914.localdomain dnsmasq[321373]: warning: no upstream servers configured
Dec 02 10:11:59 np0005541914.localdomain dnsmasq-dhcp[321373]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 02 10:11:59 np0005541914.localdomain dnsmasq[321373]: read /var/lib/neutron/dhcp/dcbd2fde-cd87-4087-93b9-a7b43b07dcbf/addn_hosts - 0 addresses
Dec 02 10:11:59 np0005541914.localdomain dnsmasq-dhcp[321373]: read /var/lib/neutron/dhcp/dcbd2fde-cd87-4087-93b9-a7b43b07dcbf/host
Dec 02 10:11:59 np0005541914.localdomain dnsmasq-dhcp[321373]: read /var/lib/neutron/dhcp/dcbd2fde-cd87-4087-93b9-a7b43b07dcbf/opts
Dec 02 10:11:59 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:11:59.630 262347 INFO neutron.agent.dhcp.agent [None req-7af2711b-304d-4fae-9a95-8fb2b43f7bcf - - - - - -] DHCP configuration for ports {'eaffce96-ae12-457e-84d2-2c06058bbc40'} is completed
Dec 02 10:11:59 np0005541914.localdomain ceph-mon[301710]: osdmap e182: 6 total, 6 up, 6 in
Dec 02 10:11:59 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2303854732' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:59 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2303854732' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:59 np0005541914.localdomain ceph-mon[301710]: pgmap v450: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 53 KiB/s wr, 127 op/s
Dec 02 10:11:59 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2930831674' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:11:59 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2930831674' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:11:59 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e183 e183: 6 total, 6 up, 6 in
Dec 02 10:12:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:12:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:01.031 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:01 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:12:01.061 2 INFO neutron.agent.securitygroups_rpc [None req-bd3bdc93-0ba6-42a3-9063-ee94eddd1f8f defcf0debbf84a5c9ec6342ae3d02928 8eea084241c14c5d9a6cc0d912041a21 - - default default] Security group member updated ['712bb249-1109-4289-a9cf-1e3d3f6e301e']
Dec 02 10:12:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v452: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 25 KiB/s wr, 72 op/s
Dec 02 10:12:01 np0005541914.localdomain ceph-mon[301710]: osdmap e183: 6 total, 6 up, 6 in
Dec 02 10:12:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:12:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:12:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 02 10:12:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:12:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:12:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:12:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:12:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:12:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/965134514' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:12:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:12:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/965134514' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:12:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:12:02 np0005541914.localdomain ceph-mon[301710]: pgmap v452: 177 pgs: 177 active+clean; 197 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 25 KiB/s wr, 72 op/s
Dec 02 10:12:02 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:12:02 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:12:02 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:12:02 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 02 10:12:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:12:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/965134514' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:12:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/965134514' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:12:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e184 e184: 6 total, 6 up, 6 in
Dec 02 10:12:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:12:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:12:03 np0005541914.localdomain podman[321375]: 2025-12-02 10:12:03.049841292 +0000 UTC m=+0.053880256 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:12:03 np0005541914.localdomain podman[321375]: 2025-12-02 10:12:03.061640924 +0000 UTC m=+0.065679878 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:12:03 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:12:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v454: 177 pgs: 177 active+clean; 197 MiB data, 988 MiB used, 41 GiB / 42 GiB avail; 174 KiB/s rd, 55 KiB/s wr, 239 op/s
Dec 02 10:12:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:03.182 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:12:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:03.182 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:12:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:03.182 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:12:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2146629484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:12:03 np0005541914.localdomain ceph-mon[301710]: osdmap e184: 6 total, 6 up, 6 in
Dec 02 10:12:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3335471670' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:12:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3335471670' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:12:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1371812197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:12:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:03.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:12:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:03.625 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:12:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:12:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:12:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 160393 "" "Go-http-client/1.1"
Dec 02 10:12:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:12:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20155 "" "Go-http-client/1.1"
Dec 02 10:12:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:12:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:12:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:12:03 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:12:03 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice_bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:12:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:12:03 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:12:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice_bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:12:04 np0005541914.localdomain ceph-mon[301710]: pgmap v454: 177 pgs: 177 active+clean; 197 MiB data, 988 MiB used, 41 GiB / 42 GiB avail; 174 KiB/s rd, 55 KiB/s wr, 239 op/s
Dec 02 10:12:04 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:12:04 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:12:04 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:12:04 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:12:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:12:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1431900446' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:12:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:12:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1431900446' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:12:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v455: 177 pgs: 177 active+clean; 197 MiB data, 988 MiB used, 41 GiB / 42 GiB avail; 108 KiB/s rd, 26 KiB/s wr, 149 op/s
Dec 02 10:12:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:12:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1431900446' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:12:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1431900446' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:12:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4271932895' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:12:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:05.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:12:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:06.037 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:06 np0005541914.localdomain ceph-mon[301710]: pgmap v455: 177 pgs: 177 active+clean; 197 MiB data, 988 MiB used, 41 GiB / 42 GiB avail; 108 KiB/s rd, 26 KiB/s wr, 149 op/s
Dec 02 10:12:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e185 e185: 6 total, 6 up, 6 in
Dec 02 10:12:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:06.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:12:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:06.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:12:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:06.556 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:12:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:06.557 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:12:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:06.557 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:12:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:06.557 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:12:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:06.558 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:12:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e186 e186: 6 total, 6 up, 6 in
Dec 02 10:12:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:12:06
Dec 02 10:12:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:12:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:12:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['.mgr', 'manila_data', 'images', 'backups', 'vms', 'volumes', 'manila_metadata']
Dec 02 10:12:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:12:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:12:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:12:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:12:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/200831030' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:12:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:07.015 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v458: 177 pgs: 177 active+clean; 197 MiB data, 988 MiB used, 41 GiB / 42 GiB avail; 116 KiB/s rd, 28 KiB/s wr, 160 op/s
Dec 02 10:12:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:07.198 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:12:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:07.200 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11510MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:12:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:07.201 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:12:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:07.202 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014844731469849247 of space, bias 1.0, pg target 0.2963998050146566 quantized to 32 (current 32)
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016276041666666666 quantized to 32 (current 32)
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0003675059324399777 of space, bias 4.0, pg target 0.2925347222222222 quantized to 16 (current 16)
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:12:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:07.272 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:12:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:07.272 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:12:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:07.314 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} v 0)
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice_bob"} v 0)
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: osdmap e185: 6 total, 6 up, 6 in
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: osdmap e186: 6 total, 6 up, 6 in
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/200831030' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice_bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice_bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:12:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice_bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e187 e187: 6 total, 6 up, 6 in
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:12:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2956956327' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:12:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:07.821 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:12:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:07.827 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:12:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:07.901 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:12:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:07.903 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:12:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:07.904 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:12:08 np0005541914.localdomain ceph-mon[301710]: pgmap v458: 177 pgs: 177 active+clean; 197 MiB data, 988 MiB used, 41 GiB / 42 GiB avail; 116 KiB/s rd, 28 KiB/s wr, 160 op/s
Dec 02 10:12:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:12:08 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Dec 02 10:12:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice_bob", "format": "json"}]: dispatch
Dec 02 10:12:08 np0005541914.localdomain ceph-mon[301710]: osdmap e187: 6 total, 6 up, 6 in
Dec 02 10:12:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2956956327' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:12:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:08.627 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:08.905 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:12:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:08.923 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:12:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:08.923 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:12:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:08.924 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:12:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:08.941 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:12:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:08.942 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:12:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:08.942 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:12:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v460: 177 pgs: 177 active+clean; 243 MiB data, 1002 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Dec 02 10:12:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e188 e188: 6 total, 6 up, 6 in
Dec 02 10:12:09 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:09.607 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:09Z, description=, device_id=ff168046-1219-4329-be5a-02b35c99fef5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a2a760>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034d46550>], id=fdc48f44-b0e1-4b3a-b889-2b67e2d1c8c7, ip_allocation=immediate, mac_address=fa:16:3e:e1:29:46, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:11:38Z, description=, dns_domain=, id=d7575463-fed8-42a9-b848-634ac68ed078, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeTest-test-network-2057032213, port_security_enabled=True, project_id=043cc6f66b444d00959c7dcdb078fbe8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54044, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2706, status=ACTIVE, subnets=['b2084c2f-9ef2-4632-8e41-02c37dcc4849'], tags=[], tenant_id=043cc6f66b444d00959c7dcdb078fbe8, updated_at=2025-12-02T10:11:41Z, vlan_transparent=None, network_id=d7575463-fed8-42a9-b848-634ac68ed078, port_security_enabled=False, project_id=043cc6f66b444d00959c7dcdb078fbe8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2781, status=DOWN, tags=[], tenant_id=043cc6f66b444d00959c7dcdb078fbe8, updated_at=2025-12-02T10:12:09Z on network d7575463-fed8-42a9-b848-634ac68ed078
Dec 02 10:12:09 np0005541914.localdomain podman[321456]: 2025-12-02 10:12:09.942643514 +0000 UTC m=+0.066250376 container kill 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:12:09 np0005541914.localdomain dnsmasq[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/addn_hosts - 1 addresses
Dec 02 10:12:09 np0005541914.localdomain dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/host
Dec 02 10:12:09 np0005541914.localdomain dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/opts
Dec 02 10:12:10 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:10.228 262347 INFO neutron.agent.dhcp.agent [None req-463e93d4-7f91-4ad0-a6db-ae4ca58f0bc2 - - - - - -] DHCP configuration for ports {'fdc48f44-b0e1-4b3a-b889-2b67e2d1c8c7'} is completed
Dec 02 10:12:10 np0005541914.localdomain ceph-mon[301710]: pgmap v460: 177 pgs: 177 active+clean; 243 MiB data, 1002 MiB used, 41 GiB / 42 GiB avail; 3.5 MiB/s rd, 3.6 MiB/s wr, 118 op/s
Dec 02 10:12:10 np0005541914.localdomain ceph-mon[301710]: osdmap e188: 6 total, 6 up, 6 in
Dec 02 10:12:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/658691468' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:12:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/658691468' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:12:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:10.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:12:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:12:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:12:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:12:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:12:10 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:12:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:12:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:12:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:12:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:11.042 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v462: 177 pgs: 177 active+clean; 243 MiB data, 1002 MiB used, 41 GiB / 42 GiB avail; 4.4 MiB/s rd, 4.5 MiB/s wr, 149 op/s
Dec 02 10:12:11 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:12:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:12:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:12:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:12:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:12:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/657463899' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:12:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/657463899' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:12:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:11.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:12:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:11.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:12:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e189 e189: 6 total, 6 up, 6 in
Dec 02 10:12:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:12:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:12:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:12:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:12:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:12:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:12:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:12:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:12:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:12:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:12:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:12:12 np0005541914.localdomain ceph-mon[301710]: pgmap v462: 177 pgs: 177 active+clean; 243 MiB data, 1002 MiB used, 41 GiB / 42 GiB avail; 4.4 MiB/s rd, 4.5 MiB/s wr, 149 op/s
Dec 02 10:12:12 np0005541914.localdomain ceph-mon[301710]: osdmap e189: 6 total, 6 up, 6 in
Dec 02 10:12:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/325227520' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:12:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/325227520' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:12:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1003594229' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:12:12 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:12.764 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:09Z, description=, device_id=ff168046-1219-4329-be5a-02b35c99fef5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a68400>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a68d60>], id=fdc48f44-b0e1-4b3a-b889-2b67e2d1c8c7, ip_allocation=immediate, mac_address=fa:16:3e:e1:29:46, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:11:38Z, description=, dns_domain=, id=d7575463-fed8-42a9-b848-634ac68ed078, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersNegativeTest-test-network-2057032213, port_security_enabled=True, project_id=043cc6f66b444d00959c7dcdb078fbe8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=54044, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2706, status=ACTIVE, subnets=['b2084c2f-9ef2-4632-8e41-02c37dcc4849'], tags=[], tenant_id=043cc6f66b444d00959c7dcdb078fbe8, updated_at=2025-12-02T10:11:41Z, vlan_transparent=None, network_id=d7575463-fed8-42a9-b848-634ac68ed078, port_security_enabled=False, project_id=043cc6f66b444d00959c7dcdb078fbe8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2781, status=DOWN, tags=[], tenant_id=043cc6f66b444d00959c7dcdb078fbe8, updated_at=2025-12-02T10:12:09Z on network d7575463-fed8-42a9-b848-634ac68ed078
Dec 02 10:12:13 np0005541914.localdomain systemd[1]: tmp-crun.DBCrcP.mount: Deactivated successfully.
Dec 02 10:12:13 np0005541914.localdomain dnsmasq[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/addn_hosts - 1 addresses
Dec 02 10:12:13 np0005541914.localdomain dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/host
Dec 02 10:12:13 np0005541914.localdomain dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/opts
Dec 02 10:12:13 np0005541914.localdomain podman[321493]: 2025-12-02 10:12:13.005379802 +0000 UTC m=+0.070425025 container kill 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:12:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v464: 177 pgs: 177 active+clean; 197 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 274 op/s
Dec 02 10:12:13 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:13.193 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:12Z, description=, device_id=11fb415a-fd46-4f5e-91b3-127c51ee0b41, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034b27c10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ab2ee0>], id=35bde078-363a-4aaf-a0b9-6375ba936eaf, ip_allocation=immediate, mac_address=fa:16:3e:cb:64:57, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2790, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:12:12Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:12:13 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:12:13.251 2 INFO neutron.agent.securitygroups_rpc [None req-1b9cf27d-4f71-42a8-aff0-a386ad5e469f 27e8ee5045c2430583000f8d62f6e4f1 096ffa0a51b143039159efc232ec547a - - default default] Security group member updated ['0a7d83ca-acbf-4932-884e-9eff3b0bc0ff']
Dec 02 10:12:13 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:13.307 262347 INFO neutron.agent.dhcp.agent [None req-ead53d56-2f7b-48ee-b3c3-5443783ab476 - - - - - -] DHCP configuration for ports {'fdc48f44-b0e1-4b3a-b889-2b67e2d1c8c7'} is completed
Dec 02 10:12:13 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:12:13 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:12:13 np0005541914.localdomain podman[321531]: 2025-12-02 10:12:13.419379223 +0000 UTC m=+0.059675075 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 10:12:13 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:12:13 np0005541914.localdomain systemd[1]: tmp-crun.R3SS6a.mount: Deactivated successfully.
Dec 02 10:12:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:13.629 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:13 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:13.637 262347 INFO neutron.agent.dhcp.agent [None req-3f4bc7e2-5de8-4280-899a-8f4c7281456e - - - - - -] DHCP configuration for ports {'35bde078-363a-4aaf-a0b9-6375ba936eaf'} is completed
Dec 02 10:12:13 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2361950633' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:12:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e190 e190: 6 total, 6 up, 6 in
Dec 02 10:12:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:12:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:12:14 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:12:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 02 10:12:14 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:12:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:12:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:12:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:12:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:14 np0005541914.localdomain dnsmasq[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/addn_hosts - 0 addresses
Dec 02 10:12:14 np0005541914.localdomain dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/host
Dec 02 10:12:14 np0005541914.localdomain dnsmasq-dhcp[321113]: read /var/lib/neutron/dhcp/d7575463-fed8-42a9-b848-634ac68ed078/opts
Dec 02 10:12:14 np0005541914.localdomain podman[321570]: 2025-12-02 10:12:14.445551373 +0000 UTC m=+0.060188111 container kill 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:12:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:14.664 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:14 np0005541914.localdomain kernel: device tap60565337-ba left promiscuous mode
Dec 02 10:12:14 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:14Z|00209|binding|INFO|Releasing lport 60565337-ba9f-460c-b321-9bed6bae4c6b from this chassis (sb_readonly=0)
Dec 02 10:12:14 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:14Z|00210|binding|INFO|Setting lport 60565337-ba9f-460c-b321-9bed6bae4c6b down in Southbound
Dec 02 10:12:14 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:14.672 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-d7575463-fed8-42a9-b848-634ac68ed078', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7575463-fed8-42a9-b848-634ac68ed078', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '043cc6f66b444d00959c7dcdb078fbe8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3d9cd90-16cc-47f0-86ae-247f0a618c23, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=60565337-ba9f-460c-b321-9bed6bae4c6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:12:14 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:14.674 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 60565337-ba9f-460c-b321-9bed6bae4c6b in datapath d7575463-fed8-42a9-b848-634ac68ed078 unbound from our chassis
Dec 02 10:12:14 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:14.677 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7575463-fed8-42a9-b848-634ac68ed078, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:12:14 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:14.678 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[2b20a0ab-386d-4dca-b9b4-5071f4e74ebf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:12:14 np0005541914.localdomain ceph-mon[301710]: pgmap v464: 177 pgs: 177 active+clean; 197 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 274 op/s
Dec 02 10:12:14 np0005541914.localdomain ceph-mon[301710]: osdmap e190: 6 total, 6 up, 6 in
Dec 02 10:12:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:12:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:12:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:12:14 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 02 10:12:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:14.689 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:14 np0005541914.localdomain sshd[321594]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:12:14 np0005541914.localdomain sshd[321594]: Connection closed by 45.79.172.21 port 58982 [preauth]
Dec 02 10:12:14 np0005541914.localdomain sshd[321596]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:12:14 np0005541914.localdomain sshd[321596]: Connection closed by 45.79.172.21 port 58988 [preauth]
Dec 02 10:12:14 np0005541914.localdomain sshd[321598]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:12:14 np0005541914.localdomain sshd[321598]: Connection closed by 45.79.172.21 port 58992 [preauth]
Dec 02 10:12:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v466: 177 pgs: 177 active+clean; 197 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 110 KiB/s rd, 33 KiB/s wr, 156 op/s
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.443 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:12:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:12:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:12:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:12:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:12:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < ""
Dec 02 10:12:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta.tmp'
Dec 02 10:12:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta.tmp' to config b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta'
Dec 02 10:12:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < ""
Dec 02 10:12:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "format": "json"}]: dispatch
Dec 02 10:12:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < ""
Dec 02 10:12:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < ""
Dec 02 10:12:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:16.075 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:16 np0005541914.localdomain podman[321616]: 2025-12-02 10:12:16.46551188 +0000 UTC m=+0.048369157 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 02 10:12:16 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:12:16 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:12:16 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:12:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e191 e191: 6 total, 6 up, 6 in
Dec 02 10:12:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:16.647 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:16 np0005541914.localdomain ceph-mon[301710]: pgmap v466: 177 pgs: 177 active+clean; 197 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 110 KiB/s rd, 33 KiB/s wr, 156 op/s
Dec 02 10:12:16 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:12:16 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "format": "json"}]: dispatch
Dec 02 10:12:16 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:12:16 np0005541914.localdomain ceph-mon[301710]: osdmap e191: 6 total, 6 up, 6 in
Dec 02 10:12:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:12:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:12:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:12:17 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:12:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v468: 177 pgs: 177 active+clean; 197 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 110 KiB/s rd, 33 KiB/s wr, 156 op/s
Dec 02 10:12:17 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID alice bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:12:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:12:17 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:12:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:r, auth_id:alice bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:12:17 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:12:17.471 2 INFO neutron.agent.securitygroups_rpc [None req-12cf221e-0940-4511-92ba-1f5763df32bf 27e8ee5045c2430583000f8d62f6e4f1 096ffa0a51b143039159efc232ec547a - - default default] Security group member updated ['0a7d83ca-acbf-4932-884e-9eff3b0bc0ff']
Dec 02 10:12:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:12:17 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "r", "format": "json"}]: dispatch
Dec 02 10:12:17 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:12:17 np0005541914.localdomain ceph-mon[301710]: pgmap v468: 177 pgs: 177 active+clean; 197 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 110 KiB/s rd, 33 KiB/s wr, 156 op/s
Dec 02 10:12:17 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:12:17 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:12:17 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow r pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:12:17 np0005541914.localdomain systemd[1]: tmp-crun.jqqcn0.mount: Deactivated successfully.
Dec 02 10:12:17 np0005541914.localdomain dnsmasq[321373]: exiting on receipt of SIGTERM
Dec 02 10:12:17 np0005541914.localdomain podman[321656]: 2025-12-02 10:12:17.88513119 +0000 UTC m=+0.083184527 container kill 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:12:17 np0005541914.localdomain systemd[1]: libpod-86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2.scope: Deactivated successfully.
Dec 02 10:12:17 np0005541914.localdomain podman[321670]: 2025-12-02 10:12:17.955722819 +0000 UTC m=+0.055484496 container died 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 10:12:17 np0005541914.localdomain podman[321670]: 2025-12-02 10:12:17.983615366 +0000 UTC m=+0.083376983 container cleanup 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 10:12:17 np0005541914.localdomain systemd[1]: libpod-conmon-86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2.scope: Deactivated successfully.
Dec 02 10:12:18 np0005541914.localdomain podman[321672]: 2025-12-02 10:12:18.02638203 +0000 UTC m=+0.119274776 container remove 86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 10:12:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:18.075 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:18 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:18Z|00211|binding|INFO|Releasing lport 5a59ed58-8e8e-4218-a027-de857358efdd from this chassis (sb_readonly=0)
Dec 02 10:12:18 np0005541914.localdomain kernel: device tap5a59ed58-8e left promiscuous mode
Dec 02 10:12:18 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:18Z|00212|binding|INFO|Setting lport 5a59ed58-8e8e-4218-a027-de857358efdd down in Southbound
Dec 02 10:12:18 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:18.084 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dcbd2fde-cd87-4087-93b9-a7b43b07dcbf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '043cc6f66b444d00959c7dcdb078fbe8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=569352f8-5776-4fd7-bf95-e3a12d36086c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=5a59ed58-8e8e-4218-a027-de857358efdd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:12:18 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:18.085 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 5a59ed58-8e8e-4218-a027-de857358efdd in datapath dcbd2fde-cd87-4087-93b9-a7b43b07dcbf unbound from our chassis
Dec 02 10:12:18 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:18.087 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dcbd2fde-cd87-4087-93b9-a7b43b07dcbf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:12:18 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:18.088 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[c70fd4f2-38e6-45d2-bd77-f9803fb144f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:12:18 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:12:18 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1842775925' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:12:18 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:12:18 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1842775925' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:12:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:18.098 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:18 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:12:18 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:12:18 np0005541914.localdomain podman[321717]: 2025-12-02 10:12:18.13182419 +0000 UTC m=+0.045861960 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:12:18 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:12:18 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:18.575 262347 INFO neutron.agent.dhcp.agent [None req-09c6e4f1-e646-4647-90b1-164dcdf57259 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:12:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:18.631 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:18 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1842775925' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:12:18 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1842775925' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:12:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-dcaf42c28c81b21a1deda3e252f2a551239aaaaf7b56461c20fc989a8f045479-merged.mount: Deactivated successfully.
Dec 02 10:12:18 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-86a9a803da6e1bb3fb90ee5df28e010492ff3a894d8e2ac70ca9cba716799ce2-userdata-shm.mount: Deactivated successfully.
Dec 02 10:12:18 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2ddcbd2fde\x2dcd87\x2d4087\x2d93b9\x2da7b43b07dcbf.mount: Deactivated successfully.
Dec 02 10:12:19 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:19.099 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:12:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "snap_name": "a0843ffe-d6ab-48e1-a5c8-33bfbdacd761", "format": "json"}]: dispatch
Dec 02 10:12:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a0843ffe-d6ab-48e1-a5c8-33bfbdacd761, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < ""
Dec 02 10:12:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:a0843ffe-d6ab-48e1-a5c8-33bfbdacd761, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < ""
Dec 02 10:12:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v469: 177 pgs: 177 active+clean; 198 MiB data, 979 MiB used, 41 GiB / 42 GiB avail; 125 KiB/s rd, 71 KiB/s wr, 180 op/s
Dec 02 10:12:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:19.376 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "snap_name": "a0843ffe-d6ab-48e1-a5c8-33bfbdacd761", "format": "json"}]: dispatch
Dec 02 10:12:19 np0005541914.localdomain ceph-mon[301710]: pgmap v469: 177 pgs: 177 active+clean; 198 MiB data, 979 MiB used, 41 GiB / 42 GiB avail; 125 KiB/s rd, 71 KiB/s wr, 180 op/s
Dec 02 10:12:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:12:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:12:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:12:19 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:12:20 np0005541914.localdomain podman[321738]: 2025-12-02 10:12:20.080916489 +0000 UTC m=+0.085486677 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 02 10:12:20 np0005541914.localdomain systemd[1]: tmp-crun.kOgTsV.mount: Deactivated successfully.
Dec 02 10:12:20 np0005541914.localdomain podman[321739]: 2025-12-02 10:12:20.134590129 +0000 UTC m=+0.135728232 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:12:20 np0005541914.localdomain podman[321739]: 2025-12-02 10:12:20.146182695 +0000 UTC m=+0.147320778 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 10:12:20 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:12:20 np0005541914.localdomain podman[321738]: 2025-12-02 10:12:20.163976771 +0000 UTC m=+0.168546929 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 02 10:12:20 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:12:20 np0005541914.localdomain podman[321746]: 2025-12-02 10:12:20.149444105 +0000 UTC m=+0.141645104 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:12:20 np0005541914.localdomain podman[321746]: 2025-12-02 10:12:20.229500605 +0000 UTC m=+0.221701624 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:12:20 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:12:20 np0005541914.localdomain podman[321740]: 2025-12-02 10:12:20.290978933 +0000 UTC m=+0.289121714 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:12:20 np0005541914.localdomain podman[321740]: 2025-12-02 10:12:20.304845679 +0000 UTC m=+0.302988450 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Dec 02 10:12:20 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:12:20 np0005541914.localdomain dnsmasq[321113]: exiting on receipt of SIGTERM
Dec 02 10:12:20 np0005541914.localdomain podman[321835]: 2025-12-02 10:12:20.384620771 +0000 UTC m=+0.039907687 container kill 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 02 10:12:20 np0005541914.localdomain systemd[1]: libpod-31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f.scope: Deactivated successfully.
Dec 02 10:12:20 np0005541914.localdomain podman[321851]: 2025-12-02 10:12:20.427402335 +0000 UTC m=+0.030785847 container died 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 02 10:12:20 np0005541914.localdomain podman[321851]: 2025-12-02 10:12:20.445224724 +0000 UTC m=+0.048608216 container cleanup 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:12:20 np0005541914.localdomain systemd[1]: libpod-conmon-31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f.scope: Deactivated successfully.
Dec 02 10:12:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:12:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:20 np0005541914.localdomain podman[321850]: 2025-12-02 10:12:20.514326416 +0000 UTC m=+0.116237572 container remove 31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d7575463-fed8-42a9-b848-634ac68ed078, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 10:12:20 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.alice bob", "format": "json"} v 0)
Dec 02 10:12:20 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:12:20 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.alice bob"} v 0)
Dec 02 10:12:20 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:12:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:alice bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:12:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=alice bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:12:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:12:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:alice bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:20 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:20.709 262347 INFO neutron.agent.dhcp.agent [None req-4e982f42-dd31-44e3-9576-1423f2ff4236 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:12:20 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:12:20 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Dec 02 10:12:20 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:12:20 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Dec 02 10:12:20 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Dec 02 10:12:20 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "alice bob", "format": "json"}]: dispatch
Dec 02 10:12:20 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:20.820 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:12:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-8a238a32a00c88b302c7631d7402cd3a543c022c2ba3aed4b7050aab5db379c8-merged.mount: Deactivated successfully.
Dec 02 10:12:21 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31d3655aabd707fc14d4655f9bf3400ff36470ced2ee3c0a77a0dc1212f8950f-userdata-shm.mount: Deactivated successfully.
Dec 02 10:12:21 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2dd7575463\x2dfed8\x2d42a9\x2db848\x2d634ac68ed078.mount: Deactivated successfully.
Dec 02 10:12:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:21.077 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v470: 177 pgs: 177 active+clean; 198 MiB data, 979 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 42 KiB/s wr, 52 op/s
Dec 02 10:12:21 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:21.423 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:12:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e192 e192: 6 total, 6 up, 6 in
Dec 02 10:12:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:12:22 np0005541914.localdomain ceph-mon[301710]: pgmap v470: 177 pgs: 177 active+clean; 198 MiB data, 979 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 42 KiB/s wr, 52 op/s
Dec 02 10:12:22 np0005541914.localdomain ceph-mon[301710]: osdmap e192: 6 total, 6 up, 6 in
Dec 02 10:12:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "snap_name": "a0843ffe-d6ab-48e1-a5c8-33bfbdacd761_e41a686a-024b-44d9-a830-e637ced25120", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a0843ffe-d6ab-48e1-a5c8-33bfbdacd761_e41a686a-024b-44d9-a830-e637ced25120, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < ""
Dec 02 10:12:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta.tmp'
Dec 02 10:12:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta.tmp' to config b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta'
Dec 02 10:12:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a0843ffe-d6ab-48e1-a5c8-33bfbdacd761_e41a686a-024b-44d9-a830-e637ced25120, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < ""
Dec 02 10:12:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "snap_name": "a0843ffe-d6ab-48e1-a5c8-33bfbdacd761", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a0843ffe-d6ab-48e1-a5c8-33bfbdacd761, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < ""
Dec 02 10:12:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta.tmp'
Dec 02 10:12:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta.tmp' to config b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376/.meta'
Dec 02 10:12:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:a0843ffe-d6ab-48e1-a5c8-33bfbdacd761, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < ""
Dec 02 10:12:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v472: 177 pgs: 177 active+clean; 198 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 68 KiB/s wr, 55 op/s
Dec 02 10:12:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:23.634 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:12:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:12:23 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 02 10:12:23 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 02 10:12:23 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID bob with tenant a241a07e4161486091e8de3f95a1d6c6
Dec 02 10:12:23 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:12:23 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:12:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:12:24 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:24.358 262347 INFO neutron.agent.linux.ip_lib [None req-10d4fab1-1edf-451c-862e-68882ca8de40 - - - - - -] Device tap2f12f501-39 cannot be used as it has no MAC address
Dec 02 10:12:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:24.422 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:24 np0005541914.localdomain kernel: device tap2f12f501-39 entered promiscuous mode
Dec 02 10:12:24 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670344.4307] manager: (tap2f12f501-39): new Generic device (/org/freedesktop/NetworkManager/Devices/43)
Dec 02 10:12:24 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:24Z|00213|binding|INFO|Claiming lport 2f12f501-3942-418c-89e6-d03f08b5b903 for this chassis.
Dec 02 10:12:24 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:24Z|00214|binding|INFO|2f12f501-3942-418c-89e6-d03f08b5b903: Claiming unknown
Dec 02 10:12:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:24.432 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:24 np0005541914.localdomain systemd-udevd[321886]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:12:24 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:24.443 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-0ed6501a-31af-475e-83c5-b9d22d72adda', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ed6501a-31af-475e-83c5-b9d22d72adda', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a1854cb9cd7e49c4a6a223acc8d74075', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=290a207a-1202-4b25-ae81-ba8a96163204, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=2f12f501-3942-418c-89e6-d03f08b5b903) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:12:24 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:24.445 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 2f12f501-3942-418c-89e6-d03f08b5b903 in datapath 0ed6501a-31af-475e-83c5-b9d22d72adda bound to our chassis
Dec 02 10:12:24 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:24.446 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0ed6501a-31af-475e-83c5-b9d22d72adda or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:12:24 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:24.447 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[59520c80-154e-4459-b2ae-203ec0c57b34]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:12:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap2f12f501-39: No such device
Dec 02 10:12:24 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:24Z|00215|binding|INFO|Setting lport 2f12f501-3942-418c-89e6-d03f08b5b903 ovn-installed in OVS
Dec 02 10:12:24 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:24Z|00216|binding|INFO|Setting lport 2f12f501-3942-418c-89e6-d03f08b5b903 up in Southbound
Dec 02 10:12:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:24.465 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap2f12f501-39: No such device
Dec 02 10:12:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap2f12f501-39: No such device
Dec 02 10:12:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap2f12f501-39: No such device
Dec 02 10:12:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap2f12f501-39: No such device
Dec 02 10:12:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap2f12f501-39: No such device
Dec 02 10:12:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap2f12f501-39: No such device
Dec 02 10:12:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap2f12f501-39: No such device
Dec 02 10:12:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:24.499 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:24.528 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:24 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "snap_name": "a0843ffe-d6ab-48e1-a5c8-33bfbdacd761_e41a686a-024b-44d9-a830-e637ced25120", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:24 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "snap_name": "a0843ffe-d6ab-48e1-a5c8-33bfbdacd761", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:24 np0005541914.localdomain ceph-mon[301710]: pgmap v472: 177 pgs: 177 active+clean; 198 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 68 KiB/s wr, 55 op/s
Dec 02 10:12:24 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:12:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 02 10:12:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:12:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:12:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:12:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e193 e193: 6 total, 6 up, 6 in
Dec 02 10:12:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v474: 177 pgs: 177 active+clean; 198 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 68 KiB/s wr, 55 op/s
Dec 02 10:12:25 np0005541914.localdomain podman[321956]: 
Dec 02 10:12:25 np0005541914.localdomain podman[321956]: 2025-12-02 10:12:25.343402398 +0000 UTC m=+0.084882019 container create b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 02 10:12:25 np0005541914.localdomain systemd[1]: Started libpod-conmon-b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071.scope.
Dec 02 10:12:25 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:12:25 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2824933fa3ea201d2880e13a9881e7be88073d7cf00c05154858ec9e5ab1017d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:12:25 np0005541914.localdomain podman[321956]: 2025-12-02 10:12:25.396117367 +0000 UTC m=+0.137596998 container init b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:12:25 np0005541914.localdomain podman[321956]: 2025-12-02 10:12:25.304227854 +0000 UTC m=+0.045707475 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:12:25 np0005541914.localdomain podman[321956]: 2025-12-02 10:12:25.407427985 +0000 UTC m=+0.148907636 container start b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 02 10:12:25 np0005541914.localdomain dnsmasq[321974]: started, version 2.85 cachesize 150
Dec 02 10:12:25 np0005541914.localdomain dnsmasq[321974]: DNS service limited to local subnets
Dec 02 10:12:25 np0005541914.localdomain dnsmasq[321974]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:12:25 np0005541914.localdomain dnsmasq[321974]: warning: no upstream servers configured
Dec 02 10:12:25 np0005541914.localdomain dnsmasq-dhcp[321974]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 02 10:12:25 np0005541914.localdomain dnsmasq[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/addn_hosts - 0 addresses
Dec 02 10:12:25 np0005541914.localdomain dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/host
Dec 02 10:12:25 np0005541914.localdomain dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/opts
Dec 02 10:12:25 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:25.602 262347 INFO neutron.agent.dhcp.agent [None req-6ca692d4-9cdd-4385-8125-bb7622ea7b1e - - - - - -] DHCP configuration for ports {'66e17cf8-df51-450a-8146-fab121409d73'} is completed
Dec 02 10:12:25 np0005541914.localdomain ceph-mon[301710]: osdmap e193: 6 total, 6 up, 6 in
Dec 02 10:12:25 np0005541914.localdomain ceph-mon[301710]: pgmap v474: 177 pgs: 177 active+clean; 198 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 68 KiB/s wr, 55 op/s
Dec 02 10:12:25 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e194 e194: 6 total, 6 up, 6 in
Dec 02 10:12:26 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:26.065 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:25Z, description=, device_id=8df809b8-facb-40b9-bb8b-01b96dff964d, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a50b50>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a50940>], id=c4b1ca5d-6e39-4259-ad69-632c6ab0e0c6, ip_allocation=immediate, mac_address=fa:16:3e:13:f2:f0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2856, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:12:25Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:12:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:26.080 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "format": "json"}]: dispatch
Dec 02 10:12:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:884b3444-4a7a-4744-9a4b-7d6039625376, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:12:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:884b3444-4a7a-4744-9a4b-7d6039625376, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:12:26 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:12:26.218+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '884b3444-4a7a-4744-9a4b-7d6039625376' of type subvolume
Dec 02 10:12:26 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '884b3444-4a7a-4744-9a4b-7d6039625376' of type subvolume
Dec 02 10:12:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < ""
Dec 02 10:12:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:12:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/884b3444-4a7a-4744-9a4b-7d6039625376'' moved to trashcan
Dec 02 10:12:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:12:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:884b3444-4a7a-4744-9a4b-7d6039625376, vol_name:cephfs) < ""
Dec 02 10:12:26 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:12:26 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:12:26 np0005541914.localdomain podman[321990]: 2025-12-02 10:12:26.310337858 +0000 UTC m=+0.067198666 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:12:26 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:12:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:12:26 np0005541914.localdomain systemd[1]: tmp-crun.zOcbl5.mount: Deactivated successfully.
Dec 02 10:12:26 np0005541914.localdomain podman[321991]: 2025-12-02 10:12:26.393735801 +0000 UTC m=+0.146401699 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_id=edpm, release=1755695350, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Dec 02 10:12:26 np0005541914.localdomain podman[321991]: 2025-12-02 10:12:26.407777472 +0000 UTC m=+0.160443330 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Dec 02 10:12:26 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:12:26 np0005541914.localdomain podman[322018]: 2025-12-02 10:12:26.449516765 +0000 UTC m=+0.076395958 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:12:26 np0005541914.localdomain podman[322018]: 2025-12-02 10:12:26.486955225 +0000 UTC m=+0.113834418 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:12:26 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:12:26 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:26.560 262347 INFO neutron.agent.dhcp.agent [None req-1dfcae20-88e2-4169-95a4-2b08e9d5a9bb - - - - - -] DHCP configuration for ports {'c4b1ca5d-6e39-4259-ad69-632c6ab0e0c6'} is completed
Dec 02 10:12:26 np0005541914.localdomain ceph-mon[301710]: osdmap e194: 6 total, 6 up, 6 in
Dec 02 10:12:26 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "format": "json"}]: dispatch
Dec 02 10:12:26 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "884b3444-4a7a-4744-9a4b-7d6039625376", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:26 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e195 e195: 6 total, 6 up, 6 in
Dec 02 10:12:27 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:27.051 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:26Z, description=, device_id=48efaaba-b00b-49e5-9453-c3674fe08fa7, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a2aa30>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a2a4c0>], id=cd561d24-857a-48f4-96ac-6e7e7342afec, ip_allocation=immediate, mac_address=fa:16:3e:1d:40:1e, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2860, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:12:26Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:12:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v477: 177 pgs: 177 active+clean; 198 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s wr, 4 op/s
Dec 02 10:12:27 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:12:27 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:12:27 np0005541914.localdomain podman[322069]: 2025-12-02 10:12:27.255339865 +0000 UTC m=+0.051084971 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3)
Dec 02 10:12:27 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:12:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:12:27 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:27.522 262347 INFO neutron.agent.dhcp.agent [None req-8ea8bd61-bb0c-4381-b54b-4234e8bd708e - - - - - -] DHCP configuration for ports {'cd561d24-857a-48f4-96ac-6e7e7342afec'} is completed
Dec 02 10:12:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e196 e196: 6 total, 6 up, 6 in
Dec 02 10:12:27 np0005541914.localdomain ceph-mon[301710]: osdmap e195: 6 total, 6 up, 6 in
Dec 02 10:12:27 np0005541914.localdomain ceph-mon[301710]: pgmap v477: 177 pgs: 177 active+clean; 198 MiB data, 983 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s wr, 4 op/s
Dec 02 10:12:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:12:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < ""
Dec 02 10:12:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/.meta.tmp'
Dec 02 10:12:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/.meta.tmp' to config b'/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/.meta'
Dec 02 10:12:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < ""
Dec 02 10:12:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "format": "json"}]: dispatch
Dec 02 10:12:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < ""
Dec 02 10:12:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < ""
Dec 02 10:12:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:28.190 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:28.638 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:29 np0005541914.localdomain ceph-mon[301710]: osdmap e196: 6 total, 6 up, 6 in
Dec 02 10:12:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:12:29 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e197 e197: 6 total, 6 up, 6 in
Dec 02 10:12:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v480: 177 pgs: 177 active+clean; 198 MiB data, 985 MiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 107 KiB/s wr, 152 op/s
Dec 02 10:12:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:29.353 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:30 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:12:30 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "format": "json"}]: dispatch
Dec 02 10:12:30 np0005541914.localdomain ceph-mon[301710]: osdmap e197: 6 total, 6 up, 6 in
Dec 02 10:12:30 np0005541914.localdomain ceph-mon[301710]: pgmap v480: 177 pgs: 177 active+clean; 198 MiB data, 985 MiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 107 KiB/s wr, 152 op/s
Dec 02 10:12:30 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e198 e198: 6 total, 6 up, 6 in
Dec 02 10:12:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:30.555 262347 INFO neutron.agent.linux.ip_lib [None req-9dac3193-bf5f-4c2e-8889-0f4528b741ac - - - - - -] Device tapdb728fdc-d8 cannot be used as it has no MAC address
Dec 02 10:12:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:30.611 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:30 np0005541914.localdomain kernel: device tapdb728fdc-d8 entered promiscuous mode
Dec 02 10:12:30 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670350.6210] manager: (tapdb728fdc-d8): new Generic device (/org/freedesktop/NetworkManager/Devices/44)
Dec 02 10:12:30 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:30Z|00217|binding|INFO|Claiming lport db728fdc-d8ab-468d-a1cd-ab731f58d6dc for this chassis.
Dec 02 10:12:30 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:30Z|00218|binding|INFO|db728fdc-d8ab-468d-a1cd-ab731f58d6dc: Claiming unknown
Dec 02 10:12:30 np0005541914.localdomain systemd-udevd[322099]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:12:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:30.624 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:30.629 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcd3979d-613c-4a99-a744-aee0cbcf87d6, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=db728fdc-d8ab-468d-a1cd-ab731f58d6dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:12:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:30.631 159483 INFO neutron.agent.ovn.metadata.agent [-] Port db728fdc-d8ab-468d-a1cd-ab731f58d6dc in datapath ca9eef71-1213-4a2c-90d0-cfc01ce50fc6 bound to our chassis
Dec 02 10:12:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:30.632 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ca9eef71-1213-4a2c-90d0-cfc01ce50fc6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:12:30 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:30.632 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[31401f2e-bfb8-4d85-8725-0bd807121f4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:12:30 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device
Dec 02 10:12:30 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:30Z|00219|binding|INFO|Setting lport db728fdc-d8ab-468d-a1cd-ab731f58d6dc ovn-installed in OVS
Dec 02 10:12:30 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:30Z|00220|binding|INFO|Setting lport db728fdc-d8ab-468d-a1cd-ab731f58d6dc up in Southbound
Dec 02 10:12:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:30.653 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:30 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device
Dec 02 10:12:30 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device
Dec 02 10:12:30 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device
Dec 02 10:12:30 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device
Dec 02 10:12:30 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device
Dec 02 10:12:30 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device
Dec 02 10:12:30 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapdb728fdc-d8: No such device
Dec 02 10:12:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:30.679 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:30.701 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:30 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:30.728 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:30Z, description=, device_id=48efaaba-b00b-49e5-9453-c3674fe08fa7, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a15f10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a152b0>], id=0e034430-6b11-4bae-9254-bdf3f84dc12d, ip_allocation=immediate, mac_address=fa:16:3e:27:a3:1d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:22Z, description=, dns_domain=, id=0ed6501a-31af-475e-83c5-b9d22d72adda, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-1168865774-network, port_security_enabled=True, project_id=a1854cb9cd7e49c4a6a223acc8d74075, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=20837, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2841, status=ACTIVE, subnets=['418123ed-5885-4e16-91c2-f687fe4bb883'], tags=[], tenant_id=a1854cb9cd7e49c4a6a223acc8d74075, updated_at=2025-12-02T10:12:23Z, vlan_transparent=None, network_id=0ed6501a-31af-475e-83c5-b9d22d72adda, port_security_enabled=False, project_id=a1854cb9cd7e49c4a6a223acc8d74075, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2872, status=DOWN, tags=[], tenant_id=a1854cb9cd7e49c4a6a223acc8d74075, updated_at=2025-12-02T10:12:30Z on network 0ed6501a-31af-475e-83c5-b9d22d72adda
Dec 02 10:12:30 np0005541914.localdomain sudo[322129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:12:30 np0005541914.localdomain sudo[322129]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:12:30 np0005541914.localdomain sudo[322129]: pam_unix(sudo:session): session closed for user root
Dec 02 10:12:30 np0005541914.localdomain sudo[322164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:12:30 np0005541914.localdomain sudo[322164]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:12:30 np0005541914.localdomain dnsmasq[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/addn_hosts - 1 addresses
Dec 02 10:12:30 np0005541914.localdomain dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/host
Dec 02 10:12:30 np0005541914.localdomain dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/opts
Dec 02 10:12:30 np0005541914.localdomain podman[322179]: 2025-12-02 10:12:30.986358887 +0000 UTC m=+0.053757352 container kill b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:12:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:31.081 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: osdmap e198: 6 total, 6 up, 6 in
Dec 02 10:12:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v482: 177 pgs: 177 active+clean; 198 MiB data, 985 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 98 KiB/s wr, 140 op/s
Dec 02 10:12:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "auth_id": "bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:12:31 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 02 10:12:31 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:31.398 262347 INFO neutron.agent.dhcp.agent [None req-92fbaf53-e152-41e3-bae3-41ce2726b01b - - - - - -] DHCP configuration for ports {'0e034430-6b11-4bae-9254-bdf3f84dc12d'} is completed
Dec 02 10:12:31 np0005541914.localdomain podman[322257]: 
Dec 02 10:12:31 np0005541914.localdomain podman[322257]: 2025-12-02 10:12:31.490741855 +0000 UTC m=+0.071562689 container create ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:12:31 np0005541914.localdomain sudo[322164]: pam_unix(sudo:session): session closed for user root
Dec 02 10:12:31 np0005541914.localdomain systemd[1]: Started libpod-conmon-ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305.scope.
Dec 02 10:12:31 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:12:31 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787199df7695c6c24c2e87676ed9a156fdbe63f95ead2467b2e5b6b6b47c8524/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:12:31 np0005541914.localdomain podman[322257]: 2025-12-02 10:12:31.45412771 +0000 UTC m=+0.034948544 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:12:31 np0005541914.localdomain podman[322257]: 2025-12-02 10:12:31.563170741 +0000 UTC m=+0.143991575 container init ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:12:31 np0005541914.localdomain podman[322257]: 2025-12-02 10:12:31.56995307 +0000 UTC m=+0.150773904 container start ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 02 10:12:31 np0005541914.localdomain dnsmasq[322288]: started, version 2.85 cachesize 150
Dec 02 10:12:31 np0005541914.localdomain dnsmasq[322288]: DNS service limited to local subnets
Dec 02 10:12:31 np0005541914.localdomain dnsmasq[322288]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:12:31 np0005541914.localdomain dnsmasq[322288]: warning: no upstream servers configured
Dec 02 10:12:31 np0005541914.localdomain dnsmasq-dhcp[322288]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 02 10:12:31 np0005541914.localdomain dnsmasq[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/addn_hosts - 0 addresses
Dec 02 10:12:31 np0005541914.localdomain dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/host
Dec 02 10:12:31 np0005541914.localdomain dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/opts
Dec 02 10:12:31 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:31.627 262347 INFO neutron.agent.dhcp.agent [None req-9dac3193-bf5f-4c2e-8889-0f4528b741ac - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:30Z, description=, device_id=9ad261e0-bab0-4724-94e5-b35ab4156358, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a499d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a495b0>], id=dd3c2d4d-793d-492e-ab43-90dc5d2cfc76, ip_allocation=immediate, mac_address=fa:16:3e:7d:7c:fe, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:28Z, description=, dns_domain=, id=ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2127784479, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47682, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2862, status=ACTIVE, subnets=['7fab66c3-2c3a-4182-8b9f-a90ae9fdebc9'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:29Z, vlan_transparent=None, network_id=ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2874, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:30Z on network ca9eef71-1213-4a2c-90d0-cfc01ce50fc6
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e199 e199: 6 total, 6 up, 6 in
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180,allow rw path=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72,allow rw pool=manila_data namespace=fsvolumens_b9e19d1e-178b-4a98-88b5-d79880cd9496"]} v 0)
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180,allow rw path=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72,allow rw pool=manila_data namespace=fsvolumens_b9e19d1e-178b-4a98-88b5-d79880cd9496"]} : dispatch
Dec 02 10:12:31 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:31.692 262347 INFO neutron.agent.linux.ip_lib [None req-047e92ce-068f-4bb6-860f-517e6bfd269f - - - - - -] Device tap11279470-8a cannot be used as it has no MAC address
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:12:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:31.753 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:31 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:31.759 262347 INFO neutron.agent.dhcp.agent [None req-5c1b0e98-3bd5-4dcd-8b89-7eff9835a2a3 - - - - - -] DHCP configuration for ports {'74817638-3673-4c07-8de2-9aa38992d8f9'} is completed
Dec 02 10:12:31 np0005541914.localdomain kernel: device tap11279470-8a entered promiscuous mode
Dec 02 10:12:31 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670351.7666] manager: (tap11279470-8a): new Generic device (/org/freedesktop/NetworkManager/Devices/45)
Dec 02 10:12:31 np0005541914.localdomain systemd-udevd[322101]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:12:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:31.767 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:31.773 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:31 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 817fc4c9-450f-4b89-a12c-f69158f38a89 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:12:31 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 817fc4c9-450f-4b89-a12c-f69158f38a89 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:12:31 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 817fc4c9-450f-4b89-a12c-f69158f38a89 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:12:31 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:12:31 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap11279470-8a: No such device
Dec 02 10:12:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:31.790 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:31 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap11279470-8a: No such device
Dec 02 10:12:31 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:bob, format:json, prefix:fs subvolume authorize, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, tenant_id:a241a07e4161486091e8de3f95a1d6c6, vol_name:cephfs) < ""
Dec 02 10:12:31 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap11279470-8a: No such device
Dec 02 10:12:31 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap11279470-8a: No such device
Dec 02 10:12:31 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap11279470-8a: No such device
Dec 02 10:12:31 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap11279470-8a: No such device
Dec 02 10:12:31 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap11279470-8a: No such device
Dec 02 10:12:31 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tap11279470-8a: No such device
Dec 02 10:12:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:31.827 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:31.853 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:31 np0005541914.localdomain dnsmasq[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/addn_hosts - 1 addresses
Dec 02 10:12:31 np0005541914.localdomain dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/host
Dec 02 10:12:31 np0005541914.localdomain dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/opts
Dec 02 10:12:31 np0005541914.localdomain podman[322323]: 2025-12-02 10:12:31.869596717 +0000 UTC m=+0.061678646 container kill ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 02 10:12:32 np0005541914.localdomain sudo[322364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:12:32 np0005541914.localdomain sudo[322364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:12:32 np0005541914.localdomain sudo[322364]: pam_unix(sudo:session): session closed for user root
Dec 02 10:12:32 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:32.096 262347 INFO neutron.agent.dhcp.agent [None req-42e6126b-2fe3-4b22-8a95-9fc409c111c5 - - - - - -] DHCP configuration for ports {'dd3c2d4d-793d-492e-ab43-90dc5d2cfc76'} is completed
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: pgmap v482: 177 pgs: 177 active+clean; 198 MiB data, 985 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 98 KiB/s wr, 140 op/s
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "auth_id": "bob", "tenant_id": "a241a07e4161486091e8de3f95a1d6c6", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: osdmap e199: 6 total, 6 up, 6 in
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180,allow rw path=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72,allow rw pool=manila_data namespace=fsvolumens_b9e19d1e-178b-4a98-88b5-d79880cd9496"]} : dispatch
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180,allow rw path=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72,allow rw pool=manila_data namespace=fsvolumens_b9e19d1e-178b-4a98-88b5-d79880cd9496"]} : dispatch
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180,allow rw path=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72,allow rw pool=manila_data namespace=fsvolumens_b9e19d1e-178b-4a98-88b5-d79880cd9496"]}]': finished
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3972691788' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3972691788' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:12:32 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:12:32 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:32.315 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:30Z, description=, device_id=9ad261e0-bab0-4724-94e5-b35ab4156358, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a4c550>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a4caf0>], id=dd3c2d4d-793d-492e-ab43-90dc5d2cfc76, ip_allocation=immediate, mac_address=fa:16:3e:7d:7c:fe, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:28Z, description=, dns_domain=, id=ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-2127784479, port_security_enabled=True, project_id=8eea084241c14c5d9a6cc0d912041a21, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47682, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2862, status=ACTIVE, subnets=['7fab66c3-2c3a-4182-8b9f-a90ae9fdebc9'], tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:29Z, vlan_transparent=None, network_id=ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, port_security_enabled=False, project_id=8eea084241c14c5d9a6cc0d912041a21, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2874, status=DOWN, tags=[], tenant_id=8eea084241c14c5d9a6cc0d912041a21, updated_at=2025-12-02T10:12:30Z on network ca9eef71-1213-4a2c-90d0-cfc01ce50fc6
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:12:32 np0005541914.localdomain podman[322421]: 2025-12-02 10:12:32.562523948 +0000 UTC m=+0.109243288 container kill ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:12:32 np0005541914.localdomain dnsmasq[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/addn_hosts - 1 addresses
Dec 02 10:12:32 np0005541914.localdomain dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/host
Dec 02 10:12:32 np0005541914.localdomain dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/opts
Dec 02 10:12:32 np0005541914.localdomain podman[322448]: 
Dec 02 10:12:32 np0005541914.localdomain podman[322448]: 2025-12-02 10:12:32.598999959 +0000 UTC m=+0.076596805 container create b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 02 10:12:32 np0005541914.localdomain systemd[1]: Started libpod-conmon-b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d.scope.
Dec 02 10:12:32 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:12:32 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/761371641b9c35bd92f4d96d2b80b5565ec48e80d1083d2493cf238bafa2efcc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:12:32 np0005541914.localdomain podman[322448]: 2025-12-02 10:12:32.567772319 +0000 UTC m=+0.045369195 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:12:32 np0005541914.localdomain podman[322448]: 2025-12-02 10:12:32.672048763 +0000 UTC m=+0.149645669 container init b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 02 10:12:32 np0005541914.localdomain podman[322448]: 2025-12-02 10:12:32.682947618 +0000 UTC m=+0.160544494 container start b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:12:32 np0005541914.localdomain dnsmasq[322473]: started, version 2.85 cachesize 150
Dec 02 10:12:32 np0005541914.localdomain dnsmasq[322473]: DNS service limited to local subnets
Dec 02 10:12:32 np0005541914.localdomain dnsmasq[322473]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:12:32 np0005541914.localdomain dnsmasq[322473]: warning: no upstream servers configured
Dec 02 10:12:32 np0005541914.localdomain dnsmasq-dhcp[322473]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Dec 02 10:12:32 np0005541914.localdomain dnsmasq[322473]: read /var/lib/neutron/dhcp/e84d56b5-6863-43e4-89bd-1291a3d50373/addn_hosts - 0 addresses
Dec 02 10:12:32 np0005541914.localdomain dnsmasq-dhcp[322473]: read /var/lib/neutron/dhcp/e84d56b5-6863-43e4-89bd-1291a3d50373/host
Dec 02 10:12:32 np0005541914.localdomain dnsmasq-dhcp[322473]: read /var/lib/neutron/dhcp/e84d56b5-6863-43e4-89bd-1291a3d50373/opts
Dec 02 10:12:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e200 e200: 6 total, 6 up, 6 in
Dec 02 10:12:32 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:32.734 262347 INFO neutron.agent.dhcp.agent [None req-894111e8-1c3c-4b4c-b907-32a7898efa78 - - - - - -] DHCP configuration for ports {'dd3c2d4d-793d-492e-ab43-90dc5d2cfc76'} is completed
Dec 02 10:12:32 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:32.853 262347 INFO neutron.agent.dhcp.agent [None req-a9328db3-b464-4950-b4cf-099159acb6b9 - - - - - -] DHCP configuration for ports {'48d4a17b-d013-47e9-85fa-29ec2f97b779'} is completed
Dec 02 10:12:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:32.870 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:32 np0005541914.localdomain dnsmasq[322473]: exiting on receipt of SIGTERM
Dec 02 10:12:32 np0005541914.localdomain podman[322490]: 2025-12-02 10:12:32.999151824 +0000 UTC m=+0.059012424 container kill b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:12:33 np0005541914.localdomain systemd[1]: libpod-b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d.scope: Deactivated successfully.
Dec 02 10:12:33 np0005541914.localdomain podman[322504]: 2025-12-02 10:12:33.070577528 +0000 UTC m=+0.060545990 container died b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:12:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:12:33 np0005541914.localdomain podman[322504]: 2025-12-02 10:12:33.108388421 +0000 UTC m=+0.098356833 container cleanup b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:12:33 np0005541914.localdomain systemd[1]: libpod-conmon-b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d.scope: Deactivated successfully.
Dec 02 10:12:33 np0005541914.localdomain podman[322511]: 2025-12-02 10:12:33.159298225 +0000 UTC m=+0.133247046 container remove b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-e84d56b5-6863-43e4-89bd-1291a3d50373, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 10:12:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:33.171 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:33 np0005541914.localdomain kernel: device tap11279470-8a left promiscuous mode
Dec 02 10:12:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:33.187 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v485: 177 pgs: 177 active+clean; 198 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 109 KiB/s rd, 69 KiB/s wr, 157 op/s
Dec 02 10:12:33 np0005541914.localdomain podman[322532]: 2025-12-02 10:12:33.20373053 +0000 UTC m=+0.088334976 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 02 10:12:33 np0005541914.localdomain podman[322532]: 2025-12-02 10:12:33.213326595 +0000 UTC m=+0.097931021 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 02 10:12:33 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:12:33 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:12:33 np0005541914.localdomain ceph-mon[301710]: osdmap e200: 6 total, 6 up, 6 in
Dec 02 10:12:33 np0005541914.localdomain systemd[1]: tmp-crun.dRmwHR.mount: Deactivated successfully.
Dec 02 10:12:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-761371641b9c35bd92f4d96d2b80b5565ec48e80d1083d2493cf238bafa2efcc-merged.mount: Deactivated successfully.
Dec 02 10:12:33 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b8186d2abb81bf620d4df128e6c22be46300f5f106549ac2d27767d8ea22d86d-userdata-shm.mount: Deactivated successfully.
Dec 02 10:12:33 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:33.608 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:30Z, description=, device_id=48efaaba-b00b-49e5-9453-c3674fe08fa7, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a688b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a688e0>], id=0e034430-6b11-4bae-9254-bdf3f84dc12d, ip_allocation=immediate, mac_address=fa:16:3e:27:a3:1d, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:12:22Z, description=, dns_domain=, id=0ed6501a-31af-475e-83c5-b9d22d72adda, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PrometheusGabbiTest-1168865774-network, port_security_enabled=True, project_id=a1854cb9cd7e49c4a6a223acc8d74075, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=20837, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2841, status=ACTIVE, subnets=['418123ed-5885-4e16-91c2-f687fe4bb883'], tags=[], tenant_id=a1854cb9cd7e49c4a6a223acc8d74075, updated_at=2025-12-02T10:12:23Z, vlan_transparent=None, network_id=0ed6501a-31af-475e-83c5-b9d22d72adda, port_security_enabled=False, project_id=a1854cb9cd7e49c4a6a223acc8d74075, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2872, status=DOWN, tags=[], tenant_id=a1854cb9cd7e49c4a6a223acc8d74075, updated_at=2025-12-02T10:12:30Z on network 0ed6501a-31af-475e-83c5-b9d22d72adda
Dec 02 10:12:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:12:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:12:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:33.640 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:33 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:33.655 262347 INFO neutron.agent.dhcp.agent [None req-d80070d0-e1b4-4c42-bba6-698f47362b1a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:12:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:12:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 160387 "" "Go-http-client/1.1"
Dec 02 10:12:33 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:33.656 262347 INFO neutron.agent.dhcp.agent [None req-d80070d0-e1b4-4c42-bba6-698f47362b1a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:12:33 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2de84d56b5\x2d6863\x2d43e4\x2d89bd\x2d1291a3d50373.mount: Deactivated successfully.
Dec 02 10:12:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:12:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20155 "" "Go-http-client/1.1"
Dec 02 10:12:33 np0005541914.localdomain podman[322574]: 2025-12-02 10:12:33.846713987 +0000 UTC m=+0.054392782 container kill b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:12:33 np0005541914.localdomain systemd[1]: tmp-crun.c55JLf.mount: Deactivated successfully.
Dec 02 10:12:33 np0005541914.localdomain dnsmasq[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/addn_hosts - 1 addresses
Dec 02 10:12:33 np0005541914.localdomain dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/host
Dec 02 10:12:33 np0005541914.localdomain dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/opts
Dec 02 10:12:34 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:34.095 262347 INFO neutron.agent.dhcp.agent [None req-60d0e473-6437-448e-9d6a-619b2132d567 - - - - - -] DHCP configuration for ports {'0e034430-6b11-4bae-9254-bdf3f84dc12d'} is completed
Dec 02 10:12:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "auth_id": "bob", "format": "json"}]: dispatch
Dec 02 10:12:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < ""
Dec 02 10:12:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 02 10:12:34 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 02 10:12:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72"]} v 0)
Dec 02 10:12:34 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72"]} : dispatch
Dec 02 10:12:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < ""
Dec 02 10:12:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "auth_id": "bob", "format": "json"}]: dispatch
Dec 02 10:12:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < ""
Dec 02 10:12:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496/13c658d8-8e0f-421c-9526-6f9449a5852e
Dec 02 10:12:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:12:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < ""
Dec 02 10:12:34 np0005541914.localdomain ceph-mon[301710]: pgmap v485: 177 pgs: 177 active+clean; 198 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 109 KiB/s rd, 69 KiB/s wr, 157 op/s
Dec 02 10:12:34 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4056498242' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:12:34 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4056498242' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:12:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 02 10:12:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72"]} : dispatch
Dec 02 10:12:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72"]} : dispatch
Dec 02 10:12:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180", "osd", "allow rw pool=manila_data namespace=fsvolumens_4951f94c-f3a4-4170-9869-8238a9dc7b72"]}]': finished
Dec 02 10:12:35 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:12:35.025 2 INFO neutron.agent.securitygroups_rpc [None req-c29e5bff-b968-4a83-beb0-2b46e231db68 27e8ee5045c2430583000f8d62f6e4f1 096ffa0a51b143039159efc232ec547a - - default default] Security group member updated ['0a7d83ca-acbf-4932-884e-9eff3b0bc0ff']
Dec 02 10:12:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:35.101 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:35.101 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:12:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:35.102 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:12:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:35.180 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v486: 177 pgs: 177 active+clean; 198 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 47 KiB/s wr, 107 op/s
Dec 02 10:12:35 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "auth_id": "bob", "format": "json"}]: dispatch
Dec 02 10:12:35 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "auth_id": "bob", "format": "json"}]: dispatch
Dec 02 10:12:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:36.083 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:36 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:36.104 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:12:36 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:12:36.287 2 INFO neutron.agent.securitygroups_rpc [None req-11289385-b413-4a50-89a7-e0a67d214908 27e8ee5045c2430583000f8d62f6e4f1 096ffa0a51b143039159efc232ec547a - - default default] Security group member updated ['0a7d83ca-acbf-4932-884e-9eff3b0bc0ff']
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e201 e201: 6 total, 6 up, 6 in
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.646728) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356646775, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2850, "num_deletes": 268, "total_data_size": 4650827, "memory_usage": 4791072, "flush_reason": "Manual Compaction"}
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356666737, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 3040370, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25302, "largest_seqno": 28146, "table_properties": {"data_size": 3028743, "index_size": 7492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27700, "raw_average_key_size": 22, "raw_value_size": 3004285, "raw_average_value_size": 2450, "num_data_blocks": 314, "num_entries": 1226, "num_filter_entries": 1226, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670242, "oldest_key_time": 1764670242, "file_creation_time": 1764670356, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 20069 microseconds, and 7811 cpu microseconds.
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.666793) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 3040370 bytes OK
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.666821) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.668807) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.668831) EVENT_LOG_v1 {"time_micros": 1764670356668824, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.668856) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4637293, prev total WAL file size 4637293, number of live WAL files 2.
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.670093) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end)
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(2969KB)], [39(16MB)]
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356670231, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 20006249, "oldest_snapshot_seqno": -1}
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 13620 keys, 18490102 bytes, temperature: kUnknown
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356778287, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 18490102, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18411103, "index_size": 43826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34117, "raw_key_size": 363814, "raw_average_key_size": 26, "raw_value_size": 18178196, "raw_average_value_size": 1334, "num_data_blocks": 1659, "num_entries": 13620, "num_filter_entries": 13620, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670356, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.778678) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 18490102 bytes
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.780186) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.9 rd, 170.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 16.2 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(12.7) write-amplify(6.1) OK, records in: 14177, records dropped: 557 output_compression: NoCompression
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.780230) EVENT_LOG_v1 {"time_micros": 1764670356780201, "job": 22, "event": "compaction_finished", "compaction_time_micros": 108180, "compaction_time_cpu_micros": 55933, "output_level": 6, "num_output_files": 1, "total_output_size": 18490102, "num_input_records": 14177, "num_output_records": 13620, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356780964, "job": 22, "event": "table_file_deletion", "file_number": 41}
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670356783132, "job": 22, "event": "table_file_deletion", "file_number": 39}
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.669997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.783250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.783257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.783260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.783263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:12:36.783266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: pgmap v486: 177 pgs: 177 active+clean; 198 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 47 KiB/s wr, 107 op/s
Dec 02 10:12:36 np0005541914.localdomain ceph-mon[301710]: osdmap e201: 6 total, 6 up, 6 in
Dec 02 10:12:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:12:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v488: 177 pgs: 177 active+clean; 198 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 47 KiB/s wr, 107 op/s
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "format": "json"}]: dispatch
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:12:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:12:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e202 e202: 6 total, 6 up, 6 in
Dec 02 10:12:37 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "bob", "format": "json"}]: dispatch
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:37.881 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.bob", "format": "json"} v 0)
Dec 02 10:12:37 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 02 10:12:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.bob"} v 0)
Dec 02 10:12:37 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:bob, format:json, prefix:fs subvolume deauthorize, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "bob", "format": "json"}]: dispatch
Dec 02 10:12:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:38 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=bob, client_metadata.root=/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72/9297652e-e843-4300-a77e-137058f03180
Dec 02 10:12:38 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:12:38 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:bob, format:json, prefix:fs subvolume evict, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:38.642 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:12:38 np0005541914.localdomain ceph-mon[301710]: pgmap v488: 177 pgs: 177 active+clean; 198 MiB data, 986 MiB used, 41 GiB / 42 GiB avail; 74 KiB/s rd, 47 KiB/s wr, 107 op/s
Dec 02 10:12:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "format": "json"}]: dispatch
Dec 02 10:12:38 np0005541914.localdomain ceph-mon[301710]: osdmap e202: 6 total, 6 up, 6 in
Dec 02 10:12:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "bob", "format": "json"}]: dispatch
Dec 02 10:12:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Dec 02 10:12:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Dec 02 10:12:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Dec 02 10:12:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Dec 02 10:12:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "auth_id": "bob", "format": "json"}]: dispatch
Dec 02 10:12:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v490: 177 pgs: 177 active+clean; 199 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 92 KiB/s wr, 144 op/s
Dec 02 10:12:39 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:12:39 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/681489516' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:12:39 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/681489516' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:12:39 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:39.948 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:12:39Z, description=, device_id=7ab6a068-34a0-43a2-9f3f-037d65a744e4, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c31850>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c31f40>], id=f4bff906-1f9b-4162-bd71-18028fa4ee89, ip_allocation=immediate, mac_address=fa:16:3e:63:08:8f, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2932, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:12:39Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:12:40 np0005541914.localdomain podman[322612]: 2025-12-02 10:12:40.169174593 +0000 UTC m=+0.057794977 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 02 10:12:40 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 02 10:12:40 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:12:40 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:12:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e72625fe-e204-4902-a792-e35cd0c49318", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:12:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < ""
Dec 02 10:12:40 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:40.466 262347 INFO neutron.agent.dhcp.agent [None req-59443a53-c9f6-41ba-b868-92507d8c3f7d - - - - - -] DHCP configuration for ports {'f4bff906-1f9b-4162-bd71-18028fa4ee89'} is completed
Dec 02 10:12:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:40.780 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e72625fe-e204-4902-a792-e35cd0c49318/.meta.tmp'
Dec 02 10:12:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e72625fe-e204-4902-a792-e35cd0c49318/.meta.tmp' to config b'/volumes/_nogroup/e72625fe-e204-4902-a792-e35cd0c49318/.meta'
Dec 02 10:12:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < ""
Dec 02 10:12:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e72625fe-e204-4902-a792-e35cd0c49318", "format": "json"}]: dispatch
Dec 02 10:12:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < ""
Dec 02 10:12:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < ""
Dec 02 10:12:40 np0005541914.localdomain ceph-mon[301710]: pgmap v490: 177 pgs: 177 active+clean; 199 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 97 KiB/s rd, 92 KiB/s wr, 144 op/s
Dec 02 10:12:40 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:12:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:41.084 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v491: 177 pgs: 177 active+clean; 199 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 39 KiB/s wr, 36 op/s
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "format": "json"}]: dispatch
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:12:41 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:12:41.365+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b9e19d1e-178b-4a98-88b5-d79880cd9496' of type subvolume
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b9e19d1e-178b-4a98-88b5-d79880cd9496' of type subvolume
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < ""
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b9e19d1e-178b-4a98-88b5-d79880cd9496'' moved to trashcan
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b9e19d1e-178b-4a98-88b5-d79880cd9496, vol_name:cephfs) < ""
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp'
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta'
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "format": "json"}]: dispatch
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:41 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e72625fe-e204-4902-a792-e35cd0c49318", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:12:41 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e72625fe-e204-4902-a792-e35cd0c49318", "format": "json"}]: dispatch
Dec 02 10:12:41 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:12:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:41.948 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:12:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:12:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:12:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:12:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:12:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:12:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:12:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:12:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:12:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:12:42 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 02 10:12:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:12:43 np0005541914.localdomain ceph-mon[301710]: pgmap v491: 177 pgs: 177 active+clean; 199 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 39 KiB/s wr, 36 op/s
Dec 02 10:12:43 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "format": "json"}]: dispatch
Dec 02 10:12:43 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b9e19d1e-178b-4a98-88b5-d79880cd9496", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:43 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:12:43 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "format": "json"}]: dispatch
Dec 02 10:12:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v492: 177 pgs: 177 active+clean; 443 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 88 KiB/s rd, 31 MiB/s wr, 143 op/s
Dec 02 10:12:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "e72625fe-e204-4902-a792-e35cd0c49318", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 02 10:12:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < ""
Dec 02 10:12:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:43.646 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < ""
Dec 02 10:12:44 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:12:44.503 2 INFO neutron.agent.securitygroups_rpc [None req-4efd9dba-690c-4981-8a45-b91486e5b5eb 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:12:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "format": "json"}]: dispatch
Dec 02 10:12:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:12:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:12:44 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4951f94c-f3a4-4170-9869-8238a9dc7b72' of type subvolume
Dec 02 10:12:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:12:44.618+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4951f94c-f3a4-4170-9869-8238a9dc7b72' of type subvolume
Dec 02 10:12:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4951f94c-f3a4-4170-9869-8238a9dc7b72'' moved to trashcan
Dec 02 10:12:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:12:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4951f94c-f3a4-4170-9869-8238a9dc7b72, vol_name:cephfs) < ""
Dec 02 10:12:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:44.772 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "f755cd55-747e-4403-a245-43cbf9abc4bb", "format": "json"}]: dispatch
Dec 02 10:12:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f755cd55-747e-4403-a245-43cbf9abc4bb, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:f755cd55-747e-4403-a245-43cbf9abc4bb, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:45.009 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:45 np0005541914.localdomain ceph-mon[301710]: pgmap v492: 177 pgs: 177 active+clean; 443 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 88 KiB/s rd, 31 MiB/s wr, 143 op/s
Dec 02 10:12:45 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "e72625fe-e204-4902-a792-e35cd0c49318", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 02 10:12:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v493: 177 pgs: 177 active+clean; 443 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 82 KiB/s rd, 29 MiB/s wr, 133 op/s
Dec 02 10:12:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:46.107 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:46 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "format": "json"}]: dispatch
Dec 02 10:12:46 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4951f94c-f3a4-4170-9869-8238a9dc7b72", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:46 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "f755cd55-747e-4403-a245-43cbf9abc4bb", "format": "json"}]: dispatch
Dec 02 10:12:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e72625fe-e204-4902-a792-e35cd0c49318", "format": "json"}]: dispatch
Dec 02 10:12:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e72625fe-e204-4902-a792-e35cd0c49318, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:12:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e72625fe-e204-4902-a792-e35cd0c49318, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:12:46 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e72625fe-e204-4902-a792-e35cd0c49318' of type subvolume
Dec 02 10:12:46 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:12:46.913+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e72625fe-e204-4902-a792-e35cd0c49318' of type subvolume
Dec 02 10:12:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e72625fe-e204-4902-a792-e35cd0c49318", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < ""
Dec 02 10:12:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e72625fe-e204-4902-a792-e35cd0c49318'' moved to trashcan
Dec 02 10:12:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:12:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e72625fe-e204-4902-a792-e35cd0c49318, vol_name:cephfs) < ""
Dec 02 10:12:47 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:12:47.116 2 INFO neutron.agent.securitygroups_rpc [None req-6b684e17-61a9-4f64-b7d6-8863ef06b405 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:12:47 np0005541914.localdomain ceph-mon[301710]: pgmap v493: 177 pgs: 177 active+clean; 443 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 82 KiB/s rd, 29 MiB/s wr, 133 op/s
Dec 02 10:12:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e72625fe-e204-4902-a792-e35cd0c49318", "format": "json"}]: dispatch
Dec 02 10:12:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e72625fe-e204-4902-a792-e35cd0c49318", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v494: 177 pgs: 177 active+clean; 443 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 70 KiB/s rd, 24 MiB/s wr, 114 op/s
Dec 02 10:12:47 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:12:47.328 2 INFO neutron.agent.securitygroups_rpc [None req-6b684e17-61a9-4f64-b7d6-8863ef06b405 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:12:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:12:47 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:12:47.809 2 INFO neutron.agent.securitygroups_rpc [None req-2a241596-c202-4fbe-a6b6-7dd1095be821 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:12:48 np0005541914.localdomain ceph-mon[301710]: pgmap v494: 177 pgs: 177 active+clean; 443 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 70 KiB/s rd, 24 MiB/s wr, 114 op/s
Dec 02 10:12:48 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "9dc12954-5289-4f2d-a2ed-7e457be2a4e6", "format": "json"}]: dispatch
Dec 02 10:12:48 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9dc12954-5289-4f2d-a2ed-7e457be2a4e6, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:48 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:9dc12954-5289-4f2d-a2ed-7e457be2a4e6, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:48 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:12:48.422 2 INFO neutron.agent.securitygroups_rpc [None req-2a4c2d24-b746-4ff5-abcb-d93c9952342d 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:12:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:48.649 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v495: 177 pgs: 177 active+clean; 839 MiB data, 2.8 GiB used, 39 GiB / 42 GiB avail; 108 KiB/s rd, 56 MiB/s wr, 185 op/s
Dec 02 10:12:49 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "9dc12954-5289-4f2d-a2ed-7e457be2a4e6", "format": "json"}]: dispatch
Dec 02 10:12:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:49.987 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:12:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < ""
Dec 02 10:12:50 np0005541914.localdomain ceph-mon[301710]: pgmap v495: 177 pgs: 177 active+clean; 839 MiB data, 2.8 GiB used, 39 GiB / 42 GiB avail; 108 KiB/s rd, 56 MiB/s wr, 185 op/s
Dec 02 10:12:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/977a2594-0007-4fab-a7e2-b6bc2dee3113/.meta.tmp'
Dec 02 10:12:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/977a2594-0007-4fab-a7e2-b6bc2dee3113/.meta.tmp' to config b'/volumes/_nogroup/977a2594-0007-4fab-a7e2-b6bc2dee3113/.meta'
Dec 02 10:12:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < ""
Dec 02 10:12:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "format": "json"}]: dispatch
Dec 02 10:12:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < ""
Dec 02 10:12:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < ""
Dec 02 10:12:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:12:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:12:50 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:12:51 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:12:51 np0005541914.localdomain podman[322635]: 2025-12-02 10:12:51.109776008 +0000 UTC m=+0.075026877 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:12:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:51.146 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:51 np0005541914.localdomain podman[322642]: 2025-12-02 10:12:51.170327178 +0000 UTC m=+0.135673800 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 02 10:12:51 np0005541914.localdomain podman[322635]: 2025-12-02 10:12:51.175910959 +0000 UTC m=+0.141161848 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:12:51 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:12:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v496: 177 pgs: 177 active+clean; 839 MiB data, 2.8 GiB used, 39 GiB / 42 GiB avail; 86 KiB/s rd, 53 MiB/s wr, 151 op/s
Dec 02 10:12:51 np0005541914.localdomain podman[322634]: 2025-12-02 10:12:51.222611905 +0000 UTC m=+0.221639491 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 02 10:12:51 np0005541914.localdomain podman[322642]: 2025-12-02 10:12:51.25111997 +0000 UTC m=+0.216466672 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:12:51 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:12:51 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:12:51 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "format": "json"}]: dispatch
Dec 02 10:12:51 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:12:51 np0005541914.localdomain podman[322636]: 2025-12-02 10:12:51.3318041 +0000 UTC m=+0.311120051 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Dec 02 10:12:51 np0005541914.localdomain podman[322634]: 2025-12-02 10:12:51.356802818 +0000 UTC m=+0.355830494 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec 02 10:12:51 np0005541914.localdomain podman[322636]: 2025-12-02 10:12:51.365901077 +0000 UTC m=+0.345217048 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:12:51 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:12:51 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:12:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "9dc12954-5289-4f2d-a2ed-7e457be2a4e6_0d6f16e3-0b37-49c0-9f62-73be706c6c58", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dc12954-5289-4f2d-a2ed-7e457be2a4e6_0d6f16e3-0b37-49c0-9f62-73be706c6c58, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp'
Dec 02 10:12:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta'
Dec 02 10:12:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dc12954-5289-4f2d-a2ed-7e457be2a4e6_0d6f16e3-0b37-49c0-9f62-73be706c6c58, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "9dc12954-5289-4f2d-a2ed-7e457be2a4e6", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dc12954-5289-4f2d-a2ed-7e457be2a4e6, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp'
Dec 02 10:12:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta'
Dec 02 10:12:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:9dc12954-5289-4f2d-a2ed-7e457be2a4e6, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:52.334 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:52 np0005541914.localdomain ceph-mon[301710]: pgmap v496: 177 pgs: 177 active+clean; 839 MiB data, 2.8 GiB used, 39 GiB / 42 GiB avail; 86 KiB/s rd, 53 MiB/s wr, 151 op/s
Dec 02 10:12:52 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2286031429' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:12:52 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2286031429' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:12:52 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "9dc12954-5289-4f2d-a2ed-7e457be2a4e6_0d6f16e3-0b37-49c0-9f62-73be706c6c58", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:52 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "9dc12954-5289-4f2d-a2ed-7e457be2a4e6", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:12:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e203 e203: 6 total, 6 up, 6 in
Dec 02 10:12:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v498: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 122 KiB/s rd, 80 MiB/s wr, 215 op/s
Dec 02 10:12:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 02 10:12:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < ""
Dec 02 10:12:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < ""
Dec 02 10:12:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:53.679 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:53 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e204 e204: 6 total, 6 up, 6 in
Dec 02 10:12:53 np0005541914.localdomain ceph-mon[301710]: osdmap e203: 6 total, 6 up, 6 in
Dec 02 10:12:53 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:12:53.791 2 INFO neutron.agent.securitygroups_rpc [None req-099086ea-fe05-4147-b538-a150ea38436a 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:12:54 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e205 e205: 6 total, 6 up, 6 in
Dec 02 10:12:54 np0005541914.localdomain ceph-mon[301710]: pgmap v498: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 122 KiB/s rd, 80 MiB/s wr, 215 op/s
Dec 02 10:12:54 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 02 10:12:54 np0005541914.localdomain ceph-mon[301710]: osdmap e204: 6 total, 6 up, 6 in
Dec 02 10:12:54 np0005541914.localdomain ceph-mon[301710]: osdmap e205: 6 total, 6 up, 6 in
Dec 02 10:12:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "f755cd55-747e-4403-a245-43cbf9abc4bb_c1048806-17d0-47ca-8ee7-cf284ee33136", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f755cd55-747e-4403-a245-43cbf9abc4bb_c1048806-17d0-47ca-8ee7-cf284ee33136, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp'
Dec 02 10:12:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta'
Dec 02 10:12:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f755cd55-747e-4403-a245-43cbf9abc4bb_c1048806-17d0-47ca-8ee7-cf284ee33136, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "f755cd55-747e-4403-a245-43cbf9abc4bb", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f755cd55-747e-4403-a245-43cbf9abc4bb, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:55 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:12:55.084 2 INFO neutron.agent.securitygroups_rpc [None req-51b03213-9479-4e3e-bbd0-ea81e004b8c6 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:12:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp'
Dec 02 10:12:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta.tmp' to config b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879/.meta'
Dec 02 10:12:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:f755cd55-747e-4403-a245-43cbf9abc4bb, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v501: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 118 KiB/s rd, 67 MiB/s wr, 199 op/s
Dec 02 10:12:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:55.587 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:55 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "f755cd55-747e-4403-a245-43cbf9abc4bb_c1048806-17d0-47ca-8ee7-cf284ee33136", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:55 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e206 e206: 6 total, 6 up, 6 in
Dec 02 10:12:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:56.147 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:56 np0005541914.localdomain dnsmasq[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/addn_hosts - 0 addresses
Dec 02 10:12:56 np0005541914.localdomain podman[322733]: 2025-12-02 10:12:56.250628078 +0000 UTC m=+0.064252035 container kill ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 02 10:12:56 np0005541914.localdomain dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/host
Dec 02 10:12:56 np0005541914.localdomain dnsmasq-dhcp[322288]: read /var/lib/neutron/dhcp/ca9eef71-1213-4a2c-90d0-cfc01ce50fc6/opts
Dec 02 10:12:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:56.420 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:56 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:56Z|00221|binding|INFO|Releasing lport db728fdc-d8ab-468d-a1cd-ab731f58d6dc from this chassis (sb_readonly=0)
Dec 02 10:12:56 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:56Z|00222|binding|INFO|Setting lport db728fdc-d8ab-468d-a1cd-ab731f58d6dc down in Southbound
Dec 02 10:12:56 np0005541914.localdomain kernel: device tapdb728fdc-d8 left promiscuous mode
Dec 02 10:12:56 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:56.430 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8eea084241c14c5d9a6cc0d912041a21', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcd3979d-613c-4a99-a744-aee0cbcf87d6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=db728fdc-d8ab-468d-a1cd-ab731f58d6dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:12:56 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:56.432 159483 INFO neutron.agent.ovn.metadata.agent [-] Port db728fdc-d8ab-468d-a1cd-ab731f58d6dc in datapath ca9eef71-1213-4a2c-90d0-cfc01ce50fc6 unbound from our chassis
Dec 02 10:12:56 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:56.434 159483 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ca9eef71-1213-4a2c-90d0-cfc01ce50fc6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Dec 02 10:12:56 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:56.435 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[7fea05c5-7125-4c1b-9fcc-08909b7da4f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:12:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:56.448 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "format": "json"}]: dispatch
Dec 02 10:12:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:12:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:12:56 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '977a2594-0007-4fab-a7e2-b6bc2dee3113' of type subvolume
Dec 02 10:12:56 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:12:56.655+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '977a2594-0007-4fab-a7e2-b6bc2dee3113' of type subvolume
Dec 02 10:12:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < ""
Dec 02 10:12:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/977a2594-0007-4fab-a7e2-b6bc2dee3113'' moved to trashcan
Dec 02 10:12:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:12:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:977a2594-0007-4fab-a7e2-b6bc2dee3113, vol_name:cephfs) < ""
Dec 02 10:12:56 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e207 e207: 6 total, 6 up, 6 in
Dec 02 10:12:56 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "snap_name": "f755cd55-747e-4403-a245-43cbf9abc4bb", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:56 np0005541914.localdomain ceph-mon[301710]: pgmap v501: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail; 118 KiB/s rd, 67 MiB/s wr, 199 op/s
Dec 02 10:12:56 np0005541914.localdomain ceph-mon[301710]: osdmap e206: 6 total, 6 up, 6 in
Dec 02 10:12:56 np0005541914.localdomain dnsmasq[322288]: exiting on receipt of SIGTERM
Dec 02 10:12:56 np0005541914.localdomain podman[322771]: 2025-12-02 10:12:56.819929091 +0000 UTC m=+0.061231283 container kill ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 02 10:12:56 np0005541914.localdomain systemd[1]: libpod-ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305.scope: Deactivated successfully.
Dec 02 10:12:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:12:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:12:56 np0005541914.localdomain podman[322787]: 2025-12-02 10:12:56.903053496 +0000 UTC m=+0.060932924 container died ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 02 10:12:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305-userdata-shm.mount: Deactivated successfully.
Dec 02 10:12:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-787199df7695c6c24c2e87676ed9a156fdbe63f95ead2467b2e5b6b6b47c8524-merged.mount: Deactivated successfully.
Dec 02 10:12:56 np0005541914.localdomain podman[322787]: 2025-12-02 10:12:56.944196969 +0000 UTC m=+0.102076317 container remove ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ca9eef71-1213-4a2c-90d0-cfc01ce50fc6, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:12:56 np0005541914.localdomain systemd[1]: libpod-conmon-ec7424855634084bfd143cc74d1116dc79577efba21bda6aebc77a81797ba305.scope: Deactivated successfully.
Dec 02 10:12:56 np0005541914.localdomain podman[322793]: 2025-12-02 10:12:56.984847979 +0000 UTC m=+0.138866649 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:12:57 np0005541914.localdomain podman[322795]: 2025-12-02 10:12:57.048605268 +0000 UTC m=+0.197066417 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, config_id=edpm, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64)
Dec 02 10:12:57 np0005541914.localdomain podman[322793]: 2025-12-02 10:12:57.074407451 +0000 UTC m=+0.228426181 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:12:57 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:12:57 np0005541914.localdomain podman[322795]: 2025-12-02 10:12:57.091936379 +0000 UTC m=+0.240397588 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_id=edpm, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc.)
Dec 02 10:12:57 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:12:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v504: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail
Dec 02 10:12:57 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2dca9eef71\x2d1213\x2d4a2c\x2d90d0\x2dcfc01ce50fc6.mount: Deactivated successfully.
Dec 02 10:12:57 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:57.269 262347 INFO neutron.agent.dhcp.agent [None req-3522d6ba-0fd8-426b-b188-90d535cb028c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:12:57 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:57.270 262347 INFO neutron.agent.dhcp.agent [None req-3522d6ba-0fd8-426b-b188-90d535cb028c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:12:57 np0005541914.localdomain podman[322871]: 2025-12-02 10:12:57.385562881 +0000 UTC m=+0.054432084 container kill b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 02 10:12:57 np0005541914.localdomain dnsmasq[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/addn_hosts - 0 addresses
Dec 02 10:12:57 np0005541914.localdomain dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/host
Dec 02 10:12:57 np0005541914.localdomain dnsmasq-dhcp[321974]: read /var/lib/neutron/dhcp/0ed6501a-31af-475e-83c5-b9d22d72adda/opts
Dec 02 10:12:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:12:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:57.656 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:57 np0005541914.localdomain kernel: device tap2f12f501-39 left promiscuous mode
Dec 02 10:12:57 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:57Z|00223|binding|INFO|Releasing lport 2f12f501-3942-418c-89e6-d03f08b5b903 from this chassis (sb_readonly=0)
Dec 02 10:12:57 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:12:57Z|00224|binding|INFO|Setting lport 2f12f501-3942-418c-89e6-d03f08b5b903 down in Southbound
Dec 02 10:12:57 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:57.665 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-0ed6501a-31af-475e-83c5-b9d22d72adda', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0ed6501a-31af-475e-83c5-b9d22d72adda', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a1854cb9cd7e49c4a6a223acc8d74075', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=290a207a-1202-4b25-ae81-ba8a96163204, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=2f12f501-3942-418c-89e6-d03f08b5b903) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:12:57 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:57.667 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 2f12f501-3942-418c-89e6-d03f08b5b903 in datapath 0ed6501a-31af-475e-83c5-b9d22d72adda unbound from our chassis
Dec 02 10:12:57 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:57.670 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0ed6501a-31af-475e-83c5-b9d22d72adda, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:12:57 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:12:57.671 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4b6e6cd0-cc3c-4d28-b1cf-25ddf87c759e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:12:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:57.682 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:57 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:12:57.755 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:12:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e208 e208: 6 total, 6 up, 6 in
Dec 02 10:12:57 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "format": "json"}]: dispatch
Dec 02 10:12:57 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "977a2594-0007-4fab-a7e2-b6bc2dee3113", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:57 np0005541914.localdomain ceph-mon[301710]: osdmap e207: 6 total, 6 up, 6 in
Dec 02 10:12:57 np0005541914.localdomain ceph-mon[301710]: osdmap e208: 6 total, 6 up, 6 in
Dec 02 10:12:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "format": "json"}]: dispatch
Dec 02 10:12:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8115277a-c4bb-4c47-9857-029dcd8c9879, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:12:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8115277a-c4bb-4c47-9857-029dcd8c9879, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:12:58 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8115277a-c4bb-4c47-9857-029dcd8c9879' of type subvolume
Dec 02 10:12:58 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:12:58.235+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8115277a-c4bb-4c47-9857-029dcd8c9879' of type subvolume
Dec 02 10:12:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8115277a-c4bb-4c47-9857-029dcd8c9879'' moved to trashcan
Dec 02 10:12:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:12:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8115277a-c4bb-4c47-9857-029dcd8c9879, vol_name:cephfs) < ""
Dec 02 10:12:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:58.290 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:12:58.742 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:12:58 np0005541914.localdomain ceph-mon[301710]: pgmap v504: 177 pgs: 177 active+clean; 1.2 GiB data, 3.9 GiB used, 38 GiB / 42 GiB avail
Dec 02 10:12:58 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e209 e209: 6 total, 6 up, 6 in
Dec 02 10:12:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v507: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 173 KiB/s rd, 90 KiB/s wr, 304 op/s
Dec 02 10:12:59 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "format": "json"}]: dispatch
Dec 02 10:12:59 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8115277a-c4bb-4c47-9857-029dcd8c9879", "force": true, "format": "json"}]: dispatch
Dec 02 10:12:59 np0005541914.localdomain ceph-mon[301710]: osdmap e209: 6 total, 6 up, 6 in
Dec 02 10:12:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:12:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, vol_name:cephfs) < ""
Dec 02 10:13:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60/.meta.tmp'
Dec 02 10:13:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60/.meta.tmp' to config b'/volumes/_nogroup/c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60/.meta'
Dec 02 10:13:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, vol_name:cephfs) < ""
Dec 02 10:13:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60", "format": "json"}]: dispatch
Dec 02 10:13:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, vol_name:cephfs) < ""
Dec 02 10:13:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, vol_name:cephfs) < ""
Dec 02 10:13:00 np0005541914.localdomain ceph-mon[301710]: pgmap v507: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 173 KiB/s rd, 90 KiB/s wr, 304 op/s
Dec 02 10:13:00 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:13:00 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60", "format": "json"}]: dispatch
Dec 02 10:13:00 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:13:00 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e210 e210: 6 total, 6 up, 6 in
Dec 02 10:13:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:01.151 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v509: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 156 KiB/s rd, 82 KiB/s wr, 274 op/s
Dec 02 10:13:01 np0005541914.localdomain podman[322911]: 2025-12-02 10:13:01.625934735 +0000 UTC m=+0.068996872 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:13:01 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:13:01 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:13:01 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e211 e211: 6 total, 6 up, 6 in
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.666512) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381666575, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 746, "num_deletes": 261, "total_data_size": 736960, "memory_usage": 750920, "flush_reason": "Manual Compaction"}
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381677275, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 482171, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28151, "largest_seqno": 28892, "table_properties": {"data_size": 478562, "index_size": 1400, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9028, "raw_average_key_size": 19, "raw_value_size": 470871, "raw_average_value_size": 1037, "num_data_blocks": 60, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670356, "oldest_key_time": 1764670356, "file_creation_time": 1764670381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 10821 microseconds, and 2431 cpu microseconds.
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.677328) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 482171 bytes OK
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.677355) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.687321) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.687368) EVENT_LOG_v1 {"time_micros": 1764670381687357, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.687399) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 732773, prev total WAL file size 732773, number of live WAL files 2.
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.688121) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323730' seq:72057594037927935, type:22 .. '6C6F676D0034353234' seq:0, type:0; will stop at (end)
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(470KB)], [42(17MB)]
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381688162, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 18972273, "oldest_snapshot_seqno": -1}
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 13532 keys, 18454852 bytes, temperature: kUnknown
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381786627, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 18454852, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18376674, "index_size": 43261, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33861, "raw_key_size": 363228, "raw_average_key_size": 26, "raw_value_size": 18145442, "raw_average_value_size": 1340, "num_data_blocks": 1625, "num_entries": 13532, "num_filter_entries": 13532, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.786937) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 18454852 bytes
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.788935) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.5 rd, 187.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 17.6 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(77.6) write-amplify(38.3) OK, records in: 14074, records dropped: 542 output_compression: NoCompression
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.788965) EVENT_LOG_v1 {"time_micros": 1764670381788952, "job": 24, "event": "compaction_finished", "compaction_time_micros": 98553, "compaction_time_cpu_micros": 45418, "output_level": 6, "num_output_files": 1, "total_output_size": 18454852, "num_input_records": 14074, "num_output_records": 13532, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381789212, "job": 24, "event": "table_file_deletion", "file_number": 44}
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670381792284, "job": 24, "event": "table_file_deletion", "file_number": 42}
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.688031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.792368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.792374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.792376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.792378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:13:01.792380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:13:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:01.834 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: osdmap e210: 6 total, 6 up, 6 in
Dec 02 10:13:01 np0005541914.localdomain ceph-mon[301710]: osdmap e211: 6 total, 6 up, 6 in
Dec 02 10:13:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:13:02 np0005541914.localdomain podman[322950]: 2025-12-02 10:13:02.822670237 +0000 UTC m=+0.063992908 container kill b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 02 10:13:02 np0005541914.localdomain dnsmasq[321974]: exiting on receipt of SIGTERM
Dec 02 10:13:02 np0005541914.localdomain systemd[1]: libpod-b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071.scope: Deactivated successfully.
Dec 02 10:13:02 np0005541914.localdomain podman[322965]: 2025-12-02 10:13:02.896982249 +0000 UTC m=+0.056706993 container died b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:13:02 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071-userdata-shm.mount: Deactivated successfully.
Dec 02 10:13:02 np0005541914.localdomain podman[322965]: 2025-12-02 10:13:02.933970647 +0000 UTC m=+0.093695361 container cleanup b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125)
Dec 02 10:13:02 np0005541914.localdomain systemd[1]: libpod-conmon-b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071.scope: Deactivated successfully.
Dec 02 10:13:02 np0005541914.localdomain ceph-mon[301710]: pgmap v509: 177 pgs: 177 active+clean; 200 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 156 KiB/s rd, 82 KiB/s wr, 274 op/s
Dec 02 10:13:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e212 e212: 6 total, 6 up, 6 in
Dec 02 10:13:02 np0005541914.localdomain podman[322964]: 2025-12-02 10:13:02.997483548 +0000 UTC m=+0.153688894 container remove b36f2988170324249ed2bd7b7835756f78e0506ed602fdf8f422ba6b1e741071 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0ed6501a-31af-475e-83c5-b9d22d72adda, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:13:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:13:03.030 262347 INFO neutron.agent.dhcp.agent [None req-e34b8e17-30d4-43a1-a30d-c3a69e5a6527 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:13:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:03.182 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:13:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:03.183 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:13:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:03.184 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:13:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v512: 177 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 87 KiB/s rd, 56 KiB/s wr, 125 op/s
Dec 02 10:13:03 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:13:03.274 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:13:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60", "format": "json"}]: dispatch
Dec 02 10:13:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:13:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:13:03 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:13:03.385+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60' of type subvolume
Dec 02 10:13:03 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60' of type subvolume
Dec 02 10:13:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, vol_name:cephfs) < ""
Dec 02 10:13:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60'' moved to trashcan
Dec 02 10:13:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:13:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60, vol_name:cephfs) < ""
Dec 02 10:13:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:13:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:13:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:13:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:13:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:13:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19238 "" "Go-http-client/1.1"
Dec 02 10:13:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:13:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:03.785 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:03 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-2824933fa3ea201d2880e13a9881e7be88073d7cf00c05154858ec9e5ab1017d-merged.mount: Deactivated successfully.
Dec 02 10:13:03 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2d0ed6501a\x2d31af\x2d475e\x2d83c5\x2db9d22d72adda.mount: Deactivated successfully.
Dec 02 10:13:03 np0005541914.localdomain podman[322994]: 2025-12-02 10:13:03.829267676 +0000 UTC m=+0.080892247 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:13:03 np0005541914.localdomain podman[322994]: 2025-12-02 10:13:03.865783218 +0000 UTC m=+0.117407789 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 10:13:03 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:13:03 np0005541914.localdomain ceph-mon[301710]: osdmap e212: 6 total, 6 up, 6 in
Dec 02 10:13:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:04.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:13:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:04.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 02 10:13:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:13:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1146820639' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:13:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1146820639' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:04 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:04.690 2 INFO neutron.agent.securitygroups_rpc [None req-87ab1b98-cb82-4479-9190-360042c3aeed 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:04 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:04.986 2 INFO neutron.agent.securitygroups_rpc [None req-c2e2f39f-f9d0-4681-8662-02fbec20bbf5 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:05 np0005541914.localdomain ceph-mon[301710]: pgmap v512: 177 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 87 KiB/s rd, 56 KiB/s wr, 125 op/s
Dec 02 10:13:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60", "format": "json"}]: dispatch
Dec 02 10:13:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c36fc0cf-86b1-4fe5-92a4-23ca7fc4ab60", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1077481991' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1077481991' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1647211614' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:13:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3306293220' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3306293220' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1146820639' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1146820639' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:05 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e213 e213: 6 total, 6 up, 6 in
Dec 02 10:13:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v514: 177 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 89 KiB/s rd, 58 KiB/s wr, 127 op/s
Dec 02 10:13:05 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:05.407 2 INFO neutron.agent.securitygroups_rpc [None req-2bcc8101-e89e-4e48-9cb1-9d691b1fbb0a 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:05.539 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:13:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:05.540 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:13:05 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:05.766 2 INFO neutron.agent.securitygroups_rpc [None req-af45566d-4b46-4c57-afce-d4fa5a30fbdd 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:06 np0005541914.localdomain ceph-mon[301710]: osdmap e213: 6 total, 6 up, 6 in
Dec 02 10:13:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3953452699' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:13:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:06.155 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:06 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:06.260 2 INFO neutron.agent.securitygroups_rpc [None req-4587ba21-504d-4c76-ba2e-bc511139041d 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:06 np0005541914.localdomain podman[323030]: 2025-12-02 10:13:06.479807417 +0000 UTC m=+0.062788459 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 02 10:13:06 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:13:06 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:13:06 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:13:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:06.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:13:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:06.559 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:13:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:06.560 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:13:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:06.560 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:13:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:06.560 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:13:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:06.561 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1abd7d7a-1fad-4e16-a25e-c36a0784c2b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, vol_name:cephfs) < ""
Dec 02 10:13:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:06.647 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e214 e214: 6 total, 6 up, 6 in
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1abd7d7a-1fad-4e16-a25e-c36a0784c2b0/.meta.tmp'
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1abd7d7a-1fad-4e16-a25e-c36a0784c2b0/.meta.tmp' to config b'/volumes/_nogroup/1abd7d7a-1fad-4e16-a25e-c36a0784c2b0/.meta'
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, vol_name:cephfs) < ""
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1abd7d7a-1fad-4e16-a25e-c36a0784c2b0", "format": "json"}]: dispatch
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, vol_name:cephfs) < ""
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, vol_name:cephfs) < ""
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:13:06
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['vms', 'images', 'backups', '.mgr', 'manila_metadata', 'manila_data', 'volumes']
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:13:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:13:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:13:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/641882817' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:13:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:07.035 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:13:07 np0005541914.localdomain ceph-mon[301710]: pgmap v514: 177 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 89 KiB/s rd, 58 KiB/s wr, 127 op/s
Dec 02 10:13:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2337962783' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2337962783' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:07 np0005541914.localdomain ceph-mon[301710]: osdmap e214: 6 total, 6 up, 6 in
Dec 02 10:13:07 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:13:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/641882817' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v516: 177 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 44 KiB/s wr, 98 op/s
Dec 02 10:13:07 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:07.211 2 INFO neutron.agent.securitygroups_rpc [None req-5417d582-9df9-4660-b1a1-ab7e1fcb97ff 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:07.210 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:13:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:07.213 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11496MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:13:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:07.214 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:13:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:07.214 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014869268216080402 of space, bias 1.0, pg target 0.2968897220477387 quantized to 32 (current 32)
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00010850694444444444 quantized to 32 (current 32)
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0005662535769123395 of space, bias 4.0, pg target 0.45073784722222227 quantized to 16 (current 16)
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < ""
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta.tmp'
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta.tmp' to config b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta'
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < ""
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "format": "json"}]: dispatch
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < ""
Dec 02 10:13:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < ""
Dec 02 10:13:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:07.516 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:13:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:07.516 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:13:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:13:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:07.575 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 02 10:13:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:07.652 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 02 10:13:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:07.653 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 02 10:13:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:07.671 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 02 10:13:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:07.696 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 02 10:13:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:07.713 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:13:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1abd7d7a-1fad-4e16-a25e-c36a0784c2b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:13:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1abd7d7a-1fad-4e16-a25e-c36a0784c2b0", "format": "json"}]: dispatch
Dec 02 10:13:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:13:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:13:08 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3210582778' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:13:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:08.177 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:13:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:08.181 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:13:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:08.199 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:13:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:08.201 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:13:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:08.201 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:13:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:08.830 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:09 np0005541914.localdomain ceph-mon[301710]: pgmap v516: 177 pgs: 3 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 68 KiB/s rd, 44 KiB/s wr, 98 op/s
Dec 02 10:13:09 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:13:09 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "format": "json"}]: dispatch
Dec 02 10:13:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3210582778' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:13:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e215 e215: 6 total, 6 up, 6 in
Dec 02 10:13:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:09.198 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:13:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:09.198 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:13:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:09.198 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:13:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v518: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 83 KiB/s rd, 51 KiB/s wr, 113 op/s
Dec 02 10:13:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1abd7d7a-1fad-4e16-a25e-c36a0784c2b0", "format": "json"}]: dispatch
Dec 02 10:13:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:13:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:13:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:13:09.936+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1abd7d7a-1fad-4e16-a25e-c36a0784c2b0' of type subvolume
Dec 02 10:13:09 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1abd7d7a-1fad-4e16-a25e-c36a0784c2b0' of type subvolume
Dec 02 10:13:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1abd7d7a-1fad-4e16-a25e-c36a0784c2b0", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, vol_name:cephfs) < ""
Dec 02 10:13:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1abd7d7a-1fad-4e16-a25e-c36a0784c2b0'' moved to trashcan
Dec 02 10:13:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:13:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1abd7d7a-1fad-4e16-a25e-c36a0784c2b0, vol_name:cephfs) < ""
Dec 02 10:13:10 np0005541914.localdomain ceph-mon[301710]: osdmap e215: 6 total, 6 up, 6 in
Dec 02 10:13:10 np0005541914.localdomain ceph-mon[301710]: pgmap v518: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 83 KiB/s rd, 51 KiB/s wr, 113 op/s
Dec 02 10:13:10 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1abd7d7a-1fad-4e16-a25e-c36a0784c2b0", "format": "json"}]: dispatch
Dec 02 10:13:10 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1abd7d7a-1fad-4e16-a25e-c36a0784c2b0", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:10.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:13:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:10.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:13:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:10.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:13:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:10.553 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:13:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:10.553 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:13:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "snap_name": "d93e1e67-e087-48ee-b546-b47b516fdb8a", "format": "json"}]: dispatch
Dec 02 10:13:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d93e1e67-e087-48ee-b546-b47b516fdb8a, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < ""
Dec 02 10:13:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:d93e1e67-e087-48ee-b546-b47b516fdb8a, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < ""
Dec 02 10:13:11 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "snap_name": "d93e1e67-e087-48ee-b546-b47b516fdb8a", "format": "json"}]: dispatch
Dec 02 10:13:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:11.159 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v519: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 50 KiB/s wr, 110 op/s
Dec 02 10:13:11 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:11.353 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:47:bc 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-e625cddc-8a19-4455-8def-acda09527180', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e625cddc-8a19-4455-8def-acda09527180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=39bb33c3-24c9-42a5-b452-f2a8901739e7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=f6a41283-fc6f-4680-bb13-d9b38f4d32ad) old=Port_Binding(mac=['fa:16:3e:b4:47:bc 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-e625cddc-8a19-4455-8def-acda09527180', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e625cddc-8a19-4455-8def-acda09527180', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:13:11 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:11.355 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port f6a41283-fc6f-4680-bb13-d9b38f4d32ad in datapath e625cddc-8a19-4455-8def-acda09527180 updated
Dec 02 10:13:11 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:11.357 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e625cddc-8a19-4455-8def-acda09527180, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:13:11 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:11.358 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[48236fc2-2d86-47ff-91d2-1da9728cffd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:13:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:13:11 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2086186195' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:13:11 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2086186195' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e216 e216: 6 total, 6 up, 6 in
Dec 02 10:13:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:13:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:13:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:13:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:13:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:13:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:13:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:13:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:13:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:13:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:13:12 np0005541914.localdomain ceph-mon[301710]: pgmap v519: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 50 KiB/s wr, 110 op/s
Dec 02 10:13:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2086186195' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2086186195' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:12 np0005541914.localdomain ceph-mon[301710]: osdmap e216: 6 total, 6 up, 6 in
Dec 02 10:13:12 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:12.475 2 INFO neutron.agent.securitygroups_rpc [None req-7de79900-a7b1-40d4-917a-526ff9de5d92 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:12.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:13:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:12.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:13:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:13:13 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:13.014 2 INFO neutron.agent.securitygroups_rpc [None req-bfd5de53-2a30-4cbd-a16d-14779023f72a 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "51b857f0-7bbc-47ae-84d3-61ced77d3364", "format": "json"}]: dispatch
Dec 02 10:13:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:51b857f0-7bbc-47ae-84d3-61ced77d3364, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:51b857f0-7bbc-47ae-84d3-61ced77d3364, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e217 e217: 6 total, 6 up, 6 in
Dec 02 10:13:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v522: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 149 KiB/s rd, 76 KiB/s wr, 204 op/s
Dec 02 10:13:13 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:13.619 2 INFO neutron.agent.securitygroups_rpc [None req-c543e5d9-b160-4e67-9d20-7dd20a3ffb2c 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:13.861 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:14 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "51b857f0-7bbc-47ae-84d3-61ced77d3364", "format": "json"}]: dispatch
Dec 02 10:13:14 np0005541914.localdomain ceph-mon[301710]: osdmap e217: 6 total, 6 up, 6 in
Dec 02 10:13:14 np0005541914.localdomain ceph-mon[301710]: pgmap v522: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 149 KiB/s rd, 76 KiB/s wr, 204 op/s
Dec 02 10:13:14 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:14.261 2 INFO neutron.agent.securitygroups_rpc [None req-654147e9-be5d-4ad5-89df-552977a60280 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:13:14 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/853958348' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:13:14 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/853958348' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "snap_name": "d93e1e67-e087-48ee-b546-b47b516fdb8a_ad4a4f2d-b818-4a17-accd-1b8ca8c807b2", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d93e1e67-e087-48ee-b546-b47b516fdb8a_ad4a4f2d-b818-4a17-accd-1b8ca8c807b2, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < ""
Dec 02 10:13:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta.tmp'
Dec 02 10:13:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta.tmp' to config b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta'
Dec 02 10:13:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d93e1e67-e087-48ee-b546-b47b516fdb8a_ad4a4f2d-b818-4a17-accd-1b8ca8c807b2, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < ""
Dec 02 10:13:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "snap_name": "d93e1e67-e087-48ee-b546-b47b516fdb8a", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d93e1e67-e087-48ee-b546-b47b516fdb8a, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < ""
Dec 02 10:13:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta.tmp'
Dec 02 10:13:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta.tmp' to config b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d/.meta'
Dec 02 10:13:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:d93e1e67-e087-48ee-b546-b47b516fdb8a, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < ""
Dec 02 10:13:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v523: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 24 KiB/s wr, 89 op/s
Dec 02 10:13:15 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3657225323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:13:15 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/853958348' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:15 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/853958348' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "snap_name": "d93e1e67-e087-48ee-b546-b47b516fdb8a_ad4a4f2d-b818-4a17-accd-1b8ca8c807b2", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "snap_name": "d93e1e67-e087-48ee-b546-b47b516fdb8a", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:15 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3212393842' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:15 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3212393842' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:16.163 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:16 np0005541914.localdomain ceph-mon[301710]: pgmap v523: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 24 KiB/s wr, 89 op/s
Dec 02 10:13:16 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3088318282' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:13:16 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1171825153' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:16 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1171825153' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:16.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:13:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:16.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 02 10:13:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:16.551 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 02 10:13:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "51b857f0-7bbc-47ae-84d3-61ced77d3364_ef18a710-573a-49b3-bf4f-f42593b839e9", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:51b857f0-7bbc-47ae-84d3-61ced77d3364_ef18a710-573a-49b3-bf4f-f42593b839e9, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 02 10:13:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 02 10:13:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:51b857f0-7bbc-47ae-84d3-61ced77d3364_ef18a710-573a-49b3-bf4f-f42593b839e9, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "51b857f0-7bbc-47ae-84d3-61ced77d3364", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:51b857f0-7bbc-47ae-84d3-61ced77d3364, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 02 10:13:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 02 10:13:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:51b857f0-7bbc-47ae-84d3-61ced77d3364, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v524: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 18 KiB/s wr, 68 op/s
Dec 02 10:13:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:13:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "format": "json"}]: dispatch
Dec 02 10:13:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:13:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:13:17 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1eb437a1-55f1-4e1b-ab0c-b43c26904e3d' of type subvolume
Dec 02 10:13:17 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:13:17.595+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '1eb437a1-55f1-4e1b-ab0c-b43c26904e3d' of type subvolume
Dec 02 10:13:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < ""
Dec 02 10:13:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/1eb437a1-55f1-4e1b-ab0c-b43c26904e3d'' moved to trashcan
Dec 02 10:13:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:13:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:1eb437a1-55f1-4e1b-ab0c-b43c26904e3d, vol_name:cephfs) < ""
Dec 02 10:13:17 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "51b857f0-7bbc-47ae-84d3-61ced77d3364_ef18a710-573a-49b3-bf4f-f42593b839e9", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:17 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "51b857f0-7bbc-47ae-84d3-61ced77d3364", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e218 e218: 6 total, 6 up, 6 in
Dec 02 10:13:18 np0005541914.localdomain ceph-mon[301710]: pgmap v524: 177 pgs: 177 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 18 KiB/s wr, 68 op/s
Dec 02 10:13:18 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "format": "json"}]: dispatch
Dec 02 10:13:18 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1eb437a1-55f1-4e1b-ab0c-b43c26904e3d", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:18 np0005541914.localdomain ceph-mon[301710]: osdmap e218: 6 total, 6 up, 6 in
Dec 02 10:13:18 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2024253158' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:18 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2024253158' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:18 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2845869502' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:18 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2845869502' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:18 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e219 e219: 6 total, 6 up, 6 in
Dec 02 10:13:18 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:18.829 2 INFO neutron.agent.securitygroups_rpc [None req-809c0752-7390-4fe2-a50b-599cd97feccb 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:18.896 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v527: 177 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 71 KiB/s wr, 117 op/s
Dec 02 10:13:19 np0005541914.localdomain podman[323111]: 2025-12-02 10:13:19.3329361 +0000 UTC m=+0.066116172 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:13:19 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:13:19 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:13:19 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:13:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:19.573 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:19 np0005541914.localdomain ceph-mon[301710]: osdmap e219: 6 total, 6 up, 6 in
Dec 02 10:13:19 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:19.816 2 INFO neutron.agent.securitygroups_rpc [None req-86a7997e-3c35-4f55-af22-9588e0545ddf 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:20 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:20.546 2 INFO neutron.agent.securitygroups_rpc [None req-173c0738-50d3-48cc-af5a-b78421c8e23c 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:20 np0005541914.localdomain ceph-mon[301710]: pgmap v527: 177 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 84 KiB/s rd, 71 KiB/s wr, 117 op/s
Dec 02 10:13:20 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:20.979 2 INFO neutron.agent.securitygroups_rpc [None req-a615b87d-8e60-4d08-9004-5c448f1ed91b 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:21.196 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v528: 177 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 53 KiB/s wr, 88 op/s
Dec 02 10:13:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e220 e220: 6 total, 6 up, 6 in
Dec 02 10:13:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:13:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:13:21 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:13:22 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:13:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "71fe3ff3-77b1-42b9-a13c-7c107bdd326d", "format": "json"}]: dispatch
Dec 02 10:13:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:71fe3ff3-77b1-42b9-a13c-7c107bdd326d, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:22 np0005541914.localdomain podman[323131]: 2025-12-02 10:13:22.086315492 +0000 UTC m=+0.087511370 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 02 10:13:22 np0005541914.localdomain systemd[1]: tmp-crun.QJdCpA.mount: Deactivated successfully.
Dec 02 10:13:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:71fe3ff3-77b1-42b9-a13c-7c107bdd326d, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
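[annotation] The dispatch/Starting/Finishing trio above is the mgr volumes module handling an "fs subvolume snapshot create" request from client.openstack. A minimal sketch of the equivalent call through the ceph CLI, using the names taken from the log; the --id value mirrors client.openstack and assumes its keyring is readable on the node, so treat it as illustrative:

```python
# Sketch: replay the "fs subvolume snapshot create" call that the mgr volumes
# module logs above, via the ceph CLI. Volume/subvolume/snapshot names come
# from the log; client id/keyring availability is an assumption.
import subprocess

def create_subvolume_snapshot(vol_name, sub_name, snap_name, client_id="openstack"):
    subprocess.run(
        ["ceph", "--id", client_id,
         "fs", "subvolume", "snapshot", "create",
         vol_name, sub_name, snap_name,
         "--format", "json"],
        check=True,
    )

create_subvolume_snapshot(
    "cephfs",
    "afb0a218-82d6-4848-bc26-a77f5d927675",
    "71fe3ff3-77b1-42b9-a13c-7c107bdd326d",
)
```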
Dec 02 10:13:22 np0005541914.localdomain podman[323132]: 2025-12-02 10:13:22.101222291 +0000 UTC m=+0.098231011 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:13:22 np0005541914.localdomain podman[323133]: 2025-12-02 10:13:22.146651376 +0000 UTC m=+0.137522966 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 02 10:13:22 np0005541914.localdomain podman[323139]: 2025-12-02 10:13:22.166243688 +0000 UTC m=+0.150723272 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 02 10:13:22 np0005541914.localdomain podman[323131]: 2025-12-02 10:13:22.173172381 +0000 UTC m=+0.174368309 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 10:13:22 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:13:22 np0005541914.localdomain podman[323132]: 2025-12-02 10:13:22.188960426 +0000 UTC m=+0.185969176 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:13:22 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:13:22 np0005541914.localdomain podman[323139]: 2025-12-02 10:13:22.207731463 +0000 UTC m=+0.192211077 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:13:22 np0005541914.localdomain podman[323133]: 2025-12-02 10:13:22.233071652 +0000 UTC m=+0.223943182 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Dec 02 10:13:22 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:13:22 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
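[annotation] Each podman health_status record in this round embeds the container's full edpm configuration as a Python-literal dict in the `config_data={...}` label. Because it is printed with single quotes and `True`, it can be recovered with `ast.literal_eval` once the balanced brace span is cut out; a sketch, with the sample line shortened from the node_exporter record and brace balancing as the only parsing assumption:

```python
# Sketch: pull the config_data={...} dict out of a podman health_status /
# exec_died journal line like the ones above. The value is a Python literal,
# so ast.literal_eval can decode the balanced {...} span.
import ast

def extract_config_data(line):
    start = line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(line[start:i + 1])
    raise ValueError("unbalanced config_data braces")

sample = ("container health_status ... (name=node_exporter, "
          "config_data={'image': 'quay.io/prometheus/node-exporter', "
          "'net': 'host', 'ports': ['9100:9100'], 'privileged': True}, "
          "config_id=edpm)")
print(extract_config_data(sample)["ports"])   # -> ['9100:9100']
```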
Dec 02 10:13:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:13:22 np0005541914.localdomain ceph-mon[301710]: pgmap v528: 177 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 53 KiB/s wr, 88 op/s
Dec 02 10:13:22 np0005541914.localdomain ceph-mon[301710]: osdmap e220: 6 total, 6 up, 6 in
Dec 02 10:13:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "71fe3ff3-77b1-42b9-a13c-7c107bdd326d", "format": "json"}]: dispatch
Dec 02 10:13:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v530: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 86 KiB/s rd, 82 KiB/s wr, 124 op/s
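[annotation] The cluster-channel pgmap lines repeat a fixed shape (PG state counts, data/used/avail, client throughput), so a small regex turns one into structured fields. A sketch grounded only in the format visible in the lines above:

```python
# Sketch: parse a "pgmap vNNN: ..." debug line like the ones emitted by
# ceph-mgr/ceph-mon above into a dict of its fields.
import re

PGMAP_RE = re.compile(
    r"pgmap v(?P<version>\d+): (?P<pgs>\d+) pgs: (?P<states>[^;]+); "
    r"(?P<data>\S+ \S+) data, (?P<used>\S+ \S+) used, "
    r"(?P<avail>\S+ \S+) / (?P<total>\S+ \S+) avail"
)

def parse_pgmap(line):
    m = PGMAP_RE.search(line)
    if not m:
        return None
    fields = m.groupdict()
    # "5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 170 active+clean"
    fields["states"] = {
        state: int(count)
        for count, state in (chunk.split() for chunk in fields["states"].split(", "))
    }
    return fields

line = ("pgmap v528: 177 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, "
        "170 active+clean; 200 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; "
        "63 KiB/s rd, 53 KiB/s wr, 88 op/s")
print(parse_pgmap(line)["states"])
```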
Dec 02 10:13:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:23.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:13:23 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:23.923 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:24 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:24.100 2 INFO neutron.agent.securitygroups_rpc [None req-1c76d5fd-a4fe-47e2-aa2d-3afed3d7786f 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:24 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:13:24.346 262347 INFO neutron.agent.linux.ip_lib [None req-0de97030-fe51-4340-a1d2-e26ef3f59dda - - - - - -] Device tapf15aee39-f3 cannot be used as it has no MAC address
Dec 02 10:13:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:24.370 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:24 np0005541914.localdomain kernel: device tapf15aee39-f3 entered promiscuous mode
Dec 02 10:13:24 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670404.3812] manager: (tapf15aee39-f3): new Generic device (/org/freedesktop/NetworkManager/Devices/46)
Dec 02 10:13:24 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:13:24Z|00225|binding|INFO|Claiming lport f15aee39-f3c5-43c5-8331-48f6ffa03ae6 for this chassis.
Dec 02 10:13:24 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:13:24Z|00226|binding|INFO|f15aee39-f3c5-43c5-8331-48f6ffa03ae6: Claiming unknown
Dec 02 10:13:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:24.382 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:24 np0005541914.localdomain systemd-udevd[323224]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:13:24 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:24.392 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-689e2fb8-60b0-49ed-bf14-a87677f003a5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-689e2fb8-60b0-49ed-bf14-a87677f003a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7028d62-160f-453a-9151-4aa69f4e4388, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=f15aee39-f3c5-43c5-8331-48f6ffa03ae6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:13:24 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:24.394 159483 INFO neutron.agent.ovn.metadata.agent [-] Port f15aee39-f3c5-43c5-8331-48f6ffa03ae6 in datapath 689e2fb8-60b0-49ed-bf14-a87677f003a5 bound to our chassis
Dec 02 10:13:24 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:24.396 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Port b87ab23f-af15-44a2-85b2-a53866079edf IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Dec 02 10:13:24 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:24.397 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 689e2fb8-60b0-49ed-bf14-a87677f003a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:13:24 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:24.397 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a84b25bb-e90b-4e0b-9e24-586e17f5aa57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:13:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapf15aee39-f3: No such device
Dec 02 10:13:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapf15aee39-f3: No such device
Dec 02 10:13:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:24.418 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapf15aee39-f3: No such device
Dec 02 10:13:24 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:13:24Z|00227|binding|INFO|Setting lport f15aee39-f3c5-43c5-8331-48f6ffa03ae6 ovn-installed in OVS
Dec 02 10:13:24 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:13:24Z|00228|binding|INFO|Setting lport f15aee39-f3c5-43c5-8331-48f6ffa03ae6 up in Southbound
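[annotation] The sequence above is ovn-controller claiming the DHCP tap's logical port for this chassis, marking it ovn-installed in OVS and up in the Southbound DB. The same binding state can be inspected from the SB database with the generic find command; a sketch, assuming ovn-sbctl is available on the host (or run inside the ovn_controller container) and can reach the local SB connection, with the logical port UUID taken from the log:

```python
# Sketch: look up the Port_Binding row that ovn-controller just claimed,
# via ovn-sbctl's generic "find" on the Southbound Port_Binding table.
import subprocess

LPORT = "f15aee39-f3c5-43c5-8331-48f6ffa03ae6"   # logical port from the log above

def port_binding(logical_port):
    out = subprocess.run(
        ["ovn-sbctl", "--format=json",
         "find", "Port_Binding", f"logical_port={logical_port}"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

print(port_binding(LPORT))
```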
Dec 02 10:13:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:24.424 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapf15aee39-f3: No such device
Dec 02 10:13:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:24.426 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapf15aee39-f3: No such device
Dec 02 10:13:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapf15aee39-f3: No such device
Dec 02 10:13:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapf15aee39-f3: No such device
Dec 02 10:13:24 np0005541914.localdomain virtnodedevd[229262]: ethtool ioctl error on tapf15aee39-f3: No such device
Dec 02 10:13:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:24.456 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:24.475 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:24 np0005541914.localdomain ceph-mon[301710]: pgmap v530: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 86 KiB/s rd, 82 KiB/s wr, 124 op/s
Dec 02 10:13:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v531: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 66 KiB/s wr, 99 op/s
Dec 02 10:13:25 np0005541914.localdomain podman[323295]: 

Dec 02 10:13:25 np0005541914.localdomain podman[323295]: 2025-12-02 10:13:25.341790982 +0000 UTC m=+0.059997584 container create f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 02 10:13:25 np0005541914.localdomain systemd[1]: Started libpod-conmon-f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2.scope.
Dec 02 10:13:25 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:13:25 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4f5f02ccadbec04bd08e00844ffed85a529fb4b80bc5fbf08b5b1862ed14228/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:13:25 np0005541914.localdomain podman[323295]: 2025-12-02 10:13:25.31045988 +0000 UTC m=+0.028666472 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Dec 02 10:13:25 np0005541914.localdomain podman[323295]: 2025-12-02 10:13:25.413746993 +0000 UTC m=+0.131953575 container init f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 10:13:25 np0005541914.localdomain systemd[1]: tmp-crun.F4yA1N.mount: Deactivated successfully.
Dec 02 10:13:25 np0005541914.localdomain podman[323295]: 2025-12-02 10:13:25.42144119 +0000 UTC m=+0.139647822 container start f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:13:25 np0005541914.localdomain dnsmasq[323313]: started, version 2.85 cachesize 150
Dec 02 10:13:25 np0005541914.localdomain dnsmasq[323313]: DNS service limited to local subnets
Dec 02 10:13:25 np0005541914.localdomain dnsmasq[323313]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Dec 02 10:13:25 np0005541914.localdomain dnsmasq[323313]: warning: no upstream servers configured
Dec 02 10:13:25 np0005541914.localdomain dnsmasq-dhcp[323313]: DHCP, static leases only on 10.100.0.0, lease time 1d
Dec 02 10:13:25 np0005541914.localdomain dnsmasq[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/addn_hosts - 0 addresses
Dec 02 10:13:25 np0005541914.localdomain dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/host
Dec 02 10:13:25 np0005541914.localdomain dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/opts
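[annotation] dnsmasq is reading the per-network files the neutron DHCP agent renders under /var/lib/neutron/dhcp/<network_id>/, and its "read ... - N addresses" line tracks the entries in the addn_hosts file. A sketch of the same count, assuming the usual hosts-file layout of one entry per non-comment line (that layout is an assumption, the path comes from the log):

```python
# Sketch: report how many address entries dnsmasq would pick up from the
# addn_hosts file of the network shown above, mirroring its
# "read ... - N addresses" log line.
from pathlib import Path

ADDN_HOSTS = Path("/var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/addn_hosts")

def count_addresses(path):
    entries = 0
    for line in path.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            entries += 1
    return entries

print(f"{ADDN_HOSTS} - {count_addresses(ADDN_HOSTS)} addresses")
```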
Dec 02 10:13:25 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:13:25.482 262347 INFO neutron.agent.dhcp.agent [None req-070bb103-3d0c-40ca-b063-f53a13f625be - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:23Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034aa11c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a2dd00>], id=cd659161-3401-46de-89bb-6bc014b75b2f, ip_allocation=immediate, mac_address=fa:16:3e:5f:95:4d, name=tempest-PortsTestJSON-1991715511, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T10:13:22Z, description=, dns_domain=, id=689e2fb8-60b0-49ed-bf14-a87677f003a5, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsTestJSON-1651050851, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50437, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3112, status=ACTIVE, subnets=['9e76e82a-6a74-4080-8429-de5a530068cf'], tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:22Z, vlan_transparent=None, network_id=689e2fb8-60b0-49ed-bf14-a87677f003a5, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a05fa096-2813-49c8-a900-5ab13174ee5a'], standard_attr_id=3139, status=DOWN, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:23Z on network 689e2fb8-60b0-49ed-bf14-a87677f003a5
Dec 02 10:13:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "cab3f864-d18a-47fe-ac99-2bece590c4f2", "format": "json"}]: dispatch
Dec 02 10:13:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cab3f864-d18a-47fe-ac99-2bece590c4f2, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:25 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:13:25.592 262347 INFO neutron.agent.dhcp.agent [None req-9b93fd14-9258-4557-862f-f1a6211b5023 - - - - - -] DHCP configuration for ports {'b3f2ab16-9924-4121-b481-6649f6786325'} is completed
Dec 02 10:13:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:cab3f864-d18a-47fe-ac99-2bece590c4f2, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:25 np0005541914.localdomain dnsmasq[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/addn_hosts - 1 addresses
Dec 02 10:13:25 np0005541914.localdomain dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/host
Dec 02 10:13:25 np0005541914.localdomain dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/opts
Dec 02 10:13:25 np0005541914.localdomain podman[323330]: 2025-12-02 10:13:25.753585825 +0000 UTC m=+0.064579495 container kill f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 02 10:13:25 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:13:25.957 262347 INFO neutron.agent.dhcp.agent [None req-69e49db5-98ea-4402-a4ef-a0f7f5a098bd - - - - - -] DHCP configuration for ports {'cd659161-3401-46de-89bb-6bc014b75b2f'} is completed
Dec 02 10:13:26 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:13:26.033 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:13:23Z, description=, device_id=d65b4168-a26b-402d-b6c6-567c639808fe, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a2a730>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a2a790>], id=cd659161-3401-46de-89bb-6bc014b75b2f, ip_allocation=immediate, mac_address=fa:16:3e:5f:95:4d, name=tempest-PortsTestJSON-1991715511, network_id=689e2fb8-60b0-49ed-bf14-a87677f003a5, port_security_enabled=True, project_id=4ac3f69b39e24601806d0f601335ff31, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['a05fa096-2813-49c8-a900-5ab13174ee5a'], standard_attr_id=3139, status=ACTIVE, tags=[], tenant_id=4ac3f69b39e24601806d0f601335ff31, updated_at=2025-12-02T10:13:25Z on network 689e2fb8-60b0-49ed-bf14-a87677f003a5
Dec 02 10:13:26 np0005541914.localdomain sshd[323368]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:13:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:26.227 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:26 np0005541914.localdomain podman[323370]: 2025-12-02 10:13:26.269019753 +0000 UTC m=+0.061642376 container kill f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 02 10:13:26 np0005541914.localdomain dnsmasq[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/addn_hosts - 1 addresses
Dec 02 10:13:26 np0005541914.localdomain dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/host
Dec 02 10:13:26 np0005541914.localdomain dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/opts
Dec 02 10:13:26 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:13:26.518 262347 INFO neutron.agent.dhcp.agent [None req-0f815766-69a0-42ad-a679-f5d8d64c20ec - - - - - -] DHCP configuration for ports {'cd659161-3401-46de-89bb-6bc014b75b2f'} is completed
Dec 02 10:13:26 np0005541914.localdomain sshd[323368]: Invalid user node from 193.32.162.146 port 55844
Dec 02 10:13:26 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e221 e221: 6 total, 6 up, 6 in
Dec 02 10:13:26 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:26.678 2 INFO neutron.agent.securitygroups_rpc [None req-03a5d6f5-e9fc-4da2-b77f-f6e56b5ab3f7 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:26 np0005541914.localdomain ceph-mon[301710]: pgmap v531: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 66 KiB/s wr, 99 op/s
Dec 02 10:13:26 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "cab3f864-d18a-47fe-ac99-2bece590c4f2", "format": "json"}]: dispatch
Dec 02 10:13:26 np0005541914.localdomain ceph-mon[301710]: osdmap e221: 6 total, 6 up, 6 in
Dec 02 10:13:26 np0005541914.localdomain sshd[323368]: Connection closed by invalid user node 193.32.162.146 port 55844 [preauth]
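[annotation] The sshd pair above (invalid user "node" from 193.32.162.146, then a preauth disconnect) is routine scanner noise; the journal can be skimmed for how often it recurs and from where. A sketch that tallies such attempts, assuming journalctl access and selecting sshd by its syslog identifier with message-only output:

```python
# Sketch: count sshd "Invalid user" probes per source address by reading the
# journal for the sshd identifier (-t sshd) with message-only output (-o cat).
import re
import subprocess
from collections import Counter

INVALID_RE = re.compile(r"Invalid user (\S+) from (\S+) port \d+")

def invalid_user_sources():
    out = subprocess.run(
        ["journalctl", "-t", "sshd", "-o", "cat", "--no-pager"],
        capture_output=True, text=True, check=True,
    )
    hits = Counter()
    for line in out.stdout.splitlines():
        m = INVALID_RE.search(line)
        if m:
            hits[m.group(2)] += 1
    return hits

for addr, count in invalid_user_sources().most_common(10):
    print(f"{addr}: {count}")
```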
Dec 02 10:13:26 np0005541914.localdomain systemd[1]: tmp-crun.duMxgT.mount: Deactivated successfully.
Dec 02 10:13:26 np0005541914.localdomain dnsmasq[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/addn_hosts - 0 addresses
Dec 02 10:13:26 np0005541914.localdomain dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/host
Dec 02 10:13:26 np0005541914.localdomain dnsmasq-dhcp[323313]: read /var/lib/neutron/dhcp/689e2fb8-60b0-49ed-bf14-a87677f003a5/opts
Dec 02 10:13:26 np0005541914.localdomain podman[323408]: 2025-12-02 10:13:26.89769353 +0000 UTC m=+0.047731238 container kill f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:13:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:27.079 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:27 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:13:27Z|00229|binding|INFO|Releasing lport f15aee39-f3c5-43c5-8331-48f6ffa03ae6 from this chassis (sb_readonly=0)
Dec 02 10:13:27 np0005541914.localdomain kernel: device tapf15aee39-f3 left promiscuous mode
Dec 02 10:13:27 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:13:27Z|00230|binding|INFO|Setting lport f15aee39-f3c5-43c5-8331-48f6ffa03ae6 down in Southbound
Dec 02 10:13:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:27.087 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp71446731-2bf3-5f07-9433-c6ccc8c8960b-689e2fb8-60b0-49ed-bf14-a87677f003a5', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-689e2fb8-60b0-49ed-bf14-a87677f003a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7028d62-160f-453a-9151-4aa69f4e4388, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=f15aee39-f3c5-43c5-8331-48f6ffa03ae6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:13:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:27.088 159483 INFO neutron.agent.ovn.metadata.agent [-] Port f15aee39-f3c5-43c5-8331-48f6ffa03ae6 in datapath 689e2fb8-60b0-49ed-bf14-a87677f003a5 unbound from our chassis
Dec 02 10:13:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:27.089 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 689e2fb8-60b0-49ed-bf14-a87677f003a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:13:27 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:27.090 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[0b9c9ab1-be23-4294-827a-e1c92b1e4fe9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:13:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:27.105 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:27.106 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v533: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 1.7 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 02 10:13:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:13:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:13:27 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3797520786' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:13:27 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3797520786' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:27 np0005541914.localdomain dnsmasq[323313]: exiting on receipt of SIGTERM
Dec 02 10:13:27 np0005541914.localdomain podman[323447]: 2025-12-02 10:13:27.615096753 +0000 UTC m=+0.044790957 container kill f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:13:27 np0005541914.localdomain systemd[1]: libpod-f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2.scope: Deactivated successfully.
Dec 02 10:13:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:13:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:13:27 np0005541914.localdomain podman[323460]: 2025-12-02 10:13:27.699228108 +0000 UTC m=+0.068716573 container died f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:13:27 np0005541914.localdomain podman[323460]: 2025-12-02 10:13:27.731348955 +0000 UTC m=+0.100837390 container cleanup f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 02 10:13:27 np0005541914.localdomain systemd[1]: libpod-conmon-f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2.scope: Deactivated successfully.
Dec 02 10:13:27 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3797520786' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:27 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3797520786' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:27 np0005541914.localdomain podman[323462]: 2025-12-02 10:13:27.748874804 +0000 UTC m=+0.110345322 container remove f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-689e2fb8-60b0-49ed-bf14-a87677f003a5, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 02 10:13:27 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:13:27.772 262347 INFO neutron.agent.dhcp.agent [None req-84945643-cf33-4819-8941-2aea5216c05a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:13:27 np0005541914.localdomain podman[323474]: 2025-12-02 10:13:27.800544351 +0000 UTC m=+0.132730489 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-type=git, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public)
Dec 02 10:13:27 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:13:27.816 262347 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Dec 02 10:13:27 np0005541914.localdomain podman[323473]: 2025-12-02 10:13:27.780717072 +0000 UTC m=+0.119303467 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:13:27 np0005541914.localdomain podman[323473]: 2025-12-02 10:13:27.866073704 +0000 UTC m=+0.204660109 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:13:27 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:13:27 np0005541914.localdomain podman[323474]: 2025-12-02 10:13:27.884863252 +0000 UTC m=+0.217049420 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Dec 02 10:13:27 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:13:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:28.014 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-d4f5f02ccadbec04bd08e00844ffed85a529fb4b80bc5fbf08b5b1862ed14228-merged.mount: Deactivated successfully.
Dec 02 10:13:28 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f1a848ed1159952f749ec6ffa3ade007079667a1f8726827ec2ce3635cffb8c2-userdata-shm.mount: Deactivated successfully.
Dec 02 10:13:28 np0005541914.localdomain systemd[1]: run-netns-qdhcp\x2d689e2fb8\x2d60b0\x2d49ed\x2dbf14\x2da87677f003a5.mount: Deactivated successfully.
Dec 02 10:13:28 np0005541914.localdomain ceph-mon[301710]: pgmap v533: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 1.7 KiB/s rd, 8.6 KiB/s wr, 5 op/s
Dec 02 10:13:28 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:28.926 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v534: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 27 KiB/s wr, 46 op/s
Dec 02 10:13:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "75cce5b0-a115-45be-bdca-5a004bb97c21", "format": "json"}]: dispatch
Dec 02 10:13:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:75cce5b0-a115-45be-bdca-5a004bb97c21, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:75cce5b0-a115-45be-bdca-5a004bb97c21, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:30 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:30.049 2 INFO neutron.agent.securitygroups_rpc [None req-fd3a2c94-7b8a-4692-92f4-6e53ae5bb9ca 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['b06e62c3-67bb-4248-8ca7-8eec12bdd5e1']
Dec 02 10:13:30 np0005541914.localdomain ceph-mon[301710]: pgmap v534: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 27 KiB/s wr, 46 op/s
Dec 02 10:13:30 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "75cce5b0-a115-45be-bdca-5a004bb97c21", "format": "json"}]: dispatch
Dec 02 10:13:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v535: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 22 KiB/s wr, 39 op/s
Dec 02 10:13:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:31.278 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:32 np0005541914.localdomain sudo[323526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:13:32 np0005541914.localdomain sudo[323526]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:13:32 np0005541914.localdomain sudo[323526]: pam_unix(sudo:session): session closed for user root
Dec 02 10:13:32 np0005541914.localdomain sudo[323544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:13:32 np0005541914.localdomain sudo[323544]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:13:32 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:32.443 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:56:e4 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9b4da99-dc68-46c9-bcf0-a3cfe207d767, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3d63b6f3-67ae-4c21-b56a-394abd9240e9) old=Port_Binding(mac=['fa:16:3e:31:56:e4 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:13:32 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:32.445 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3d63b6f3-67ae-4c21-b56a-394abd9240e9 in datapath 55499ea7-fec3-45ce-8fdc-4c408cd7abf9 updated
Dec 02 10:13:32 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:32.447 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:13:32 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:32.448 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[29ac695f-cf82-47a4-8130-ab087af71fc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:13:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:13:32 np0005541914.localdomain ceph-mon[301710]: pgmap v535: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 22 KiB/s wr, 39 op/s
Dec 02 10:13:32 np0005541914.localdomain sudo[323544]: pam_unix(sudo:session): session closed for user root
Dec 02 10:13:32 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8", "format": "json"}]: dispatch
Dec 02 10:13:32 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:32 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:33 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:33.040 2 INFO neutron.agent.securitygroups_rpc [None req-2f13aa5e-1f09-4188-af19-a84ba5538b10 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['b06e62c3-67bb-4248-8ca7-8eec12bdd5e1', '19b93206-6bbf-441b-abe9-609f462663ba']
Dec 02 10:13:33 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:13:33 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:13:33 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:13:33 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:13:33 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:13:33 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 4d3a6ad7-b49d-4fb4-8024-a6f6a362305d (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:13:33 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 4d3a6ad7-b49d-4fb4-8024-a6f6a362305d (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:13:33 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 4d3a6ad7-b49d-4fb4-8024-a6f6a362305d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:13:33 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:13:33 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:13:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v536: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 24 KiB/s wr, 33 op/s
Dec 02 10:13:33 np0005541914.localdomain sudo[323593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:13:33 np0005541914.localdomain sudo[323593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:13:33 np0005541914.localdomain sudo[323593]: pam_unix(sudo:session): session closed for user root
Dec 02 10:13:33 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:33.506 2 INFO neutron.agent.securitygroups_rpc [None req-af664360-5183-46be-8a67-9553906db0ca 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['19b93206-6bbf-441b-abe9-609f462663ba']
Dec 02 10:13:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:13:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:13:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:13:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:13:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:13:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19232 "" "Go-http-client/1.1"
Dec 02 10:13:33 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8", "format": "json"}]: dispatch
Dec 02 10:13:33 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:13:33 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:13:33 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:13:33 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:13:33 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:33.931 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:13:34 np0005541914.localdomain podman[323611]: 2025-12-02 10:13:34.086981423 +0000 UTC m=+0.085953392 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 02 10:13:34 np0005541914.localdomain podman[323611]: 2025-12-02 10:13:34.100151898 +0000 UTC m=+0.099123897 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 02 10:13:34 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:13:35 np0005541914.localdomain ceph-mon[301710]: pgmap v536: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 24 KiB/s wr, 33 op/s
Dec 02 10:13:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v537: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 24 KiB/s wr, 33 op/s
Dec 02 10:13:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:35.534 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:35.536 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:13:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:35.537 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:13:36 np0005541914.localdomain ceph-mon[301710]: pgmap v537: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 24 KiB/s wr, 33 op/s
Dec 02 10:13:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:36.324 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:36 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:36.627 2 INFO neutron.agent.securitygroups_rpc [None req-3559f9f7-1434-4371-a7c8-42d18644b0ee 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['5ce035be-6b85-468c-9f45-e514c3373f72']
Dec 02 10:13:36 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "bbc0db63-e14e-46b1-8a2c-1e2c8a265a54", "format": "json"}]: dispatch
Dec 02 10:13:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bbc0db63-e14e-46b1-8a2c-1e2c8a265a54, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:bbc0db63-e14e-46b1-8a2c-1e2c8a265a54, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:13:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:13:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:13:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:13:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:13:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:13:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v538: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 23 KiB/s wr, 31 op/s
Dec 02 10:13:37 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:13:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:13:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:13:37 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "bbc0db63-e14e-46b1-8a2c-1e2c8a265a54", "format": "json"}]: dispatch
Dec 02 10:13:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:13:37 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:37.971 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:56:e4 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d9b4da99-dc68-46c9-bcf0-a3cfe207d767, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3d63b6f3-67ae-4c21-b56a-394abd9240e9) old=Port_Binding(mac=['fa:16:3e:31:56:e4 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55499ea7-fec3-45ce-8fdc-4c408cd7abf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ac3f69b39e24601806d0f601335ff31', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:13:37 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:37.973 159483 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3d63b6f3-67ae-4c21-b56a-394abd9240e9 in datapath 55499ea7-fec3-45ce-8fdc-4c408cd7abf9 updated
Dec 02 10:13:37 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:37.976 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55499ea7-fec3-45ce-8fdc-4c408cd7abf9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:13:37 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:37.977 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf4aa1e-d95c-4ab9-a7ee-7369d230f4ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:13:38 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:13:38 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2836928614' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:38 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:13:38 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2836928614' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:38 np0005541914.localdomain ceph-mon[301710]: pgmap v538: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 23 KiB/s wr, 31 op/s
Dec 02 10:13:38 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2836928614' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:38 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2836928614' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:38.933 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v539: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 26 KiB/s wr, 37 op/s
Dec 02 10:13:39 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:39.413 2 INFO neutron.agent.securitygroups_rpc [None req-f1fb19ca-fe9a-41d0-b171-21af469f3b04 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['5ce035be-6b85-468c-9f45-e514c3373f72', '4635549b-8be4-4094-becd-47d2d3f392be', '4dd0e6ef-da7b-4d17-b1c7-4a0b0fd81445']
Dec 02 10:13:39 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:39.969 2 INFO neutron.agent.securitygroups_rpc [None req-1b71adfa-49cf-47f5-a7a4-715d1b19b4b9 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['4635549b-8be4-4094-becd-47d2d3f392be', '4dd0e6ef-da7b-4d17-b1c7-4a0b0fd81445']
Dec 02 10:13:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "bbc0db63-e14e-46b1-8a2c-1e2c8a265a54_19841362-7310-4db1-9177-f7698f0587e5", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bbc0db63-e14e-46b1-8a2c-1e2c8a265a54_19841362-7310-4db1-9177-f7698f0587e5, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 02 10:13:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 02 10:13:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bbc0db63-e14e-46b1-8a2c-1e2c8a265a54_19841362-7310-4db1-9177-f7698f0587e5, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "bbc0db63-e14e-46b1-8a2c-1e2c8a265a54", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bbc0db63-e14e-46b1-8a2c-1e2c8a265a54, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 02 10:13:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 02 10:13:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:bbc0db63-e14e-46b1-8a2c-1e2c8a265a54, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:40 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:13:40.539 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:13:40 np0005541914.localdomain ceph-mon[301710]: pgmap v539: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 26 KiB/s wr, 37 op/s
Dec 02 10:13:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v540: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 6.7 KiB/s rd, 14 KiB/s wr, 9 op/s
Dec 02 10:13:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:41.364 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:41 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "bbc0db63-e14e-46b1-8a2c-1e2c8a265a54_19841362-7310-4db1-9177-f7698f0587e5", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:41 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "bbc0db63-e14e-46b1-8a2c-1e2c8a265a54", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:42 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:13:42.059 2 INFO neutron.agent.securitygroups_rpc [None req-0f8f5643-0b03-43e9-aad8-6bac530a8f71 8a48cd892c354d1695f4e180869e6d08 4ac3f69b39e24601806d0f601335ff31 - - default default] Security group member updated ['a05fa096-2813-49c8-a900-5ab13174ee5a']
Dec 02 10:13:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:13:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:13:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:13:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:13:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:13:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:13:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:13:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:13:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:13:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:13:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:13:42 np0005541914.localdomain ceph-mon[301710]: pgmap v540: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 6.7 KiB/s rd, 14 KiB/s wr, 9 op/s
Dec 02 10:13:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v541: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 25 KiB/s wr, 16 op/s
Dec 02 10:13:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8_3ce82e88-98cf-4623-9819-20bc79fd24ff", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8_3ce82e88-98cf-4623-9819-20bc79fd24ff, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 02 10:13:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 02 10:13:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8_3ce82e88-98cf-4623-9819-20bc79fd24ff, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 02 10:13:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 02 10:13:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:43.934 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:44 np0005541914.localdomain ceph-mon[301710]: pgmap v541: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 25 KiB/s wr, 16 op/s
Dec 02 10:13:44 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8_3ce82e88-98cf-4623-9819-20bc79fd24ff", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:44 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "2b9fdfdd-4f6a-4e8b-9cca-e9a879aa25b8", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v542: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 17 KiB/s wr, 16 op/s
Dec 02 10:13:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:46.404 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "75cce5b0-a115-45be-bdca-5a004bb97c21_5c7747e5-6063-411a-b429-32a057c35acd", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:75cce5b0-a115-45be-bdca-5a004bb97c21_5c7747e5-6063-411a-b429-32a057c35acd, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 02 10:13:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 02 10:13:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:75cce5b0-a115-45be-bdca-5a004bb97c21_5c7747e5-6063-411a-b429-32a057c35acd, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "75cce5b0-a115-45be-bdca-5a004bb97c21", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:75cce5b0-a115-45be-bdca-5a004bb97c21, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:46 np0005541914.localdomain ceph-mon[301710]: pgmap v542: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 17 KiB/s wr, 16 op/s
Dec 02 10:13:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 02 10:13:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 02 10:13:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:75cce5b0-a115-45be-bdca-5a004bb97c21, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v543: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 17 KiB/s wr, 16 op/s
Dec 02 10:13:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:13:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:13:47 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3643059818' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:13:47 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3643059818' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "75cce5b0-a115-45be-bdca-5a004bb97c21_5c7747e5-6063-411a-b429-32a057c35acd", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "75cce5b0-a115-45be-bdca-5a004bb97c21", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:47 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3643059818' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:47 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3643059818' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e222 e222: 6 total, 6 up, 6 in
Dec 02 10:13:48 np0005541914.localdomain ceph-mon[301710]: pgmap v543: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 17 KiB/s wr, 16 op/s
Dec 02 10:13:48 np0005541914.localdomain ceph-mon[301710]: osdmap e222: 6 total, 6 up, 6 in
Dec 02 10:13:48 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e223 e223: 6 total, 6 up, 6 in
Dec 02 10:13:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:48.940 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v546: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 71 KiB/s wr, 36 op/s
Dec 02 10:13:49 np0005541914.localdomain ceph-mon[301710]: osdmap e223: 6 total, 6 up, 6 in
Dec 02 10:13:50 np0005541914.localdomain ceph-mon[301710]: pgmap v546: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 71 KiB/s wr, 36 op/s
Dec 02 10:13:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "cab3f864-d18a-47fe-ac99-2bece590c4f2_66e86b61-d3e8-4024-bc38-f520916b3578", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cab3f864-d18a-47fe-ac99-2bece590c4f2_66e86b61-d3e8-4024-bc38-f520916b3578, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 02 10:13:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 02 10:13:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cab3f864-d18a-47fe-ac99-2bece590c4f2_66e86b61-d3e8-4024-bc38-f520916b3578, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "cab3f864-d18a-47fe-ac99-2bece590c4f2", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cab3f864-d18a-47fe-ac99-2bece590c4f2, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 02 10:13:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 02 10:13:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:cab3f864-d18a-47fe-ac99-2bece590c4f2, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v547: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 54 KiB/s wr, 25 op/s
Dec 02 10:13:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:51.455 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:51 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e224 e224: 6 total, 6 up, 6 in
Dec 02 10:13:51 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4272430941' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:51 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4272430941' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:51 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "cab3f864-d18a-47fe-ac99-2bece590c4f2_66e86b61-d3e8-4024-bc38-f520916b3578", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:51 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "cab3f864-d18a-47fe-ac99-2bece590c4f2", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:51 np0005541914.localdomain ceph-mon[301710]: osdmap e224: 6 total, 6 up, 6 in
Dec 02 10:13:51 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:13:51 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2051237495' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:51 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:13:51 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2051237495' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:13:52 np0005541914.localdomain ceph-mon[301710]: pgmap v547: 177 pgs: 177 active+clean; 201 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 54 KiB/s wr, 25 op/s
Dec 02 10:13:52 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2051237495' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:52 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2051237495' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:13:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:13:52 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:13:53 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:13:53 np0005541914.localdomain podman[323633]: 2025-12-02 10:13:53.090279298 +0000 UTC m=+0.091662888 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 02 10:13:53 np0005541914.localdomain podman[323634]: 2025-12-02 10:13:53.144844484 +0000 UTC m=+0.142946293 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:13:53 np0005541914.localdomain podman[323634]: 2025-12-02 10:13:53.156887644 +0000 UTC m=+0.154989453 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:13:53 np0005541914.localdomain systemd[1]: tmp-crun.FRORne.mount: Deactivated successfully.
Dec 02 10:13:53 np0005541914.localdomain podman[323635]: 2025-12-02 10:13:53.203912209 +0000 UTC m=+0.197647204 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:13:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v549: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 116 KiB/s rd, 103 KiB/s wr, 165 op/s
Dec 02 10:13:53 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:13:53 np0005541914.localdomain podman[323635]: 2025-12-02 10:13:53.245037533 +0000 UTC m=+0.238772518 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Dec 02 10:13:53 np0005541914.localdomain podman[323641]: 2025-12-02 10:13:53.256890747 +0000 UTC m=+0.243086681 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 02 10:13:53 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:13:53 np0005541914.localdomain podman[323633]: 2025-12-02 10:13:53.279082639 +0000 UTC m=+0.280466189 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 02 10:13:53 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:13:53 np0005541914.localdomain podman[323641]: 2025-12-02 10:13:53.299236648 +0000 UTC m=+0.285432682 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec 02 10:13:53 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:13:53 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3242656336' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:53 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3242656336' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:53.947 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "71fe3ff3-77b1-42b9-a13c-7c107bdd326d_d24362ed-a55a-4034-ba1b-811e27325111", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:71fe3ff3-77b1-42b9-a13c-7c107bdd326d_d24362ed-a55a-4034-ba1b-811e27325111, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 02 10:13:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 02 10:13:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:71fe3ff3-77b1-42b9-a13c-7c107bdd326d_d24362ed-a55a-4034-ba1b-811e27325111, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "71fe3ff3-77b1-42b9-a13c-7c107bdd326d", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:71fe3ff3-77b1-42b9-a13c-7c107bdd326d, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp'
Dec 02 10:13:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta.tmp' to config b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675/.meta'
Dec 02 10:13:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:71fe3ff3-77b1-42b9-a13c-7c107bdd326d, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:54 np0005541914.localdomain ceph-mon[301710]: pgmap v549: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 116 KiB/s rd, 103 KiB/s wr, 165 op/s
Dec 02 10:13:54 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2476172005' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:13:54 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2476172005' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:13:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v550: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 94 KiB/s rd, 84 KiB/s wr, 134 op/s
Dec 02 10:13:55 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "71fe3ff3-77b1-42b9-a13c-7c107bdd326d_d24362ed-a55a-4034-ba1b-811e27325111", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:55 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "snap_name": "71fe3ff3-77b1-42b9-a13c-7c107bdd326d", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:56.460 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:56 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e225 e225: 6 total, 6 up, 6 in
Dec 02 10:13:56 np0005541914.localdomain ceph-mon[301710]: pgmap v550: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 94 KiB/s rd, 84 KiB/s wr, 134 op/s
Dec 02 10:13:56 np0005541914.localdomain ceph-mon[301710]: osdmap e225: 6 total, 6 up, 6 in
Dec 02 10:13:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v552: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 71 KiB/s rd, 23 KiB/s wr, 98 op/s
Dec 02 10:13:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:13:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "format": "json"}]: dispatch
Dec 02 10:13:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:afb0a218-82d6-4848-bc26-a77f5d927675, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:13:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:afb0a218-82d6-4848-bc26-a77f5d927675, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:13:57 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:13:57.609+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'afb0a218-82d6-4848-bc26-a77f5d927675' of type subvolume
Dec 02 10:13:57 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'afb0a218-82d6-4848-bc26-a77f5d927675' of type subvolume
Dec 02 10:13:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/afb0a218-82d6-4848-bc26-a77f5d927675'' moved to trashcan
Dec 02 10:13:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:13:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:afb0a218-82d6-4848-bc26-a77f5d927675, vol_name:cephfs) < ""
Dec 02 10:13:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e226 e226: 6 total, 6 up, 6 in
Dec 02 10:13:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:13:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:13:58 np0005541914.localdomain podman[323716]: 2025-12-02 10:13:58.076932592 +0000 UTC m=+0.079949328 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., release=1755695350, io.openshift.expose-services=)
Dec 02 10:13:58 np0005541914.localdomain podman[323716]: 2025-12-02 10:13:58.088517128 +0000 UTC m=+0.091533864 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., release=1755695350, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, name=ubi9-minimal)
Dec 02 10:13:58 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:13:58 np0005541914.localdomain podman[323715]: 2025-12-02 10:13:58.182816314 +0000 UTC m=+0.187371967 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:13:58 np0005541914.localdomain podman[323715]: 2025-12-02 10:13:58.219912675 +0000 UTC m=+0.224468338 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:13:58 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:13:58 np0005541914.localdomain ceph-mon[301710]: pgmap v552: 177 pgs: 177 active+clean; 201 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 71 KiB/s rd, 23 KiB/s wr, 98 op/s
Dec 02 10:13:58 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "format": "json"}]: dispatch
Dec 02 10:13:58 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "afb0a218-82d6-4848-bc26-a77f5d927675", "force": true, "format": "json"}]: dispatch
Dec 02 10:13:58 np0005541914.localdomain ceph-mon[301710]: osdmap e226: 6 total, 6 up, 6 in
Dec 02 10:13:58 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e227 e227: 6 total, 6 up, 6 in
Dec 02 10:13:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:13:58.987 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:13:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v555: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 56 KiB/s wr, 29 op/s
Dec 02 10:13:59 np0005541914.localdomain ceph-mon[301710]: osdmap e227: 6 total, 6 up, 6 in
Dec 02 10:14:00 np0005541914.localdomain ceph-mon[301710]: pgmap v555: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 56 KiB/s wr, 29 op/s
Dec 02 10:14:00 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2655722515' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:00 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2655722515' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:14:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4053296986' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:14:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4053296986' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v556: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 56 KiB/s wr, 29 op/s
Dec 02 10:14:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:01.496 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1778989957' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1778989957' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4053296986' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/4053296986' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:14:02 np0005541914.localdomain ceph-mon[301710]: pgmap v556: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 56 KiB/s wr, 29 op/s
Dec 02 10:14:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1151473184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1151473184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:14:03.183 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:14:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:14:03.183 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:14:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:14:03.184 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:14:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v557: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 104 KiB/s rd, 58 KiB/s wr, 150 op/s
Dec 02 10:14:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:14:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:14:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:14:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:14:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:14:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19239 "" "Go-http-client/1.1"
Dec 02 10:14:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e228 e228: 6 total, 6 up, 6 in
Dec 02 10:14:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:04.029 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:14:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2266163141' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:14:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2266163141' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:04 np0005541914.localdomain ceph-mon[301710]: pgmap v557: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 104 KiB/s rd, 58 KiB/s wr, 150 op/s
Dec 02 10:14:04 np0005541914.localdomain ceph-mon[301710]: osdmap e228: 6 total, 6 up, 6 in
Dec 02 10:14:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2495621874' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2495621874' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2266163141' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2266163141' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2984015493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:14:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:14:05 np0005541914.localdomain podman[323757]: 2025-12-02 10:14:05.074979989 +0000 UTC m=+0.077186163 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible)
Dec 02 10:14:05 np0005541914.localdomain podman[323757]: 2025-12-02 10:14:05.107431685 +0000 UTC m=+0.109637929 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec 02 10:14:05 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:14:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v559: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 51 KiB/s wr, 130 op/s
Dec 02 10:14:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:05.543 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:14:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3165726944' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:14:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:06.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:14:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:06.542 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:06.556 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:14:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:06.557 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:14:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:06.557 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:14:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:06.557 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:14:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:06.558 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:14:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e229 e229: 6 total, 6 up, 6 in
Dec 02 10:14:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:14:06
Dec 02 10:14:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:14:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:14:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['vms', 'images', 'volumes', '.mgr', 'manila_data', 'backups', 'manila_metadata']
Dec 02 10:14:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:14:06 np0005541914.localdomain ceph-mon[301710]: pgmap v559: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 51 KiB/s wr, 130 op/s
Dec 02 10:14:06 np0005541914.localdomain ceph-mon[301710]: osdmap e229: 6 total, 6 up, 6 in
Dec 02 10:14:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:14:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/793956818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:14:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:07.021 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v561: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 5.2 KiB/s wr, 99 op/s
Dec 02 10:14:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:07.262 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:14:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:07.265 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11483MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:14:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:07.265 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:14:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:07.266 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0007099298576214405 of space, bias 4.0, pg target 0.5651041666666666 quantized to 16 (current 16)
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:14:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:14:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:07.357 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:14:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:07.357 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:14:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:07.380 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3658023958' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3658023958' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e230 e230: 6 total, 6 up, 6 in
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3646158002' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:14:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:07.818 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:14:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:07.826 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:14:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:07.841 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:14:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:07.844 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:14:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:07.844 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/793956818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3658023958' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3658023958' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: osdmap e230: 6 total, 6 up, 6 in
Dec 02 10:14:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3646158002' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:14:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:14:08 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/219637169' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:14:08 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/219637169' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:08.845 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:14:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:08.846 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:14:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:08.846 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:14:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:08.846 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:14:08 np0005541914.localdomain ceph-mon[301710]: pgmap v561: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 5.2 KiB/s wr, 99 op/s
Dec 02 10:14:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/219637169' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/219637169' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:09.033 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v563: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 27 KiB/s wr, 89 op/s
Dec 02 10:14:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:14:09 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1054534984' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:14:09 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1054534984' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1054534984' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1054534984' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e231 e231: 6 total, 6 up, 6 in
Dec 02 10:14:10 np0005541914.localdomain ceph-mon[301710]: pgmap v563: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 27 KiB/s wr, 89 op/s
Dec 02 10:14:10 np0005541914.localdomain ceph-mon[301710]: osdmap e231: 6 total, 6 up, 6 in
Dec 02 10:14:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v565: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 27 KiB/s wr, 89 op/s
Dec 02 10:14:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:11.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:14:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:11.529 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:14:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:11.529 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:14:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:11.556 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:14:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:11.556 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:14:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:11.564 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/232663665' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/232663665' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:14:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:14:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:14:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:14:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:14:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:14:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:14:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:14:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:14:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
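[editor's note] The openstack_network_exporter errors above mean the exporter cannot find control sockets for ovn-northd and ovsdb-server, and that ovs-vswitchd has no userspace (netdev) datapath to answer the pmd-perf/pmd-rxq calls; on a compute node running only ovn-controller with a kernel datapath this is expected noise. A rough pre-flight check along these lines can confirm which daemons expose a control socket; the directories below are the usual defaults and are assumptions, not read from the exporter's configuration.

    # Rough check for OVS/OVN control sockets; paths are common defaults, assumed.
    import glob

    patterns = {
        'ovs-vswitchd':   '/var/run/openvswitch/ovs-vswitchd.*.ctl',
        'ovsdb-server':   '/var/run/openvswitch/ovsdb-server.*.ctl',
        'ovn-northd':     '/var/run/ovn/ovn-northd.*.ctl',
        'ovn-controller': '/var/run/ovn/ovn-controller.*.ctl',
    }

    for daemon, pattern in patterns.items():
        sockets = glob.glob(pattern)
        state = ', '.join(sockets) if sockets else 'no control socket found'
        print(f'{daemon:15s} {state}')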
Dec 02 10:14:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:14:12 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:14:12Z|00231|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 02 10:14:13 np0005541914.localdomain ceph-mon[301710]: pgmap v565: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 27 KiB/s wr, 89 op/s
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < ""
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/.meta.tmp'
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/.meta.tmp' to config b'/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/.meta'
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < ""
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "format": "json"}]: dispatch
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < ""
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v566: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.3 MiB/s rd, 27 KiB/s wr, 182 op/s
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < ""
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "235a8d4c-ab29-4d51-b38b-3a594da63103", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:235a8d4c-ab29-4d51-b38b-3a594da63103, vol_name:cephfs) < ""
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/235a8d4c-ab29-4d51-b38b-3a594da63103/.meta.tmp'
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/235a8d4c-ab29-4d51-b38b-3a594da63103/.meta.tmp' to config b'/volumes/_nogroup/235a8d4c-ab29-4d51-b38b-3a594da63103/.meta'
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:235a8d4c-ab29-4d51-b38b-3a594da63103, vol_name:cephfs) < ""
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "235a8d4c-ab29-4d51-b38b-3a594da63103", "format": "json"}]: dispatch
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:235a8d4c-ab29-4d51-b38b-3a594da63103, vol_name:cephfs) < ""
Dec 02 10:14:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:235a8d4c-ab29-4d51-b38b-3a594da63103, vol_name:cephfs) < ""
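[editor's note] The paired 'fs subvolume create' / 'fs subvolume getpath' calls above are the CephFS share backend provisioning shares as ceph-mgr "volumes" module subvolumes. The same two mgr commands can be driven directly with the ceph CLI, as sketched below; the subvolume name and size are placeholders, not the UUIDs from this log.

    # Minimal sketch of the same provisioning flow via the ceph CLI.
    import subprocess

    VOL = 'cephfs'
    SUB = 'example-share'          # hypothetical subvolume name
    SIZE = 1 * 1024 ** 3           # 1 GiB, matching the logged creations

    def ceph(*args):
        return subprocess.run(('ceph', *args), check=True,
                              capture_output=True, text=True).stdout

    # fs subvolume create: creates /volumes/_nogroup/<SUB>/... with the given quota.
    ceph('fs', 'subvolume', 'create', VOL, SUB,
         '--size', str(SIZE), '--namespace-isolated', '--mode', '0755')

    # fs subvolume getpath: returns the export path handed back to the caller.
    path = ceph('fs', 'subvolume', 'getpath', VOL, SUB).strip()
    print('subvolume path:', path)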
Dec 02 10:14:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:13.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:14:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:13.555 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:14:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:13.556 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
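[editor's note] The nova-compute DEBUG lines above are oslo.service periodic tasks firing on their timers (_heal_instance_info_cache, _poll_rebooting_instances, _sync_scheduler_instance_info, _reclaim_queued_deletes); _reclaim_queued_deletes returns immediately because reclaim_instance_interval is not set above 0. A minimal sketch of how such tasks are declared with the oslo_service decorator, using a made-up manager class:

    # Sketch only: a hypothetical manager using oslo.service periodic tasks,
    # which produce "Running periodic task ..." DEBUG lines like the ones above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class ExampleManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_example_cache(self, context):
            # Runs every 60 seconds once the periodic task loop is started.
            pass

        @periodic_task.periodic_task(spacing=0)
        def _reclaim_example_deletes(self, context):
            # spacing=0 means "run at the default interval"; the real nova task
            # additionally skips itself when its config interval is <= 0.
            pass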
Dec 02 10:14:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:14.036 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:14 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:14 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "format": "json"}]: dispatch
Dec 02 10:14:15 np0005541914.localdomain ceph-mon[301710]: pgmap v566: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.3 MiB/s rd, 27 KiB/s wr, 182 op/s
Dec 02 10:14:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "235a8d4c-ab29-4d51-b38b-3a594da63103", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "235a8d4c-ab29-4d51-b38b-3a594da63103", "format": "json"}]: dispatch
Dec 02 10:14:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v567: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 22 KiB/s wr, 149 op/s
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:14:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:14:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
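[editor's note] The block of ceilometer "Skip pollster ..., no resources found this cycle" lines means the compute agent's discovery returned no instances to meter this cycle, consistent with nova finding no instances to heal above. The skip logic amounts to something like the sketch below; the names are illustrative, not ceilometer's actual classes.

    # Illustrative sketch of a polling cycle that skips pollsters when discovery
    # yields no resources; not ceilometer's real implementation.
    import logging

    LOG = logging.getLogger(__name__)

    def publish(sample):
        LOG.debug("publishing %s", sample)   # hand off to the notification bus

    def poll_and_notify(pollsters, discover_resources):
        resources = discover_resources()     # e.g. libvirt domains on this host
        for name, pollster in pollsters.items():
            if not resources:
                LOG.debug("Skip pollster %s, no resources found this cycle", name)
                continue
            for sample in pollster.get_samples(resources):
                publish(sample)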
Dec 02 10:14:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "auth_id": "tempest-cephx-id-185695304", "tenant_id": "5974c1b38c02486098e277d58b491dac", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:14:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-185695304, format:json, prefix:fs subvolume authorize, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, tenant_id:5974c1b38c02486098e277d58b491dac, vol_name:cephfs) < ""
Dec 02 10:14:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} v 0)
Dec 02 10:14:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} : dispatch
Dec 02 10:14:16 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-185695304 with tenant 5974c1b38c02486098e277d58b491dac
Dec 02 10:14:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-185695304", "caps": ["mds", "allow rw path=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719", "osd", "allow rw pool=manila_data namespace=fsvolumens_10335e0e-f484-4bf5-b0cc-29a04393ec4e", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:14:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-185695304", "caps": ["mds", "allow rw path=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719", "osd", "allow rw pool=manila_data namespace=fsvolumens_10335e0e-f484-4bf5-b0cc-29a04393ec4e", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:14:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-185695304, format:json, prefix:fs subvolume authorize, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, tenant_id:5974c1b38c02486098e277d58b491dac, vol_name:cephfs) < ""
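[editor's note] The 'fs subvolume authorize' call above makes the mgr create (or update) a cephx client scoped to a single subvolume: an 'auth get' to check for an existing entity, then 'auth get-or-create' with MDS caps restricted to the subvolume path, OSD caps restricted to the data pool namespace, and read-only MON caps, exactly as logged. The equivalent done from the ceph CLI would look roughly like the sketch below; the auth id and subvolume name are placeholders.

    # Rough equivalent of the logged authorize flow using the ceph CLI;
    # auth id and subvolume name are placeholders.
    import subprocess

    VOL, SUB, AUTH_ID = 'cephfs', 'example-share', 'example-cephx-id'

    def ceph(*args):
        return subprocess.run(('ceph', *args), check=True,
                              capture_output=True, text=True).stdout

    # High-level mgr command, as issued in the log; prints the cephx key.
    key = ceph('fs', 'subvolume', 'authorize', VOL, SUB, AUTH_ID,
               '--access_level', 'rw')
    print('cephx key for', AUTH_ID, ':', key.strip())

    # Under the hood the mgr runs auth get / auth get-or-create with caps scoped
    # to the subvolume (values as shown in the log):
    #   mds 'allow rw path=/volumes/_nogroup/<subvolume>/<uuid>'
    #   osd 'allow rw pool=manila_data namespace=fsvolumens_<subvolume>'
    #   mon 'allow r'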
Dec 02 10:14:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:16.596 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e232 e232: 6 total, 6 up, 6 in
Dec 02 10:14:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "feb9c965-7b4c-4671-ab34-1817317dacc0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:feb9c965-7b4c-4671-ab34-1817317dacc0, vol_name:cephfs) < ""
Dec 02 10:14:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/feb9c965-7b4c-4671-ab34-1817317dacc0/.meta.tmp'
Dec 02 10:14:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/feb9c965-7b4c-4671-ab34-1817317dacc0/.meta.tmp' to config b'/volumes/_nogroup/feb9c965-7b4c-4671-ab34-1817317dacc0/.meta'
Dec 02 10:14:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:feb9c965-7b4c-4671-ab34-1817317dacc0, vol_name:cephfs) < ""
Dec 02 10:14:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "feb9c965-7b4c-4671-ab34-1817317dacc0", "format": "json"}]: dispatch
Dec 02 10:14:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:feb9c965-7b4c-4671-ab34-1817317dacc0, vol_name:cephfs) < ""
Dec 02 10:14:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:feb9c965-7b4c-4671-ab34-1817317dacc0, vol_name:cephfs) < ""
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: pgmap v567: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 22 KiB/s wr, 149 op/s
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3676005525' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} : dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-185695304", "caps": ["mds", "allow rw path=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719", "osd", "allow rw pool=manila_data namespace=fsvolumens_10335e0e-f484-4bf5-b0cc-29a04393ec4e", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-185695304", "caps": ["mds", "allow rw path=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719", "osd", "allow rw pool=manila_data namespace=fsvolumens_10335e0e-f484-4bf5-b0cc-29a04393ec4e", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-185695304", "caps": ["mds", "allow rw path=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719", "osd", "allow rw pool=manila_data namespace=fsvolumens_10335e0e-f484-4bf5-b0cc-29a04393ec4e", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: osdmap e232: 6 total, 6 up, 6 in
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3935642216' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "auth_id": "tempest-cephx-id-185695304", "format": "json"}]: dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-185695304, format:json, prefix:fs subvolume deauthorize, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < ""
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v569: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.4 KiB/s wr, 81 op/s
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} v 0)
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} : dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-185695304"} v 0)
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-185695304"} : dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-185695304, format:json, prefix:fs subvolume deauthorize, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < ""
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "auth_id": "tempest-cephx-id-185695304", "format": "json"}]: dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-185695304, format:json, prefix:fs subvolume evict, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < ""
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-185695304, client_metadata.root=/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e/555d1535-2ead-4b78-97f7-0c5bf5ade719
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-185695304, format:json, prefix:fs subvolume evict, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < ""
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "format": "json"}]: dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:17 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:17.448+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '10335e0e-f484-4bf5-b0cc-29a04393ec4e' of type subvolume
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '10335e0e-f484-4bf5-b0cc-29a04393ec4e' of type subvolume
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < ""
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/10335e0e-f484-4bf5-b0cc-29a04393ec4e'' moved to trashcan
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:10335e0e-f484-4bf5-b0cc-29a04393ec4e, vol_name:cephfs) < ""
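[editor's note] The 'fs clone status' probe at 10:14:17 fails with EOPNOTSUPP because the share is a plain subvolume rather than a clone (the caller treats that error as "nothing to wait for"), and the following 'fs subvolume rm --force' does not delete data inline: the subvolume directory is moved into the volume's trash and a queued async job purges it in the background, which is why the rm returns quickly here. Driven from the CLI, that teardown looks roughly like the sketch below; the subvolume name is a placeholder.

    # Sketch of the teardown sequence seen above, via the ceph CLI.
    import subprocess

    VOL, SUB = 'cephfs', 'example-share'

    def ceph(*args):
        return subprocess.run(('ceph', *args), check=True,
                              capture_output=True, text=True).stdout

    try:
        ceph('fs', 'clone', 'status', VOL, SUB)
    except subprocess.CalledProcessError:
        # EOPNOTSUPP: plain subvolumes are not clones, so there is no
        # in-progress clone to wait for.
        pass

    # Moves the subvolume into the trash; an mgr background job purges it later.
    ceph('fs', 'subvolume', 'rm', VOL, SUB, '--force')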
Dec 02 10:14:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e232 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ed816090-7c9e-4964-a11f-502383746c0b", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:ed816090-7c9e-4964-a11f-502383746c0b, vol_name:cephfs) < ""
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ed816090-7c9e-4964-a11f-502383746c0b/.meta.tmp'
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ed816090-7c9e-4964-a11f-502383746c0b/.meta.tmp' to config b'/volumes/_nogroup/ed816090-7c9e-4964-a11f-502383746c0b/.meta'
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:4294967296, sub_name:ed816090-7c9e-4964-a11f-502383746c0b, vol_name:cephfs) < ""
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ed816090-7c9e-4964-a11f-502383746c0b", "format": "json"}]: dispatch
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ed816090-7c9e-4964-a11f-502383746c0b, vol_name:cephfs) < ""
Dec 02 10:14:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ed816090-7c9e-4964-a11f-502383746c0b, vol_name:cephfs) < ""
Dec 02 10:14:18 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "auth_id": "tempest-cephx-id-185695304", "tenant_id": "5974c1b38c02486098e277d58b491dac", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:14:18 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "feb9c965-7b4c-4671-ab34-1817317dacc0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:18 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "feb9c965-7b4c-4671-ab34-1817317dacc0", "format": "json"}]: dispatch
Dec 02 10:14:18 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-185695304", "format": "json"} : dispatch
Dec 02 10:14:18 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-185695304"} : dispatch
Dec 02 10:14:18 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-185695304"} : dispatch
Dec 02 10:14:18 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-185695304"}]': finished
Dec 02 10:14:18 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:18 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e233 e233: 6 total, 6 up, 6 in
Dec 02 10:14:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:19.039 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "auth_id": "tempest-cephx-id-185695304", "format": "json"}]: dispatch
Dec 02 10:14:19 np0005541914.localdomain ceph-mon[301710]: pgmap v569: 177 pgs: 177 active+clean; 202 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.4 KiB/s wr, 81 op/s
Dec 02 10:14:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "auth_id": "tempest-cephx-id-185695304", "format": "json"}]: dispatch
Dec 02 10:14:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "format": "json"}]: dispatch
Dec 02 10:14:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "10335e0e-f484-4bf5-b0cc-29a04393ec4e", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ed816090-7c9e-4964-a11f-502383746c0b", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ed816090-7c9e-4964-a11f-502383746c0b", "format": "json"}]: dispatch
Dec 02 10:14:19 np0005541914.localdomain ceph-mon[301710]: osdmap e233: 6 total, 6 up, 6 in
Dec 02 10:14:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/570460872' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/570460872' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v571: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 2.8 MiB/s wr, 157 op/s
Dec 02 10:14:19 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e234 e234: 6 total, 6 up, 6 in
Dec 02 10:14:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "feb9c965-7b4c-4671-ab34-1817317dacc0", "format": "json"}]: dispatch
Dec 02 10:14:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:feb9c965-7b4c-4671-ab34-1817317dacc0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:feb9c965-7b4c-4671-ab34-1817317dacc0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:20 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:20.254+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'feb9c965-7b4c-4671-ab34-1817317dacc0' of type subvolume
Dec 02 10:14:20 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'feb9c965-7b4c-4671-ab34-1817317dacc0' of type subvolume
Dec 02 10:14:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "feb9c965-7b4c-4671-ab34-1817317dacc0", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:feb9c965-7b4c-4671-ab34-1817317dacc0, vol_name:cephfs) < ""
Dec 02 10:14:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/feb9c965-7b4c-4671-ab34-1817317dacc0'' moved to trashcan
Dec 02 10:14:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:14:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:feb9c965-7b4c-4671-ab34-1817317dacc0, vol_name:cephfs) < ""
Dec 02 10:14:20 np0005541914.localdomain ceph-mon[301710]: pgmap v571: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.7 MiB/s rd, 2.8 MiB/s wr, 157 op/s
Dec 02 10:14:20 np0005541914.localdomain ceph-mon[301710]: osdmap e234: 6 total, 6 up, 6 in
Dec 02 10:14:20 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/257893321' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:20 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/257893321' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:20 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e235 e235: 6 total, 6 up, 6 in
Dec 02 10:14:20 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:14:20 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2158575046' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:20 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:14:20 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2158575046' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "590dca3f-4f85-48ff-a801-1b49410a7fa1", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, vol_name:cephfs) < ""
Dec 02 10:14:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/590dca3f-4f85-48ff-a801-1b49410a7fa1/.meta.tmp'
Dec 02 10:14:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/590dca3f-4f85-48ff-a801-1b49410a7fa1/.meta.tmp' to config b'/volumes/_nogroup/590dca3f-4f85-48ff-a801-1b49410a7fa1/.meta'
Dec 02 10:14:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:3221225472, sub_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, vol_name:cephfs) < ""
Dec 02 10:14:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "590dca3f-4f85-48ff-a801-1b49410a7fa1", "format": "json"}]: dispatch
Dec 02 10:14:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, vol_name:cephfs) < ""
Dec 02 10:14:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, vol_name:cephfs) < ""
Dec 02 10:14:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v574: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 4.9 MiB/s wr, 133 op/s
Dec 02 10:14:21 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "feb9c965-7b4c-4671-ab34-1817317dacc0", "format": "json"}]: dispatch
Dec 02 10:14:21 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "feb9c965-7b4c-4671-ab34-1817317dacc0", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:21 np0005541914.localdomain ceph-mon[301710]: osdmap e235: 6 total, 6 up, 6 in
Dec 02 10:14:21 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2158575046' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:21 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2158575046' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:21 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:21.630 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:14:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "590dca3f-4f85-48ff-a801-1b49410a7fa1", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "590dca3f-4f85-48ff-a801-1b49410a7fa1", "format": "json"}]: dispatch
Dec 02 10:14:22 np0005541914.localdomain ceph-mon[301710]: pgmap v574: 177 pgs: 177 active+clean; 249 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 4.9 MiB/s wr, 133 op/s
Dec 02 10:14:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v575: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 158 KiB/s rd, 3.8 MiB/s wr, 241 op/s
Dec 02 10:14:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "235a8d4c-ab29-4d51-b38b-3a594da63103", "format": "json"}]: dispatch
Dec 02 10:14:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:235a8d4c-ab29-4d51-b38b-3a594da63103, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:235a8d4c-ab29-4d51-b38b-3a594da63103, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:23 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '235a8d4c-ab29-4d51-b38b-3a594da63103' of type subvolume
Dec 02 10:14:23 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:23.510+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '235a8d4c-ab29-4d51-b38b-3a594da63103' of type subvolume
Dec 02 10:14:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "235a8d4c-ab29-4d51-b38b-3a594da63103", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:235a8d4c-ab29-4d51-b38b-3a594da63103, vol_name:cephfs) < ""
Dec 02 10:14:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/235a8d4c-ab29-4d51-b38b-3a594da63103'' moved to trashcan
Dec 02 10:14:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:14:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:235a8d4c-ab29-4d51-b38b-3a594da63103, vol_name:cephfs) < ""
Dec 02 10:14:23 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e236 e236: 6 total, 6 up, 6 in
Dec 02 10:14:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:14:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:14:23 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:14:24 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:14:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:24.041 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:24 np0005541914.localdomain podman[323822]: 2025-12-02 10:14:24.08909509 +0000 UTC m=+0.084066635 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 10:14:24 np0005541914.localdomain podman[323822]: 2025-12-02 10:14:24.101753419 +0000 UTC m=+0.096724954 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:14:24 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:14:24 np0005541914.localdomain podman[323821]: 2025-12-02 10:14:24.143935095 +0000 UTC m=+0.145301076 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 02 10:14:24 np0005541914.localdomain podman[323829]: 2025-12-02 10:14:24.195097527 +0000 UTC m=+0.185195892 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:14:24 np0005541914.localdomain podman[323828]: 2025-12-02 10:14:24.247280721 +0000 UTC m=+0.237126368 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 10:14:24 np0005541914.localdomain podman[323829]: 2025-12-02 10:14:24.252794699 +0000 UTC m=+0.242893084 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 02 10:14:24 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:14:24 np0005541914.localdomain podman[323821]: 2025-12-02 10:14:24.277591811 +0000 UTC m=+0.278957732 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 02 10:14:24 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:14:24 np0005541914.localdomain podman[323828]: 2025-12-02 10:14:24.308909864 +0000 UTC m=+0.298755541 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec 02 10:14:24 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:14:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ed816090-7c9e-4964-a11f-502383746c0b", "format": "json"}]: dispatch
Dec 02 10:14:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ed816090-7c9e-4964-a11f-502383746c0b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ed816090-7c9e-4964-a11f-502383746c0b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:24 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:24.484+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ed816090-7c9e-4964-a11f-502383746c0b' of type subvolume
Dec 02 10:14:24 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ed816090-7c9e-4964-a11f-502383746c0b' of type subvolume
Dec 02 10:14:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ed816090-7c9e-4964-a11f-502383746c0b", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ed816090-7c9e-4964-a11f-502383746c0b, vol_name:cephfs) < ""
Dec 02 10:14:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ed816090-7c9e-4964-a11f-502383746c0b'' moved to trashcan
Dec 02 10:14:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:14:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ed816090-7c9e-4964-a11f-502383746c0b, vol_name:cephfs) < ""
Dec 02 10:14:24 np0005541914.localdomain ceph-mon[301710]: pgmap v575: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 158 KiB/s rd, 3.8 MiB/s wr, 241 op/s
Dec 02 10:14:24 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "235a8d4c-ab29-4d51-b38b-3a594da63103", "format": "json"}]: dispatch
Dec 02 10:14:24 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "235a8d4c-ab29-4d51-b38b-3a594da63103", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:24 np0005541914.localdomain ceph-mon[301710]: osdmap e236: 6 total, 6 up, 6 in
Dec 02 10:14:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v577: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 78 KiB/s wr, 140 op/s
Dec 02 10:14:25 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e237 e237: 6 total, 6 up, 6 in
Dec 02 10:14:25 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ed816090-7c9e-4964-a11f-502383746c0b", "format": "json"}]: dispatch
Dec 02 10:14:25 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ed816090-7c9e-4964-a11f-502383746c0b", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:26.672 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:26 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e238 e238: 6 total, 6 up, 6 in
Dec 02 10:14:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f9ec3f6d-7d6e-4cd3-a305-e37d986864dd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, vol_name:cephfs) < ""
Dec 02 10:14:26 np0005541914.localdomain ceph-mon[301710]: pgmap v577: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 78 KiB/s wr, 140 op/s
Dec 02 10:14:26 np0005541914.localdomain ceph-mon[301710]: osdmap e237: 6 total, 6 up, 6 in
Dec 02 10:14:26 np0005541914.localdomain ceph-mon[301710]: osdmap e238: 6 total, 6 up, 6 in
Dec 02 10:14:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f9ec3f6d-7d6e-4cd3-a305-e37d986864dd/.meta.tmp'
Dec 02 10:14:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f9ec3f6d-7d6e-4cd3-a305-e37d986864dd/.meta.tmp' to config b'/volumes/_nogroup/f9ec3f6d-7d6e-4cd3-a305-e37d986864dd/.meta'
Dec 02 10:14:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, vol_name:cephfs) < ""
Dec 02 10:14:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f9ec3f6d-7d6e-4cd3-a305-e37d986864dd", "format": "json"}]: dispatch
Dec 02 10:14:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, vol_name:cephfs) < ""
Dec 02 10:14:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, vol_name:cephfs) < ""
Dec 02 10:14:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v580: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 78 KiB/s wr, 140 op/s
Dec 02 10:14:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:14:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "590dca3f-4f85-48ff-a801-1b49410a7fa1", "format": "json"}]: dispatch
Dec 02 10:14:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:27 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:27.666+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '590dca3f-4f85-48ff-a801-1b49410a7fa1' of type subvolume
Dec 02 10:14:27 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '590dca3f-4f85-48ff-a801-1b49410a7fa1' of type subvolume
Dec 02 10:14:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "590dca3f-4f85-48ff-a801-1b49410a7fa1", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, vol_name:cephfs) < ""
Dec 02 10:14:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/590dca3f-4f85-48ff-a801-1b49410a7fa1'' moved to trashcan
Dec 02 10:14:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:14:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:590dca3f-4f85-48ff-a801-1b49410a7fa1, vol_name:cephfs) < ""
Dec 02 10:14:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e239 e239: 6 total, 6 up, 6 in
Dec 02 10:14:27 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f9ec3f6d-7d6e-4cd3-a305-e37d986864dd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:27 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f9ec3f6d-7d6e-4cd3-a305-e37d986864dd", "format": "json"}]: dispatch
Dec 02 10:14:27 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:28 np0005541914.localdomain ceph-mon[301710]: pgmap v580: 177 pgs: 177 active+clean; 203 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 78 KiB/s wr, 140 op/s
Dec 02 10:14:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "590dca3f-4f85-48ff-a801-1b49410a7fa1", "format": "json"}]: dispatch
Dec 02 10:14:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "590dca3f-4f85-48ff-a801-1b49410a7fa1", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:28 np0005541914.localdomain ceph-mon[301710]: osdmap e239: 6 total, 6 up, 6 in
Dec 02 10:14:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:14:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:14:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:29.043 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:29 np0005541914.localdomain podman[323904]: 2025-12-02 10:14:29.080147586 +0000 UTC m=+0.086132177 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:14:29 np0005541914.localdomain podman[323904]: 2025-12-02 10:14:29.087992447 +0000 UTC m=+0.093977018 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:14:29 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:14:29 np0005541914.localdomain podman[323905]: 2025-12-02 10:14:29.130962398 +0000 UTC m=+0.133969558 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc.)
Dec 02 10:14:29 np0005541914.localdomain podman[323905]: 2025-12-02 10:14:29.143492822 +0000 UTC m=+0.146499962 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible)
Dec 02 10:14:29 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:14:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v582: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 95 KiB/s wr, 107 op/s
Dec 02 10:14:29 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e240 e240: 6 total, 6 up, 6 in
Dec 02 10:14:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f9ec3f6d-7d6e-4cd3-a305-e37d986864dd", "format": "json"}]: dispatch
Dec 02 10:14:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:30 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:30.250+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f9ec3f6d-7d6e-4cd3-a305-e37d986864dd' of type subvolume
Dec 02 10:14:30 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f9ec3f6d-7d6e-4cd3-a305-e37d986864dd' of type subvolume
Dec 02 10:14:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f9ec3f6d-7d6e-4cd3-a305-e37d986864dd", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, vol_name:cephfs) < ""
Dec 02 10:14:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f9ec3f6d-7d6e-4cd3-a305-e37d986864dd'' moved to trashcan
Dec 02 10:14:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:14:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f9ec3f6d-7d6e-4cd3-a305-e37d986864dd, vol_name:cephfs) < ""
Dec 02 10:14:30 np0005541914.localdomain ceph-mon[301710]: pgmap v582: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 95 KiB/s wr, 107 op/s
Dec 02 10:14:30 np0005541914.localdomain ceph-mon[301710]: osdmap e240: 6 total, 6 up, 6 in
Dec 02 10:14:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v584: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 95 KiB/s wr, 107 op/s
Dec 02 10:14:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:31.676 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:31 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f9ec3f6d-7d6e-4cd3-a305-e37d986864dd", "format": "json"}]: dispatch
Dec 02 10:14:31 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f9ec3f6d-7d6e-4cd3-a305-e37d986864dd", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e241 e241: 6 total, 6 up, 6 in
Dec 02 10:14:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:14:32 np0005541914.localdomain ceph-mon[301710]: pgmap v584: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 78 KiB/s rd, 95 KiB/s wr, 107 op/s
Dec 02 10:14:32 np0005541914.localdomain ceph-mon[301710]: osdmap e241: 6 total, 6 up, 6 in
Dec 02 10:14:32 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3513386139' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:32 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3513386139' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v586: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 141 KiB/s rd, 140 KiB/s wr, 196 op/s
Dec 02 10:14:33 np0005541914.localdomain sudo[323943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:14:33 np0005541914.localdomain sudo[323943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:14:33 np0005541914.localdomain sudo[323943]: pam_unix(sudo:session): session closed for user root
Dec 02 10:14:33 np0005541914.localdomain sudo[323961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:14:33 np0005541914.localdomain sudo[323961]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:14:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:14:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:14:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:14:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:14:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:14:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19243 "" "Go-http-client/1.1"
Dec 02 10:14:33 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e242 e242: 6 total, 6 up, 6 in
Dec 02 10:14:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:34.045 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:34 np0005541914.localdomain sudo[323961]: pam_unix(sudo:session): session closed for user root
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:14:34 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev e420e6a0-7dd2-480b-8a7d-89098bda4b33 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:14:34 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev e420e6a0-7dd2-480b-8a7d-89098bda4b33 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:14:34 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event e420e6a0-7dd2-480b-8a7d-89098bda4b33 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:14:34 np0005541914.localdomain sudo[324011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:14:34 np0005541914.localdomain sudo[324011]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:14:34 np0005541914.localdomain sudo[324011]: pam_unix(sudo:session): session closed for user root
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: pgmap v586: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 141 KiB/s rd, 140 KiB/s wr, 196 op/s
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: osdmap e242: 6 total, 6 up, 6 in
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3236740117' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:34 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3236740117' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v588: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 71 KiB/s rd, 54 KiB/s wr, 99 op/s
Dec 02 10:14:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:35.767 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:14:35.767 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:14:35 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:14:35.769 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:14:35 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e243 e243: 6 total, 6 up, 6 in
Dec 02 10:14:35 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:14:36 np0005541914.localdomain podman[324029]: 2025-12-02 10:14:36.084056271 +0000 UTC m=+0.086960853 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 10:14:36 np0005541914.localdomain podman[324029]: 2025-12-02 10:14:36.122895055 +0000 UTC m=+0.125799627 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, container_name=multipathd)
Dec 02 10:14:36 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:14:36 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/.meta.tmp'
Dec 02 10:14:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/.meta.tmp' to config b'/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/.meta'
Dec 02 10:14:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:36 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "format": "json"}]: dispatch
Dec 02 10:14:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:36.705 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:36 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e244 e244: 6 total, 6 up, 6 in
Dec 02 10:14:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:14:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:14:36 np0005541914.localdomain ceph-mon[301710]: pgmap v588: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 71 KiB/s rd, 54 KiB/s wr, 99 op/s
Dec 02 10:14:36 np0005541914.localdomain ceph-mon[301710]: osdmap e243: 6 total, 6 up, 6 in
Dec 02 10:14:36 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:36 np0005541914.localdomain ceph-mon[301710]: osdmap e244: 6 total, 6 up, 6 in
Dec 02 10:14:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:14:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:14:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:14:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:14:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v591: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 81 KiB/s rd, 61 KiB/s wr, 112 op/s
Dec 02 10:14:37 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:14:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:14:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:14:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "format": "json"}]: dispatch
Dec 02 10:14:38 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3031033791' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:14:38 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3031033791' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:14:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:14:39 np0005541914.localdomain ceph-mon[301710]: pgmap v591: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 81 KiB/s rd, 61 KiB/s wr, 112 op/s
Dec 02 10:14:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:39.050 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v592: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 28 KiB/s wr, 94 op/s
Dec 02 10:14:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve49", "tenant_id": "8f75117f8554499b9fbaa9c9062eeeef", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:14:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, tenant_id:8f75117f8554499b9fbaa9c9062eeeef, vol_name:cephfs) < ""
Dec 02 10:14:39 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Dec 02 10:14:39 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 02 10:14:39 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID eve49 with tenant 8f75117f8554499b9fbaa9c9062eeeef
Dec 02 10:14:39 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:14:39 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:14:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve49, format:json, prefix:fs subvolume authorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, tenant_id:8f75117f8554499b9fbaa9c9062eeeef, vol_name:cephfs) < ""
Dec 02 10:14:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 02 10:14:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:14:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:14:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:14:40 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:14:40.771 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:14:40 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:14:40.788 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:14:40Z, description=, device_id=3692a4cb-56a0-4a89-90aa-c2a2654d3e13, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a7dd30>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034a7ddc0>], id=3068ca06-aff6-4755-8b24-5457386fd1c7, ip_allocation=immediate, mac_address=fa:16:3e:58:d3:5a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3470, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:14:40Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:14:41 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:14:41 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:14:41 np0005541914.localdomain podman[324065]: 2025-12-02 10:14:41.012848257 +0000 UTC m=+0.058825779 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 02 10:14:41 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:14:41 np0005541914.localdomain ceph-mon[301710]: pgmap v592: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 70 KiB/s rd, 28 KiB/s wr, 94 op/s
Dec 02 10:14:41 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve49", "tenant_id": "8f75117f8554499b9fbaa9c9062eeeef", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:14:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v593: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 23 KiB/s wr, 77 op/s
Dec 02 10:14:41 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:14:41.266 262347 INFO neutron.agent.dhcp.agent [None req-b8ea98ec-7624-454d-b0e2-63f13f9c0a06 - - - - - -] DHCP configuration for ports {'3068ca06-aff6-4755-8b24-5457386fd1c7'} is completed
Dec 02 10:14:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:41.569 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:41.753 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e245 e245: 6 total, 6 up, 6 in
Dec 02 10:14:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:14:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:14:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:14:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:14:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:14:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:14:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:14:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:14:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:14:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:14:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:14:42 np0005541914.localdomain ceph-mon[301710]: pgmap v593: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 23 KiB/s wr, 77 op/s
Dec 02 10:14:42 np0005541914.localdomain ceph-mon[301710]: osdmap e245: 6 total, 6 up, 6 in
Dec 02 10:14:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve48", "tenant_id": "8f75117f8554499b9fbaa9c9062eeeef", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:14:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, tenant_id:8f75117f8554499b9fbaa9c9062eeeef, vol_name:cephfs) < ""
Dec 02 10:14:43 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Dec 02 10:14:43 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 02 10:14:43 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID eve48 with tenant 8f75117f8554499b9fbaa9c9062eeeef
Dec 02 10:14:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v595: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 64 KiB/s wr, 80 op/s
Dec 02 10:14:43 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:14:43 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:14:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve48, format:json, prefix:fs subvolume authorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, tenant_id:8f75117f8554499b9fbaa9c9062eeeef, vol_name:cephfs) < ""
Dec 02 10:14:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 02 10:14:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:14:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:14:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:14:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:44.056 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:44 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve48", "tenant_id": "8f75117f8554499b9fbaa9c9062eeeef", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:14:44 np0005541914.localdomain ceph-mon[301710]: pgmap v595: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 58 KiB/s rd, 64 KiB/s wr, 80 op/s
Dec 02 10:14:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:44.899 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v596: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 55 KiB/s wr, 69 op/s
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < ""
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta.tmp'
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta.tmp' to config b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta'
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < ""
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "format": "json"}]: dispatch
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < ""
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < ""
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:46.756 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:46 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve48", "format": "json"} v 0)
Dec 02 10:14:46 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 02 10:14:46 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve48"} v 0)
Dec 02 10:14:46 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Dec 02 10:14:46 np0005541914.localdomain ceph-mon[301710]: pgmap v596: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 55 KiB/s wr, 69 op/s
Dec 02 10:14:46 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Dec 02 10:14:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Dec 02 10:14:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Dec 02 10:14:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve48, format:json, prefix:fs subvolume deauthorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve48, client_metadata.root=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:14:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve48, format:json, prefix:fs subvolume evict, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v597: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 47 KiB/s wr, 58 op/s
Dec 02 10:14:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:14:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "format": "json"}]: dispatch
Dec 02 10:14:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 02 10:14:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve48", "format": "json"}]: dispatch
Dec 02 10:14:48 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "25a2b72d-c9e0-4927-869c-054b3b3fd314", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:48 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, vol_name:cephfs) < ""
Dec 02 10:14:48 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/25a2b72d-c9e0-4927-869c-054b3b3fd314/.meta.tmp'
Dec 02 10:14:48 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/25a2b72d-c9e0-4927-869c-054b3b3fd314/.meta.tmp' to config b'/volumes/_nogroup/25a2b72d-c9e0-4927-869c-054b3b3fd314/.meta'
Dec 02 10:14:48 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, vol_name:cephfs) < ""
Dec 02 10:14:48 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "25a2b72d-c9e0-4927-869c-054b3b3fd314", "format": "json"}]: dispatch
Dec 02 10:14:48 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, vol_name:cephfs) < ""
Dec 02 10:14:48 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, vol_name:cephfs) < ""
Dec 02 10:14:48 np0005541914.localdomain ceph-mon[301710]: pgmap v597: 177 pgs: 177 active+clean; 204 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 47 KiB/s wr, 58 op/s
Dec 02 10:14:48 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:49.059 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v598: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 90 KiB/s wr, 14 op/s
Dec 02 10:14:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "snap_name": "4297d647-9a4c-4f1f-9f4b-d5919a5d649a", "format": "json"}]: dispatch
Dec 02 10:14:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4297d647-9a4c-4f1f-9f4b-d5919a5d649a, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < ""
Dec 02 10:14:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:4297d647-9a4c-4f1f-9f4b-d5919a5d649a, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < ""
Dec 02 10:14:49 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "25a2b72d-c9e0-4927-869c-054b3b3fd314", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:49 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "25a2b72d-c9e0-4927-869c-054b3b3fd314", "format": "json"}]: dispatch
Dec 02 10:14:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve47", "tenant_id": "8f75117f8554499b9fbaa9c9062eeeef", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:14:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, tenant_id:8f75117f8554499b9fbaa9c9062eeeef, vol_name:cephfs) < ""
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 02 10:14:50 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID eve47 with tenant 8f75117f8554499b9fbaa9c9062eeeef
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.146596) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490146658, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2296, "num_deletes": 265, "total_data_size": 3071162, "memory_usage": 3119360, "flush_reason": "Manual Compaction"}
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490159433, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 1997238, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28897, "largest_seqno": 31188, "table_properties": {"data_size": 1987914, "index_size": 5705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 22490, "raw_average_key_size": 22, "raw_value_size": 1968271, "raw_average_value_size": 1948, "num_data_blocks": 246, "num_entries": 1010, "num_filter_entries": 1010, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670381, "oldest_key_time": 1764670381, "file_creation_time": 1764670490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 12921 microseconds, and 5792 cpu microseconds.
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.159519) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 1997238 bytes OK
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.159548) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.161721) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.161742) EVENT_LOG_v1 {"time_micros": 1764670490161736, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.161765) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3060237, prev total WAL file size 3060237, number of live WAL files 2.
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.162746) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end)
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(1950KB)], [45(17MB)]
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490162798, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 20452090, "oldest_snapshot_seqno": -1}
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 13999 keys, 18904073 bytes, temperature: kUnknown
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490265357, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 18904073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18821563, "index_size": 46441, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35013, "raw_key_size": 374725, "raw_average_key_size": 26, "raw_value_size": 18580956, "raw_average_value_size": 1327, "num_data_blocks": 1750, "num_entries": 13999, "num_filter_entries": 13999, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.266041) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 18904073 bytes
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.268239) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.2 rd, 184.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 17.6 +0.0 blob) out(18.0 +0.0 blob), read-write-amplify(19.7) write-amplify(9.5) OK, records in: 14542, records dropped: 543 output_compression: NoCompression
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.268268) EVENT_LOG_v1 {"time_micros": 1764670490268256, "job": 26, "event": "compaction_finished", "compaction_time_micros": 102690, "compaction_time_cpu_micros": 50012, "output_level": 6, "num_output_files": 1, "total_output_size": 18904073, "num_input_records": 14542, "num_output_records": 13999, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490269012, "job": 26, "event": "table_file_deletion", "file_number": 47}
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670490271871, "job": 26, "event": "table_file_deletion", "file_number": 45}
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.162666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.271935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.271941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.271944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.271947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:14:50.271950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:14:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:eve47, format:json, prefix:fs subvolume authorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, tenant_id:8f75117f8554499b9fbaa9c9062eeeef, vol_name:cephfs) < ""
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: pgmap v598: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 90 KiB/s wr, 14 op/s
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "snap_name": "4297d647-9a4c-4f1f-9f4b-d5919a5d649a", "format": "json"}]: dispatch
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve47", "tenant_id": "8f75117f8554499b9fbaa9c9062eeeef", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:14:50 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1", "osd", "allow rw pool=manila_data namespace=fsvolumens_aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:14:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v599: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 90 KiB/s wr, 14 op/s
Dec 02 10:14:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:51.760 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "25a2b72d-c9e0-4927-869c-054b3b3fd314", "format": "json"}]: dispatch
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:52 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:52.050+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '25a2b72d-c9e0-4927-869c-054b3b3fd314' of type subvolume
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '25a2b72d-c9e0-4927-869c-054b3b3fd314' of type subvolume
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "25a2b72d-c9e0-4927-869c-054b3b3fd314", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, vol_name:cephfs) < ""
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/25a2b72d-c9e0-4927-869c-054b3b3fd314'' moved to trashcan
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:25a2b72d-c9e0-4927-869c-054b3b3fd314, vol_name:cephfs) < ""
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff6ef38f-1e0f-40c8-83c7-811b055e05a4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, vol_name:cephfs) < ""
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff6ef38f-1e0f-40c8-83c7-811b055e05a4/.meta.tmp'
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff6ef38f-1e0f-40c8-83c7-811b055e05a4/.meta.tmp' to config b'/volumes/_nogroup/ff6ef38f-1e0f-40c8-83c7-811b055e05a4/.meta'
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, vol_name:cephfs) < ""
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff6ef38f-1e0f-40c8-83c7-811b055e05a4", "format": "json"}]: dispatch
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, vol_name:cephfs) < ""
Dec 02 10:14:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, vol_name:cephfs) < ""
Dec 02 10:14:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:14:52 np0005541914.localdomain ceph-mon[301710]: pgmap v599: 177 pgs: 177 active+clean; 205 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 90 KiB/s wr, 14 op/s
Dec 02 10:14:52 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "25a2b72d-c9e0-4927-869c-054b3b3fd314", "format": "json"}]: dispatch
Dec 02 10:14:52 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "25a2b72d-c9e0-4927-869c-054b3b3fd314", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:52 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v600: 177 pgs: 177 active+clean; 252 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 52 op/s
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "794bbe46-18dc-46bb-ae82-7c247e68f409", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:794bbe46-18dc-46bb-ae82-7c247e68f409, vol_name:cephfs) < ""
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/794bbe46-18dc-46bb-ae82-7c247e68f409/.meta.tmp'
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/794bbe46-18dc-46bb-ae82-7c247e68f409/.meta.tmp' to config b'/volumes/_nogroup/794bbe46-18dc-46bb-ae82-7c247e68f409/.meta'
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:794bbe46-18dc-46bb-ae82-7c247e68f409, vol_name:cephfs) < ""
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "794bbe46-18dc-46bb-ae82-7c247e68f409", "format": "json"}]: dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:794bbe46-18dc-46bb-ae82-7c247e68f409, vol_name:cephfs) < ""
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:794bbe46-18dc-46bb-ae82-7c247e68f409, vol_name:cephfs) < ""
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:53 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve47", "format": "json"} v 0)
Dec 02 10:14:53 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve47"} v 0)
Dec 02 10:14:53 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e246 e246: 6 total, 6 up, 6 in
Dec 02 10:14:53 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff6ef38f-1e0f-40c8-83c7-811b055e05a4", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff6ef38f-1e0f-40c8-83c7-811b055e05a4", "format": "json"}]: dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2237002208' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve47, format:json, prefix:fs subvolume deauthorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve47, client_metadata.root=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:14:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve47, format:json, prefix:fs subvolume evict, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:54.093 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:54 np0005541914.localdomain ceph-mon[301710]: pgmap v600: 177 pgs: 177 active+clean; 252 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 52 op/s
Dec 02 10:14:54 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "794bbe46-18dc-46bb-ae82-7c247e68f409", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:54 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "794bbe46-18dc-46bb-ae82-7c247e68f409", "format": "json"}]: dispatch
Dec 02 10:14:54 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 02 10:14:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Dec 02 10:14:54 np0005541914.localdomain ceph-mon[301710]: osdmap e246: 6 total, 6 up, 6 in
Dec 02 10:14:54 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve47", "format": "json"}]: dispatch
Dec 02 10:14:54 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e247 e247: 6 total, 6 up, 6 in
Dec 02 10:14:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:14:54 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:14:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:14:55 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:14:55 np0005541914.localdomain podman[324088]: 2025-12-02 10:14:55.107237842 +0000 UTC m=+0.101984115 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:14:55 np0005541914.localdomain podman[324088]: 2025-12-02 10:14:55.140083301 +0000 UTC m=+0.134829604 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 10:14:55 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:14:55 np0005541914.localdomain podman[324093]: 2025-12-02 10:14:55.15826779 +0000 UTC m=+0.140524059 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 02 10:14:55 np0005541914.localdomain podman[324093]: 2025-12-02 10:14:55.243901641 +0000 UTC m=+0.226157940 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:14:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v603: 177 pgs: 177 active+clean; 252 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.8 MiB/s wr, 73 op/s
Dec 02 10:14:55 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:14:55 np0005541914.localdomain podman[324090]: 2025-12-02 10:14:55.250958408 +0000 UTC m=+0.236879419 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 10:14:55 np0005541914.localdomain podman[324089]: 2025-12-02 10:14:55.316971776 +0000 UTC m=+0.302344651 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:14:55 np0005541914.localdomain podman[324090]: 2025-12-02 10:14:55.334043431 +0000 UTC m=+0.319964462 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, io.buildah.version=1.41.3)
Dec 02 10:14:55 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:14:55 np0005541914.localdomain podman[324089]: 2025-12-02 10:14:55.350636841 +0000 UTC m=+0.336009736 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:14:55 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:14:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ec95b4f7-9427-4d16-81f2-f3cade322496", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ec95b4f7-9427-4d16-81f2-f3cade322496, vol_name:cephfs) < ""
Dec 02 10:14:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ec95b4f7-9427-4d16-81f2-f3cade322496/.meta.tmp'
Dec 02 10:14:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ec95b4f7-9427-4d16-81f2-f3cade322496/.meta.tmp' to config b'/volumes/_nogroup/ec95b4f7-9427-4d16-81f2-f3cade322496/.meta'
Dec 02 10:14:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ec95b4f7-9427-4d16-81f2-f3cade322496, vol_name:cephfs) < ""
Dec 02 10:14:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ec95b4f7-9427-4d16-81f2-f3cade322496", "format": "json"}]: dispatch
Dec 02 10:14:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec95b4f7-9427-4d16-81f2-f3cade322496, vol_name:cephfs) < ""
Dec 02 10:14:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ec95b4f7-9427-4d16-81f2-f3cade322496, vol_name:cephfs) < ""
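[annotation] The audit lines above show client.openstack (the Manila CephFS back end) driving the ceph-mgr volumes module through a "fs subvolume create" followed by "fs subvolume getpath". A minimal illustrative sketch of the equivalent CLI calls is below; the subvolume name, size, and mode are copied from the logged JSON payloads, while the ceph() wrapper itself is hypothetical and not part of the deployment.

    # Hypothetical sketch only: CLI equivalent of the "fs subvolume create" /
    # "fs subvolume getpath" payloads dispatched in the audit lines above.
    import subprocess

    def ceph(*args):
        # Run one ceph CLI command and return its stdout as text.
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout.strip()

    sub = "ec95b4f7-9427-4d16-81f2-f3cade322496"

    # 1 GiB, namespace-isolated, mode 0755, exactly as in the logged command
    ceph("fs", "subvolume", "create", "cephfs", sub,
         "--size", "1073741824", "--namespace-isolated", "--mode", "0755")

    # getpath returns the data path under /volumes/_nogroup/<sub_name>/...
    print(ceph("fs", "subvolume", "getpath", "cephfs", sub))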
Dec 02 10:14:55 np0005541914.localdomain ceph-mon[301710]: osdmap e247: 6 total, 6 up, 6 in
Dec 02 10:14:55 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:56 np0005541914.localdomain systemd[1]: tmp-crun.wosO8T.mount: Deactivated successfully.
Dec 02 10:14:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "794bbe46-18dc-46bb-ae82-7c247e68f409", "format": "json"}]: dispatch
Dec 02 10:14:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:794bbe46-18dc-46bb-ae82-7c247e68f409, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:794bbe46-18dc-46bb-ae82-7c247e68f409, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:56 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '794bbe46-18dc-46bb-ae82-7c247e68f409' of type subvolume
Dec 02 10:14:56 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:56.744+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '794bbe46-18dc-46bb-ae82-7c247e68f409' of type subvolume
Dec 02 10:14:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "794bbe46-18dc-46bb-ae82-7c247e68f409", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:794bbe46-18dc-46bb-ae82-7c247e68f409, vol_name:cephfs) < ""
Dec 02 10:14:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:56.809 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/794bbe46-18dc-46bb-ae82-7c247e68f409'' moved to trashcan
Dec 02 10:14:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:14:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:794bbe46-18dc-46bb-ae82-7c247e68f409, vol_name:cephfs) < ""
Dec 02 10:14:56 np0005541914.localdomain ceph-mon[301710]: pgmap v603: 177 pgs: 177 active+clean; 252 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 2.8 MiB/s wr, 73 op/s
Dec 02 10:14:56 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ec95b4f7-9427-4d16-81f2-f3cade322496", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:14:56 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ec95b4f7-9427-4d16-81f2-f3cade322496", "format": "json"}]: dispatch
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v604: 177 pgs: 177 active+clean; 252 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 2.7 MiB/s wr, 57 op/s
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:14:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.eve49", "format": "json"} v 0)
Dec 02 10:14:57 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 02 10:14:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.eve49"} v 0)
Dec 02 10:14:57 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:eve49, format:json, prefix:fs subvolume deauthorize, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=eve49, client_metadata.root=/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f/0f93d180-183a-4fa4-8649-7ba3ef8441e1
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:eve49, format:json, prefix:fs subvolume evict, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "format": "json"}]: dispatch
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f' of type subvolume
Dec 02 10:14:57 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:57.874+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f' of type subvolume
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f'' moved to trashcan
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:14:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f, vol_name:cephfs) < ""
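[annotation] The "Operation not supported" replies in this teardown are expected: "fs clone status" is only valid for subvolumes created as clones, and these are of plain type 'subvolume', so the caller tolerates EOPNOTSUPP (95) and proceeds to "fs subvolume rm --force". A hedged sketch of the same deauthorize -> evict -> clone status -> rm sequence, with names taken from the audit lines and a hypothetical helper:

    # Hypothetical sketch only: CLI equivalent of the logged teardown sequence.
    import subprocess

    def ceph(*args, check=True):
        return subprocess.run(["ceph", *args], check=check,
                              capture_output=True, text=True)

    sub = "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f"

    ceph("fs", "subvolume", "deauthorize", "cephfs", sub, "eve49")
    ceph("fs", "subvolume", "evict", "cephfs", sub, "eve49")

    # Plain (non-clone) subvolumes reject "fs clone status" with EOPNOTSUPP (95),
    # exactly as the mgr replies above, so a non-zero exit is tolerated here.
    ceph("fs", "clone", "status", "cephfs", sub, check=False)

    ceph("fs", "subvolume", "rm", "cephfs", sub, "--force")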
Dec 02 10:14:57 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "794bbe46-18dc-46bb-ae82-7c247e68f409", "format": "json"}]: dispatch
Dec 02 10:14:57 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "794bbe46-18dc-46bb-ae82-7c247e68f409", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Dec 02 10:14:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Dec 02 10:14:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Dec 02 10:14:57 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Dec 02 10:14:57 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1466419913' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:14:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ec95b4f7-9427-4d16-81f2-f3cade322496", "format": "json"}]: dispatch
Dec 02 10:14:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ec95b4f7-9427-4d16-81f2-f3cade322496, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ec95b4f7-9427-4d16-81f2-f3cade322496, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:14:58 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec95b4f7-9427-4d16-81f2-f3cade322496' of type subvolume
Dec 02 10:14:58 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:14:58.660+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ec95b4f7-9427-4d16-81f2-f3cade322496' of type subvolume
Dec 02 10:14:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ec95b4f7-9427-4d16-81f2-f3cade322496", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec95b4f7-9427-4d16-81f2-f3cade322496, vol_name:cephfs) < ""
Dec 02 10:14:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ec95b4f7-9427-4d16-81f2-f3cade322496'' moved to trashcan
Dec 02 10:14:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:14:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ec95b4f7-9427-4d16-81f2-f3cade322496, vol_name:cephfs) < ""
Dec 02 10:14:59 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e248 e248: 6 total, 6 up, 6 in
Dec 02 10:14:59 np0005541914.localdomain ceph-mon[301710]: pgmap v604: 177 pgs: 177 active+clean; 252 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 2.7 MiB/s wr, 57 op/s
Dec 02 10:14:59 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 02 10:14:59 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "auth_id": "eve49", "format": "json"}]: dispatch
Dec 02 10:14:59 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "format": "json"}]: dispatch
Dec 02 10:14:59 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "aa1ea9b3-cd6e-4bc7-a88f-b8893a4beb4f", "force": true, "format": "json"}]: dispatch
Dec 02 10:14:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:14:59.098 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:14:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v606: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.7 MiB/s wr, 75 op/s
Dec 02 10:14:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:14:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:15:00 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ec95b4f7-9427-4d16-81f2-f3cade322496", "format": "json"}]: dispatch
Dec 02 10:15:00 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ec95b4f7-9427-4d16-81f2-f3cade322496", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:00 np0005541914.localdomain ceph-mon[301710]: osdmap e248: 6 total, 6 up, 6 in
Dec 02 10:15:00 np0005541914.localdomain systemd[1]: tmp-crun.gWAyce.mount: Deactivated successfully.
Dec 02 10:15:00 np0005541914.localdomain podman[324171]: 2025-12-02 10:15:00.076065186 +0000 UTC m=+0.078154973 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:15:00 np0005541914.localdomain podman[324171]: 2025-12-02 10:15:00.089026274 +0000 UTC m=+0.091116051 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 10:15:00 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:15:00 np0005541914.localdomain podman[324172]: 2025-12-02 10:15:00.142519808 +0000 UTC m=+0.142280363 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc.)
Dec 02 10:15:00 np0005541914.localdomain podman[324172]: 2025-12-02 10:15:00.185990763 +0000 UTC m=+0.185751278 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 10:15:00 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:15:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "95ddc110-cc3c-4c61-8c87-bf390fb060a5", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, vol_name:cephfs) < ""
Dec 02 10:15:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/95ddc110-cc3c-4c61-8c87-bf390fb060a5/.meta.tmp'
Dec 02 10:15:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/95ddc110-cc3c-4c61-8c87-bf390fb060a5/.meta.tmp' to config b'/volumes/_nogroup/95ddc110-cc3c-4c61-8c87-bf390fb060a5/.meta'
Dec 02 10:15:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, vol_name:cephfs) < ""
Dec 02 10:15:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "95ddc110-cc3c-4c61-8c87-bf390fb060a5", "format": "json"}]: dispatch
Dec 02 10:15:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, vol_name:cephfs) < ""
Dec 02 10:15:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, vol_name:cephfs) < ""
Dec 02 10:15:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e249 e249: 6 total, 6 up, 6 in
Dec 02 10:15:01 np0005541914.localdomain ceph-mon[301710]: pgmap v606: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.7 MiB/s wr, 75 op/s
Dec 02 10:15:01 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v608: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 72 op/s
Dec 02 10:15:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:01.842 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff6ef38f-1e0f-40c8-83c7-811b055e05a4", "format": "json"}]: dispatch
Dec 02 10:15:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:01 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:01.865+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff6ef38f-1e0f-40c8-83c7-811b055e05a4' of type subvolume
Dec 02 10:15:01 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff6ef38f-1e0f-40c8-83c7-811b055e05a4' of type subvolume
Dec 02 10:15:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff6ef38f-1e0f-40c8-83c7-811b055e05a4", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, vol_name:cephfs) < ""
Dec 02 10:15:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff6ef38f-1e0f-40c8-83c7-811b055e05a4'' moved to trashcan
Dec 02 10:15:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff6ef38f-1e0f-40c8-83c7-811b055e05a4, vol_name:cephfs) < ""
Dec 02 10:15:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "95ddc110-cc3c-4c61-8c87-bf390fb060a5", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "95ddc110-cc3c-4c61-8c87-bf390fb060a5", "format": "json"}]: dispatch
Dec 02 10:15:02 np0005541914.localdomain ceph-mon[301710]: osdmap e249: 6 total, 6 up, 6 in
Dec 02 10:15:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2758567838' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:15:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2758567838' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:15:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:15:03 np0005541914.localdomain ceph-mon[301710]: pgmap v608: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 72 op/s
Dec 02 10:15:03 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff6ef38f-1e0f-40c8-83c7-811b055e05a4", "format": "json"}]: dispatch
Dec 02 10:15:03 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff6ef38f-1e0f-40c8-83c7-811b055e05a4", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e250 e250: 6 total, 6 up, 6 in
Dec 02 10:15:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:03.184 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:03.184 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:03.184 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v610: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.2 MiB/s rd, 7.3 MiB/s wr, 200 op/s
Dec 02 10:15:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:15:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:15:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:15:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:15:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:15:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19253 "" "Go-http-client/1.1"
Dec 02 10:15:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "95ddc110-cc3c-4c61-8c87-bf390fb060a5", "format": "json"}]: dispatch
Dec 02 10:15:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:03 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:03.810+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '95ddc110-cc3c-4c61-8c87-bf390fb060a5' of type subvolume
Dec 02 10:15:03 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '95ddc110-cc3c-4c61-8c87-bf390fb060a5' of type subvolume
Dec 02 10:15:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "95ddc110-cc3c-4c61-8c87-bf390fb060a5", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, vol_name:cephfs) < ""
Dec 02 10:15:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/95ddc110-cc3c-4c61-8c87-bf390fb060a5'' moved to trashcan
Dec 02 10:15:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:95ddc110-cc3c-4c61-8c87-bf390fb060a5, vol_name:cephfs) < ""
Dec 02 10:15:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:04.125 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:04 np0005541914.localdomain ceph-mon[301710]: osdmap e250: 6 total, 6 up, 6 in
Dec 02 10:15:04 np0005541914.localdomain ceph-mon[301710]: pgmap v610: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 7.2 MiB/s rd, 7.3 MiB/s wr, 200 op/s
Dec 02 10:15:04 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "95ddc110-cc3c-4c61-8c87-bf390fb060a5", "format": "json"}]: dispatch
Dec 02 10:15:04 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "95ddc110-cc3c-4c61-8c87-bf390fb060a5", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/.meta.tmp'
Dec 02 10:15:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/.meta.tmp' to config b'/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/.meta'
Dec 02 10:15:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "format": "json"}]: dispatch
Dec 02 10:15:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "9fcb2cae-930f-42ab-bc64-d18acc6b4eec", "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:05 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:9fcb2cae-930f-42ab-bc64-d18acc6b4eec, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 02 10:15:05 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:9fcb2cae-930f-42ab-bc64-d18acc6b4eec, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 02 10:15:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/878020292' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:15:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/878020292' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:15:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "format": "json"}]: dispatch
Dec 02 10:15:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/154205066' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:15:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "9fcb2cae-930f-42ab-bc64-d18acc6b4eec", "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:05 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e251 e251: 6 total, 6 up, 6 in
Dec 02 10:15:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v612: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 124 op/s
Dec 02 10:15:06 np0005541914.localdomain ceph-mon[301710]: osdmap e251: 6 total, 6 up, 6 in
Dec 02 10:15:06 np0005541914.localdomain ceph-mon[301710]: pgmap v612: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 124 op/s
Dec 02 10:15:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/153256759' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:15:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:06.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:15:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e252 e252: 6 total, 6 up, 6 in
Dec 02 10:15:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:06.872 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:15:06
Dec 02 10:15:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:15:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:15:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['manila_metadata', '.mgr', 'images', 'manila_data', 'volumes', 'vms', 'backups']
Dec 02 10:15:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:15:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:15:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:15:06 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:15:07 np0005541914.localdomain podman[324213]: 2025-12-02 10:15:07.097885804 +0000 UTC m=+0.095470264 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 02 10:15:07 np0005541914.localdomain podman[324213]: 2025-12-02 10:15:07.139003277 +0000 UTC m=+0.136587747 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 10:15:07 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
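The three events above are one podman healthcheck cycle: systemd starts a transient unit, `podman healthcheck run` executes the `/openstack/healthcheck` test mounted into the multipathd container, and the unit deactivates once the exec dies. A minimal sketch of running the same check by container name (name taken from the event; podman CLI access assumed):

    # Sketch: invoke the multipathd container healthcheck the way the transient unit above does.
    import subprocess

    rc = subprocess.run(["podman", "healthcheck", "run", "multipathd"]).returncode
    print("healthy" if rc == 0 else "unhealthy")   # health_status=healthy in the event above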
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, vol_name:cephfs) < ""
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/874ecabd-a028-4aa4-9a5c-9d15f18fe0c8/.meta.tmp'
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/874ecabd-a028-4aa4-9a5c-9d15f18fe0c8/.meta.tmp' to config b'/volumes/_nogroup/874ecabd-a028-4aa4-9a5c-9d15f18fe0c8/.meta'
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, vol_name:cephfs) < ""
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8", "format": "json"}]: dispatch
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, vol_name:cephfs) < ""
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, vol_name:cephfs) < ""
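The create/getpath pair above is the Manila CephFS driver provisioning a share-sized subvolume and asking for its export path. The equivalent direct CLI calls, sketched with the subvolume name and size from the dispatch lines (everything else is assumed):

    # Sketch: the "fs subvolume create" / "fs subvolume getpath" calls dispatched above.
    import subprocess

    sub = "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8"
    subprocess.check_call(["ceph", "fs", "subvolume", "create", "cephfs", sub,
                           "--size", "1073741824", "--namespace-isolated", "--mode", "0755"])
    path = subprocess.check_output(
        ["ceph", "fs", "subvolume", "getpath", "cephfs", sub], text=True).strip()
    print(path)   # expected to start with /volumes/_nogroup/<sub_name>/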
Dec 02 10:15:07 np0005541914.localdomain systemd-journald[47679]: Data hash table of /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation.
Dec 02 10:15:07 np0005541914.localdomain systemd-journald[47679]: /run/log/journal/510530184876bdc0ebb29e7199f63471/system.journal: Journal header limits reached or header out-of-date, rotating.
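The rotation above is journald reacting to its own hash-table fill level; the same information can be pulled, and a rotation requested, from the CLI. Sketch, assuming root on the node:

    # Sketch: inspect journal disk usage and request a rotation like the one logged above.
    import subprocess

    print(subprocess.check_output(["journalctl", "--disk-usage"], text=True).strip())
    subprocess.check_call(["journalctl", "--rotate"])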
Dec 02 10:15:07 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v614: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 124 op/s
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029694915549972082 of space, bias 1.0, pg target 0.5929084804811092 quantized to 32 (current 32)
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8570103846780196 quantized to 32 (current 32)
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.001483655255443886 of space, bias 1.0, pg target 0.2947528440815187 quantized to 32 (current 32)
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 8.17891541038526e-07 of space, bias 1.0, pg target 0.00016248778615298717 quantized to 32 (current 32)
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0010588969151312116 of space, bias 4.0, pg target 0.8414700818909361 quantized to 16 (current 16)
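The per-pool lines above follow the autoscaler's capacity-ratio arithmetic: pg_target is roughly capacity_ratio * bias * (target PGs per OSD * OSD count / replica size), then quantized to a power of two, which here leaves every pool at its current value. A worked check against the 'vms' figures above, assuming the default of 100 target PGs per OSD, the 6 OSDs reported in the osdmap lines, and replica size 3 (assumptions):

    # Sketch: reproduce the 'vms' pg target printed by the pg_autoscaler above.
    target_per_osd, osd_count, replica_size = 100, 6, 3      # assumed default / osdmap values
    capacity_ratio, bias = 0.0033244564838079286, 1.0        # from the 'vms' line above
    pg_target = capacity_ratio * bias * (target_per_osd * osd_count / replica_size)
    print(pg_target)   # ~0.6649, matching the log; quantized, it stays at the current 32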
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:15:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:15:07 np0005541914.localdomain rsyslogd[759]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Dec 02 10:15:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3613759236' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:15:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3613759236' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:15:07 np0005541914.localdomain ceph-mon[301710]: osdmap e252: 6 total, 6 up, 6 in
Dec 02 10:15:07 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:07.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:15:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:07.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:15:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:15:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:07.547 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:07.548 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:07.548 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:07.548 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:15:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:07.548 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:15:08 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2291623585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:15:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:08.027 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
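The `ceph df` round trips above are nova's resource tracker measuring RBD-backed disk capacity during update_available_resource. The same probe as a standalone call, using the client id and conf path from the log:

    # Sketch: run the "ceph df" probe nova logs above and print per-pool usage.
    import json, subprocess

    out = subprocess.check_output(["ceph", "df", "--format=json",
                                   "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    for pool in json.loads(out)["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])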
Dec 02 10:15:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "9fcb2cae-930f-42ab-bc64-d18acc6b4eec", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:9fcb2cae-930f-42ab-bc64-d18acc6b4eec, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec 02 10:15:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:9fcb2cae-930f-42ab-bc64-d18acc6b4eec, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec 02 10:15:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:08.239 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:15:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:08.241 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11464MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:15:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:08.241 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:08.242 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:08.300 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:15:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:08.301 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:15:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:08.332 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < ""
Dec 02 10:15:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/.meta.tmp'
Dec 02 10:15:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/.meta.tmp' to config b'/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/.meta'
Dec 02 10:15:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < ""
Dec 02 10:15:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "format": "json"}]: dispatch
Dec 02 10:15:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < ""
Dec 02 10:15:08 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < ""
Dec 02 10:15:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:15:08 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2041560746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:15:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8", "format": "json"}]: dispatch
Dec 02 10:15:08 np0005541914.localdomain ceph-mon[301710]: pgmap v614: 177 pgs: 177 active+clean; 299 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 124 op/s
Dec 02 10:15:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2291623585' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:15:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2041560746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:15:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:08.810 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:15:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:08.818 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:15:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:08.842 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:15:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:08.844 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:15:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:08.844 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:08 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:15:08.863 2 INFO neutron.agent.securitygroups_rpc [None req-e4074800-d361-45b9-b812-e8981daf28f3 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Security group rule updated ['10785715-ddea-43bb-82fa-9f44a2fb1faa']
Dec 02 10:15:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:09.175 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:09 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:15:09.250 2 INFO neutron.agent.securitygroups_rpc [None req-ee552935-4da7-44ca-8e38-6eb6181199e8 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Security group rule updated ['10785715-ddea-43bb-82fa-9f44a2fb1faa']
Dec 02 10:15:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v615: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 62 KiB/s wr, 80 op/s
Dec 02 10:15:09 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "9fcb2cae-930f-42ab-bc64-d18acc6b4eec", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:09 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:09 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "format": "json"}]: dispatch
Dec 02 10:15:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < ""
Dec 02 10:15:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:09.846 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:15:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/.meta.tmp'
Dec 02 10:15:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/.meta.tmp' to config b'/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/.meta'
Dec 02 10:15:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < ""
Dec 02 10:15:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "format": "json"}]: dispatch
Dec 02 10:15:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < ""
Dec 02 10:15:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < ""
Dec 02 10:15:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8", "format": "json"}]: dispatch
Dec 02 10:15:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:10 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '874ecabd-a028-4aa4-9a5c-9d15f18fe0c8' of type subvolume
Dec 02 10:15:10 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:10.347+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '874ecabd-a028-4aa4-9a5c-9d15f18fe0c8' of type subvolume
Dec 02 10:15:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, vol_name:cephfs) < ""
Dec 02 10:15:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/874ecabd-a028-4aa4-9a5c-9d15f18fe0c8'' moved to trashcan
Dec 02 10:15:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:874ecabd-a028-4aa4-9a5c-9d15f18fe0c8, vol_name:cephfs) < ""
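The sequence above is share deletion: the clone-status query fails with EOPNOTSUPP (95) because the target is a plain subvolume rather than a clone, then the subvolume is removed, moved to the trash directory, and queued for asynchronous purge. The removal as a direct call, with the name and force flag from the dispatched command:

    # Sketch: the "fs subvolume rm" dispatched above; --force mirrors "force": true.
    import subprocess

    subprocess.check_call(["ceph", "fs", "subvolume", "rm", "cephfs",
                           "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8", "--force"])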
Dec 02 10:15:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:10.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:15:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:10.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:15:10 np0005541914.localdomain ceph-mon[301710]: pgmap v615: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 52 KiB/s rd, 62 KiB/s wr, 80 op/s
Dec 02 10:15:10 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:10 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "format": "json"}]: dispatch
Dec 02 10:15:10 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v616: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 47 KiB/s wr, 61 op/s
Dec 02 10:15:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:11.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:15:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:11.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:15:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:11.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:15:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:11.556 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:15:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "20d14646-9b62-4b24-984f-6434ad453069", "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_create(format:json, group_name:20d14646-9b62-4b24-984f-6434ad453069, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 02 10:15:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_create(format:json, group_name:20d14646-9b62-4b24-984f-6434ad453069, mode:0755, prefix:fs subvolumegroup create, vol_name:cephfs) < ""
Dec 02 10:15:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 02 10:15:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e253 e253: 6 total, 6 up, 6 in
Dec 02 10:15:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:15:11 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:11 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52
Dec 02 10:15:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3", "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:15:11 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3", "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:11 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8", "format": "json"}]: dispatch
Dec 02 10:15:11 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "874ecabd-a028-4aa4-9a5c-9d15f18fe0c8", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:11 np0005541914.localdomain ceph-mon[301710]: osdmap e253: 6 total, 6 up, 6 in
Dec 02 10:15:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3", "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3", "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:11 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3", "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04", "mon", "allow r"], "format": "json"}]': finished
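The auth commands above are the cephx grant behind "fs subvolume authorize": read/write on the subvolume path for mds, read/write on the pool namespace for osd, and read-only on mon. The same grant issued directly, with the caps copied from the log lines:

    # Sketch: the "auth get-or-create" the mgr dispatched on behalf of the authorize call above.
    import subprocess

    subprocess.check_call([
        "ceph", "auth", "get-or-create", "client.tempest-cephx-id-1696860369",
        "mds", "allow rw path=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3",
        "osd", "allow rw pool=manila_data namespace=fsvolumens_d194b0f5-d0ac-4694-aaca-c67668af8e04",
        "mon", "allow r",
    ])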
Dec 02 10:15:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:11.914 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:11 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 02 10:15:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:15:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:15:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:15:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:15:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:15:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:15:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:15:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:15:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:15:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
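The exporter errors above mean no OVS/OVN control sockets exist on this host, so the ovsdb-server, ovn-northd and PMD appctl probes are skipped. A quick check for the sockets it needs; the directory is the conventional Open vSwitch run directory and is an assumption here:

    # Sketch: look for OVS control sockets the way the exporter's appctl calls would need them.
    import glob

    sockets = glob.glob("/var/run/openvswitch/*.ctl")
    print(sockets or "no control socket files found")   # matches the error text above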
Dec 02 10:15:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:12.327 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:12.328 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:12.343 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 02 10:15:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:12.406 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:12.406 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:12.411 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 02 10:15:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:12.411 281049 INFO nova.compute.claims [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Claim successful on node np0005541914.localdomain
Dec 02 10:15:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:12.513 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:15:12 np0005541914.localdomain ceph-mon[301710]: pgmap v616: 177 pgs: 177 active+clean; 207 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 47 KiB/s wr, 61 op/s
Dec 02 10:15:12 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "20d14646-9b62-4b24-984f-6434ad453069", "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:12 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:15:12 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/703101738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:15:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:12.963 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:15:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:12.969 281049 DEBUG nova.compute.provider_tree [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:15:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:12.985 281049 DEBUG nova.scheduler.client.report [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.006 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.006 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.058 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.059 281049 DEBUG nova.network.neutron [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.071 281049 INFO nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.093 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.186 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.190 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.191 281049 INFO nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Creating image(s)
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < ""
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.239 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v618: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 138 KiB/s wr, 67 op/s
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/.meta.tmp'
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/.meta.tmp' to config b'/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/.meta'
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < ""
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "format": "json"}]: dispatch
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < ""
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < ""
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.288 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.333 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.339 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.362 281049 DEBUG nova.policy [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0e5c738ba752455b908099b234a743a2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd858413a9b01463f96545916d2abe5ab', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.416 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.417 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.418 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.419 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.457 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.463 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f9911edf-08c0-404a-9b15-1750f599217e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f9911edf-08c0-404a-9b15-1750f599217e, vol_name:cephfs) < ""
Dec 02 10:15:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:13.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f9911edf-08c0-404a-9b15-1750f599217e/.meta.tmp'
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f9911edf-08c0-404a-9b15-1750f599217e/.meta.tmp' to config b'/volumes/_nogroup/f9911edf-08c0-404a-9b15-1750f599217e/.meta'
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f9911edf-08c0-404a-9b15-1750f599217e, vol_name:cephfs) < ""
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f9911edf-08c0-404a-9b15-1750f599217e", "format": "json"}]: dispatch
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f9911edf-08c0-404a-9b15-1750f599217e, vol_name:cephfs) < ""
Dec 02 10:15:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f9911edf-08c0-404a-9b15-1750f599217e, vol_name:cephfs) < ""
Dec 02 10:15:13 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:15:13.701 2 INFO neutron.agent.securitygroups_rpc [req-a541e13d-87f6-4580-832f-af5d7aef99a4 req-c15ffc3e-ba6d-409e-8103-3b4ea0d7e66e 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Security group member updated ['10785715-ddea-43bb-82fa-9f44a2fb1faa']
Dec 02 10:15:13 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/703101738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:15:13 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:13 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.035 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43cc3eae4d6ab33a15526950b68aad5ba6c1c8fc e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.086 281049 DEBUG nova.network.neutron [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Successfully created port: 5312b3e8-70f6-4e16-95ba-31b46130d41f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.154 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] resizing rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.210 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.320 281049 DEBUG nova.objects.instance [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lazy-loading 'migration_context' on Instance uuid e4135ac9-548a-4e8d-99d6-cde8dedb2c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.407 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.407 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Ensure instance console log exists: /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.408 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.408 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.409 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:15:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "20d14646-9b62-4b24-984f-6434ad453069", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:20d14646-9b62-4b24-984f-6434ad453069, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec 02 10:15:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolumegroup_rm(force:True, format:json, group_name:20d14646-9b62-4b24-984f-6434ad453069, prefix:fs subvolumegroup rm, vol_name:cephfs) < ""
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.867 281049 DEBUG nova.network.neutron [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Successfully updated port: 5312b3e8-70f6-4e16-95ba-31b46130d41f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.886 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.887 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquired lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.887 281049 DEBUG nova.network.neutron [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 02 10:15:14 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:14 np0005541914.localdomain ceph-mon[301710]: pgmap v618: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 138 KiB/s wr, 67 op/s
Dec 02 10:15:14 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "format": "json"}]: dispatch
Dec 02 10:15:14 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f9911edf-08c0-404a-9b15-1750f599217e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:14 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f9911edf-08c0-404a-9b15-1750f599217e", "format": "json"}]: dispatch
Dec 02 10:15:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:14.980 281049 DEBUG nova.network.neutron [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < ""
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.144 281049 DEBUG nova.compute.manager [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-changed-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.145 281049 DEBUG nova.compute.manager [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Refreshing instance network info cache due to event network-changed-5312b3e8-70f6-4e16-95ba-31b46130d41f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.146 281049 DEBUG oslo_concurrency.lockutils [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.216 281049 DEBUG nova.network.neutron [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updating instance_info_cache with network_info: [{"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.238 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Releasing lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.239 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Instance network_info: |[{"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.239 281049 DEBUG oslo_concurrency.lockutils [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.240 281049 DEBUG nova.network.neutron [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Refreshing network info cache for port 5312b3e8-70f6-4e16-95ba-31b46130d41f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.245 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Start _get_guest_xml network_info=[{"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=<?>,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T10:01:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_format': None, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'boot_index': 0, 'guest_format': None, 'disk_bus': 'virtio', 'encrypted': False, 'size': 0, 'device_name': '/dev/vda', 'image_id': 'd85e840d-fa56-497b-b5bd-b49584d3e97a'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.252 281049 WARNING nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v619: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 131 KiB/s wr, 63 op/s
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.261 281049 DEBUG nova.virt.libvirt.host [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.261 281049 DEBUG nova.virt.libvirt.host [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.264 281049 DEBUG nova.virt.libvirt.host [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Searching host: 'np0005541914.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.264 281049 DEBUG nova.virt.libvirt.host [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.265 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.265 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-02T10:01:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='82beb986-6d20-42dc-b738-1cef87dee30f',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-02T10:01:53Z,direct_url=<?>,disk_format='qcow2',id=d85e840d-fa56-497b-b5bd-b49584d3e97a,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='e2d97696ab6749899bb8ba5ce29a3de2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-02T10:01:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 02 10:15:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:15:15 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.266 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.266 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.267 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.267 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 02 10:15:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0)
Dec 02 10:15:15 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.267 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.268 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.268 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.269 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.269 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.269 281049 DEBUG nova.virt.hardware [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.274 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < ""
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < ""
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04/f0230cb5-166a-4bc3-a680-7635315554d3
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < ""
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "format": "json"}]: dispatch
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd194b0f5-d0ac-4694-aaca-c67668af8e04' of type subvolume
Dec 02 10:15:15 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:15.593+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd194b0f5-d0ac-4694-aaca-c67668af8e04' of type subvolume
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < ""
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d194b0f5-d0ac-4694-aaca-c67668af8e04'' moved to trashcan
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d194b0f5-d0ac-4694-aaca-c67668af8e04, vol_name:cephfs) < ""
Dec 02 10:15:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:15:15 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2486108379' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.743 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.784 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.789 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.815 281049 DEBUG nova.network.neutron [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updated VIF entry in instance network info cache for port 5312b3e8-70f6-4e16-95ba-31b46130d41f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.816 281049 DEBUG nova.network.neutron [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updating instance_info_cache with network_info: [{"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:15:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:15.839 281049 DEBUG oslo_concurrency.lockutils [req-66204e53-d52e-4dd8-8475-948bd54203dc req-557429bc-2aac-4165-839e-ad21920284bc dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:15:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "20d14646-9b62-4b24-984f-6434ad453069", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:15 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished
Dec 02 10:15:15 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2486108379' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3419331710' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.224 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.226 281049 DEBUG nova.virt.libvirt.vif [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:15:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-296444076',display_name='tempest-VolumesBackupsTest-instance-296444076',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-volumesbackupstest-instance-296444076',id=11,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHkG+iQFuqjdTdoAEp/kY7cw/kNkZh2LbPeLiGtN8Y97oQKWkY5uonMIVaaGJFGigPwU4U46n3JFHVn8N98Xn7K+8moZz1t1gU5zOrLM/YgrB2LfY32eA3cmwq2A59hxHw==',key_name='tempest-keypair-352232817',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d858413a9b01463f96545916d2abe5ab',ramdisk_id='',reservation_id='r-nwps7030',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-479123361',owner_user_name='tempest-VolumesBackupsTest-479123361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T10:15:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0e5c738ba752455b908099b234a743a2',uuid=e4135ac9-548a-4e8d-99d6-cde8dedb2c77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.227 281049 DEBUG nova.network.os_vif_util [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Converting VIF {"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.228 281049 DEBUG nova.network.os_vif_util [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.229 281049 DEBUG nova.objects.instance [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lazy-loading 'pci_devices' on Instance uuid e4135ac9-548a-4e8d-99d6-cde8dedb2c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.248 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] End _get_guest_xml xml=<domain type="kvm">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   <uuid>e4135ac9-548a-4e8d-99d6-cde8dedb2c77</uuid>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   <name>instance-0000000b</name>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   <memory>131072</memory>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   <vcpu>1</vcpu>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   <metadata>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <nova:name>tempest-VolumesBackupsTest-instance-296444076</nova:name>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <nova:creationTime>2025-12-02 10:15:15</nova:creationTime>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <nova:flavor name="m1.nano">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <nova:memory>128</nova:memory>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <nova:disk>1</nova:disk>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <nova:swap>0</nova:swap>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <nova:ephemeral>0</nova:ephemeral>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <nova:vcpus>1</nova:vcpus>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       </nova:flavor>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <nova:owner>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <nova:user uuid="0e5c738ba752455b908099b234a743a2">tempest-VolumesBackupsTest-479123361-project-member</nova:user>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <nova:project uuid="d858413a9b01463f96545916d2abe5ab">tempest-VolumesBackupsTest-479123361</nova:project>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       </nova:owner>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <nova:root type="image" uuid="d85e840d-fa56-497b-b5bd-b49584d3e97a"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <nova:ports>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <nova:port uuid="5312b3e8-70f6-4e16-95ba-31b46130d41f">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         </nova:port>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       </nova:ports>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     </nova:instance>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   </metadata>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   <sysinfo type="smbios">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <system>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <entry name="manufacturer">RDO</entry>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <entry name="product">OpenStack Compute</entry>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <entry name="serial">e4135ac9-548a-4e8d-99d6-cde8dedb2c77</entry>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <entry name="uuid">e4135ac9-548a-4e8d-99d6-cde8dedb2c77</entry>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <entry name="family">Virtual Machine</entry>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     </system>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   </sysinfo>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   <os>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <boot dev="hd"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <smbios mode="sysinfo"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   </os>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   <features>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <acpi/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <apic/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <vmcoreinfo/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   </features>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   <clock offset="utc">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <timer name="pit" tickpolicy="delay"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <timer name="hpet" present="no"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   </clock>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   <cpu mode="host-model" match="exact">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <topology sockets="1" cores="1" threads="1"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   </cpu>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   <devices>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <disk type="network" device="disk">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <driver type="raw" cache="none"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <source protocol="rbd" name="vms/e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.103" port="6789"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.104" port="6789"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.105" port="6789"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       </source>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <auth username="openstack">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <secret type="ceph" uuid="c7c8e171-a193-56fb-95fa-8879fcfa7074"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       </auth>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <target dev="vda" bus="virtio"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <disk type="network" device="cdrom">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <driver type="raw" cache="none"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <source protocol="rbd" name="vms/e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk.config">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.103" port="6789"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.104" port="6789"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <host name="172.18.0.105" port="6789"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       </source>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <auth username="openstack">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:         <secret type="ceph" uuid="c7c8e171-a193-56fb-95fa-8879fcfa7074"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       </auth>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <target dev="sda" bus="sata"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     </disk>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <interface type="ethernet">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <mac address="fa:16:3e:77:0c:21"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <model type="virtio"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <driver name="vhost" rx_queue_size="512"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <mtu size="1442"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <target dev="tap5312b3e8-70"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     </interface>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <serial type="pty">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <log file="/var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/console.log" append="off"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     </serial>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <video>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <model type="virtio"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     </video>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <input type="tablet" bus="usb"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <rng model="virtio">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <backend model="random">/dev/urandom</backend>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     </rng>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="pci" model="pcie-root-port"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <controller type="usb" index="0"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     <memballoon model="virtio">
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:       <stats period="10"/>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:     </memballoon>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:   </devices>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: </domain>
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
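The block above is the complete guest definition that nova-compute hands to libvirt for instance e4135ac9-548a-4e8d-99d6-cde8dedb2c77 (RBD-backed vda disk, RBD config-drive cdrom, a single virtio NIC, serial console log, VNC, virtio-rng and memballoon). A minimal sketch of the same define-and-boot step, assuming the libvirt-python bindings and that the XML logged above has been saved to a local file (the file name here is hypothetical):

    import libvirt  # libvirt-python bindings

    # Hypothetical file holding the <domain> XML logged above.
    with open("instance-0000000b.xml") as f:
        domain_xml = f.read()

    conn = libvirt.open("qemu:///system")   # local system hypervisor, as nova-compute uses
    dom = conn.defineXML(domain_xml)        # persist the definition
    dom.createWithFlags(0)                  # boot the defined guest
    print(dom.name(), dom.UUIDString())
    conn.close()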
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.249 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Preparing to wait for external event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.249 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.250 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.250 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
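The three lockutils lines above are oslo.concurrency's standard acquire/acquired/released trace around the per-instance "-events" lock, taken while nova-compute registers the network-vif-plugged event it will later wait on. A minimal sketch of the same pattern, with the lock name taken from the log:

    from oslo_concurrency import lockutils

    # Serialize on the named in-process lock while registering the expected
    # "network-vif-plugged" event, mirroring prepare_for_instance_event above.
    with lockutils.lock("e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events"):
        pass  # critical section: create or fetch the pending event object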
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.251 281049 DEBUG nova.virt.libvirt.vif [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-02T10:15:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-296444076',display_name='tempest-VolumesBackupsTest-instance-296444076',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-volumesbackupstest-instance-296444076',id=11,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHkG+iQFuqjdTdoAEp/kY7cw/kNkZh2LbPeLiGtN8Y97oQKWkY5uonMIVaaGJFGigPwU4U46n3JFHVn8N98Xn7K+8moZz1t1gU5zOrLM/YgrB2LfY32eA3cmwq2A59hxHw==',key_name='tempest-keypair-352232817',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d858413a9b01463f96545916d2abe5ab',ramdisk_id='',reservation_id='r-nwps7030',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-479123361',owner_user_name='tempest-VolumesBackupsTest-479123361-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-02T10:15:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0e5c738ba752455b908099b234a743a2',uuid=e4135ac9-548a-4e8d-99d6-cde8dedb2c77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": 
"5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.252 281049 DEBUG nova.network.os_vif_util [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Converting VIF {"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.252 281049 DEBUG nova.network.os_vif_util [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.253 281049 DEBUG os_vif [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
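At this point the converted VIFOpenVSwitch object is handed to the os_vif library's plug() entrypoint. A rough sketch of that call, assuming the os_vif package; only fields visible in the log line are filled in, so the network and port_profile objects a real caller would also populate are omitted here:

    import os_vif
    from os_vif.objects import instance_info as osv_instance
    from os_vif.objects import vif as osv_vif

    os_vif.initialize()  # load the registered plug/unplug plugins (ovs, noop, ...)

    # Field values copied from the VIFOpenVSwitch repr logged above.
    vif = osv_vif.VIFOpenVSwitch(
        id="5312b3e8-70f6-4e16-95ba-31b46130d41f",
        address="fa:16:3e:77:0c:21",
        vif_name="tap5312b3e8-70",
        bridge_name="br-int",
        plugin="ovs",
    )
    instance = osv_instance.InstanceInfo(
        uuid="e4135ac9-548a-4e8d-99d6-cde8dedb2c77",
        name="tempest-VolumesBackupsTest-instance-296444076",
    )
    os_vif.plug(vif, instance)  # results in the OVSDB transactions logged below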
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.254 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.254 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.255 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.258 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.258 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5312b3e8-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.259 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5312b3e8-70, col_values=(('external_ids', {'iface-id': '5312b3e8-70f6-4e16-95ba-31b46130d41f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:0c:21', 'vm-uuid': 'e4135ac9-548a-4e8d-99d6-cde8dedb2c77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.261 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.264 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.267 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.268 281049 INFO os_vif [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70')
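Successfully plugging the VIF amounts to the two OVSDB commands shown above: an AddPortCommand on br-int plus a DbSetCommand writing the Neutron external_ids onto the Interface row. A sketch of the same transaction issued directly through ovsdbapp (the local unix-socket endpoint is an assumption; nova-compute reuses its own IDL connection):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # external_ids copied from the DbSetCommand logged above.
    external_ids = {
        "iface-id": "5312b3e8-70f6-4e16-95ba-31b46130d41f",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:77:0c:21",
        "vm-uuid": "e4135ac9-548a-4e8d-99d6-cde8dedb2c77",
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap5312b3e8-70", may_exist=True))
        txn.add(api.db_set("Interface", "tap5312b3e8-70",
                           ("external_ids", external_ids)))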
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.319 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.320 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.320 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No VIF found with MAC fa:16:3e:77:0c:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.321 281049 INFO nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Using config drive
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.362 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:15:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "auth_id": "Joe", "tenant_id": "0fe90f11d3f64e12b3591732792a929e", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, tenant_id:0fe90f11d3f64e12b3591732792a929e, vol_name:cephfs) < ""
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.478 281049 INFO nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Creating config drive at /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/disk.config
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 02 10:15:16 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID Joe with tenant 0fe90f11d3f64e12b3591732792a929e
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.489 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwbjo22t0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378", "osd", "allow rw pool=manila_data namespace=fsvolumens_07b7e455-1272-48fc-92f9-fd54c3fafcb0", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378", "osd", "allow rw pool=manila_data namespace=fsvolumens_07b7e455-1272-48fc-92f9-fd54c3fafcb0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, tenant_id:0fe90f11d3f64e12b3591732792a929e, vol_name:cephfs) < ""
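Interleaved with the Nova spawn, the ceph-mgr volumes module is servicing a Manila request: "fs subvolume authorize" grants client.Joe rw caps on one CephFS subvolume, which the auth get / auth get-or-create dispatches above implement. A sketch of the equivalent ceph CLI call (the tenant_id seen in the log is supplied by the Manila driver rather than on the command line):

    import subprocess

    # Grant auth_id "Joe" rw access to the subvolume named in the log;
    # the command prints the resulting cephx key for the client.
    subprocess.run(
        ["ceph", "fs", "subvolume", "authorize", "cephfs",
         "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "Joe",
         "--access_level=rw"],
        check=True,
    )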
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.617 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwbjo22t0" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.662 281049 DEBUG nova.storage.rbd_utils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] rbd image e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.667 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/disk.config e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.890 281049 DEBUG oslo_concurrency.processutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/disk.config e4135ac9-548a-4e8d-99d6-cde8dedb2c77_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:15:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:16.891 281049 INFO nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Deleting local config drive /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77/disk.config because it was imported into RBD.
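Config-drive creation for an RBD-backed instance is therefore three steps: build the ISO locally with mkisofs, import it into the vms pool as <uuid>_disk.config, then delete the local copy. A short sketch reproducing the commands logged above (the /tmp staging directory is the temporary metadata tree Nova had just written, and the publisher string is shortened here):

    import os
    import subprocess

    uuid = "e4135ac9-548a-4e8d-99d6-cde8dedb2c77"
    iso = f"/var/lib/nova/instances/{uuid}/disk.config"

    # 1. Build the config-2 ISO from the staged metadata directory.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-publisher", "OpenStack Compute",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpwbjo22t0"],
        check=True,
    )
    # 2. Import it into Ceph so the guest reads it over RBD.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{uuid}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    # 3. Drop the local file once the RBD image exists.
    os.unlink(iso)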
Dec 02 10:15:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f9911edf-08c0-404a-9b15-1750f599217e", "format": "json"}]: dispatch
Dec 02 10:15:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f9911edf-08c0-404a-9b15-1750f599217e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f9911edf-08c0-404a-9b15-1750f599217e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:16 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:16.907+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f9911edf-08c0-404a-9b15-1750f599217e' of type subvolume
Dec 02 10:15:16 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f9911edf-08c0-404a-9b15-1750f599217e' of type subvolume
Dec 02 10:15:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f9911edf-08c0-404a-9b15-1750f599217e", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f9911edf-08c0-404a-9b15-1750f599217e, vol_name:cephfs) < ""
Dec 02 10:15:16 np0005541914.localdomain systemd[1]: Started libvirt secret daemon.
Dec 02 10:15:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f9911edf-08c0-404a-9b15-1750f599217e'' moved to trashcan
Dec 02 10:15:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f9911edf-08c0-404a-9b15-1750f599217e, vol_name:cephfs) < ""
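The subvolume delete path above mirrors Manila's cleanup sequence: a clone-status probe, which legitimately fails with "Operation not supported" on a plain (non-clone) subvolume, followed by a forced removal that moves the path to the trashcan and queues the async purge job. The equivalent CLI calls, as a sketch:

    import subprocess

    # clone-status is expected to fail for a plain subvolume, as in the log.
    subprocess.run(["ceph", "fs", "clone", "status", "cephfs",
                    "f9911edf-08c0-404a-9b15-1750f599217e"], check=False)
    subprocess.run(["ceph", "fs", "subvolume", "rm", "cephfs",
                    "f9911edf-08c0-404a-9b15-1750f599217e", "--force"],
                   check=True)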
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: pgmap v619: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 131 KiB/s wr, 63 op/s
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "format": "json"}]: dispatch
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d194b0f5-d0ac-4694-aaca-c67668af8e04", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3419331710' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378", "osd", "allow rw pool=manila_data namespace=fsvolumens_07b7e455-1272-48fc-92f9-fd54c3fafcb0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378", "osd", "allow rw pool=manila_data namespace=fsvolumens_07b7e455-1272-48fc-92f9-fd54c3fafcb0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:16 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378", "osd", "allow rw pool=manila_data namespace=fsvolumens_07b7e455-1272-48fc-92f9-fd54c3fafcb0", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:15:17 np0005541914.localdomain kernel: device tap5312b3e8-70 entered promiscuous mode
Dec 02 10:15:17 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670517.0304] manager: (tap5312b3e8-70): new Tun device (/org/freedesktop/NetworkManager/Devices/47)
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.032 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:17 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:15:17Z|00232|binding|INFO|Claiming lport 5312b3e8-70f6-4e16-95ba-31b46130d41f for this chassis.
Dec 02 10:15:17 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:15:17Z|00233|binding|INFO|5312b3e8-70f6-4e16-95ba-31b46130d41f: Claiming fa:16:3e:77:0c:21 10.100.0.8
Dec 02 10:15:17 np0005541914.localdomain systemd-udevd[324615]: Network interface NamePolicy= disabled on kernel command line.
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.041 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:0c:21 10.100.0.8'], port_security=['fa:16:3e:77:0c:21 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e4135ac9-548a-4e8d-99d6-cde8dedb2c77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8703a229-8c49-443e-95c6-aff62a358434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd858413a9b01463f96545916d2abe5ab', 'neutron:revision_number': '2', 'neutron:security_group_ids': '10785715-ddea-43bb-82fa-9f44a2fb1faa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22d83034-71a8-46e9-a33a-f696e74c13f0, chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=5312b3e8-70f6-4e16-95ba-31b46130d41f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.044 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 5312b3e8-70f6-4e16-95ba-31b46130d41f in datapath 8703a229-8c49-443e-95c6-aff62a358434 bound to our chassis
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.046 159483 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8703a229-8c49-443e-95c6-aff62a358434
Dec 02 10:15:17 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:15:17Z|00234|binding|INFO|Setting lport 5312b3e8-70f6-4e16-95ba-31b46130d41f ovn-installed in OVS
Dec 02 10:15:17 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:15:17Z|00235|binding|INFO|Setting lport 5312b3e8-70f6-4e16-95ba-31b46130d41f up in Southbound
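ovn-controller has now claimed the logical port for this chassis, set ovn-installed on the OVS interface, and marked the port up in the Southbound database. A quick way to confirm that state, as a sketch (assuming ovn-sbctl on this host can reach the Southbound DB):

    import subprocess

    # Show which chassis the Port_Binding row points at and whether it is up.
    subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
         "logical_port=5312b3e8-70f6-4e16-95ba-31b46130d41f"],
        check=True,
    )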
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.049 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.056 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:17 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670517.0583] device (tap5312b3e8-70): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Dec 02 10:15:17 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670517.0590] device (tap5312b3e8-70): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.059 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[088b9412-6ff1-42f6-be6a-b7bde98df4cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.060 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8703a229-81 in ovnmeta-8703a229-8c49-443e-95c6-aff62a358434 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.063 262550 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8703a229-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.063 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e397fb-e5f7-4bfb-b7c3-25aa1362c42c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.067 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[d0380a70-4ad2-44b0-ada1-87bb25759b32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain systemd-machined[202765]: New machine qemu-6-instance-0000000b.
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.080 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[061e2840-54cd-4206-b299-648d29109962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain systemd[1]: Started Virtual Machine qemu-6-instance-0000000b.
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.095 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[ab854f98-b0e7-409f-a545-32098d1ad2bb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.118 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[d9cb3f93-70b3-4237-a229-29720576a107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.123 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[763903ac-7611-4efb-9840-5e43b3698b32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670517.1257] manager: (tap8703a229-80): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.150 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[1a91300b-2b57-4cf0-8685-28ee2492e765]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.154 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[43b26a3c-478e-4c85-a4b4-792a2c1522d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap8703a229-81: link becomes ready
Dec 02 10:15:17 np0005541914.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap8703a229-80: link becomes ready
Dec 02 10:15:17 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670517.1757] device (tap8703a229-80): carrier: link connected
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.181 308685 DEBUG oslo.privsep.daemon [-] privsep: reply[78da7015-c948-45ad-9fad-ee284fcfe31d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.199 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[3040bae0-ef4f-4362-895c-e09049ea50d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8703a229-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:75:e2:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1268134, 'reachable_time': 27752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324652, 'error': None, 'target': 'ovnmeta-8703a229-8c49-443e-95c6-aff62a358434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.216 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[096c8182-289a-4fcf-a0f0-bb4f4d413d5c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe75:e26c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1268134, 'tstamp': 1268134}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324660, 'error': None, 'target': 'ovnmeta-8703a229-8c49-443e-95c6-aff62a358434', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.232 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9eb788-4d9f-4ade-a3af-6e4f10cb8aab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8703a229-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:75:e2:6c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1268134, 'reachable_time': 27752, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324669, 'error': None, 'target': 'ovnmeta-8703a229-8c49-443e-95c6-aff62a358434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
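The two large privsep replies above are pyroute2 RTM_NEWLINK dumps for tap8703a229-81, the namespace end of the veth pair the metadata agent just created inside ovnmeta-8703a229-8c49-443e-95c6-aff62a358434. A small sketch reading the same link state back with pyroute2 (namespace and interface names copied from the log):

    from pyroute2 import NetNS

    with NetNS("ovnmeta-8703a229-8c49-443e-95c6-aff62a358434") as ns:
        idx = ns.link_lookup(ifname="tap8703a229-81")[0]
        link = ns.link("get", index=idx)[0]
        # Matches attributes in the dump above, e.g. operstate and MAC.
        print(link.get_attr("IFLA_OPERSTATE"), link.get_attr("IFLA_ADDRESS"))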
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.239 281049 DEBUG nova.compute.manager [req-4bc1cba7-731c-4aa9-88ad-da105c6ab1c3 req-23af33a0-1320-4bd1-9c41-e4afe8ba83c1 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.240 281049 DEBUG oslo_concurrency.lockutils [req-4bc1cba7-731c-4aa9-88ad-da105c6ab1c3 req-23af33a0-1320-4bd1-9c41-e4afe8ba83c1 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.241 281049 DEBUG oslo_concurrency.lockutils [req-4bc1cba7-731c-4aa9-88ad-da105c6ab1c3 req-23af33a0-1320-4bd1-9c41-e4afe8ba83c1 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.241 281049 DEBUG oslo_concurrency.lockutils [req-4bc1cba7-731c-4aa9-88ad-da105c6ab1c3 req-23af33a0-1320-4bd1-9c41-e4afe8ba83c1 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.242 281049 DEBUG nova.compute.manager [req-4bc1cba7-731c-4aa9-88ad-da105c6ab1c3 req-23af33a0-1320-4bd1-9c41-e4afe8ba83c1 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Processing event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
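The network-vif-plugged event received here is what Neutron posts to Nova's os-server-external-events API once OVN reports the port bound; nova-compute matches it against the event it registered before defining the guest. A hedged sketch of that REST call (endpoint and token are placeholders):

    import requests

    NOVA = "http://nova-api.example:8774/v2.1"   # placeholder endpoint
    TOKEN = "<keystone-token>"                   # placeholder token
    body = {"events": [{
        "name": "network-vif-plugged",
        "server_uuid": "e4135ac9-548a-4e8d-99d6-cde8dedb2c77",
        "tag": "5312b3e8-70f6-4e16-95ba-31b46130d41f",
    }]}
    requests.post(f"{NOVA}/os-server-external-events",
                  json=body, headers={"X-Auth-Token": TOKEN})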
Dec 02 10:15:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v620: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 111 KiB/s wr, 53 op/s
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.263 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[47b2dc7d-4b5c-4616-a59f-822403e6651a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.320 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf0244f-e19f-40fb-9834-1ff358bc56ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.322 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8703a229-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.323 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.323 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8703a229-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.326 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:17 np0005541914.localdomain kernel: device tap8703a229-80 entered promiscuous mode
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.336 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8703a229-80, col_values=(('external_ids', {'iface-id': '37cd0238-9054-48a1-8d6c-4a73284b3493'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.337 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:17 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:15:17Z|00236|binding|INFO|Releasing lport 37cd0238-9054-48a1-8d6c-4a73284b3493 from this chassis (sb_readonly=0)
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.339 159483 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8703a229-8c49-443e-95c6-aff62a358434.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8703a229-8c49-443e-95c6-aff62a358434.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.340 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[9171a0e3-a81d-487e-85ac-8bebbac536c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.341 159483 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: global
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     log         /dev/log local0 debug
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     log-tag     haproxy-metadata-proxy-8703a229-8c49-443e-95c6-aff62a358434
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     user        root
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     group       root
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     maxconn     1024
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     pidfile     /var/lib/neutron/external/pids/8703a229-8c49-443e-95c6-aff62a358434.pid.haproxy
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     daemon
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: defaults
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     log global
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     mode http
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     option httplog
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     option dontlognull
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     option http-server-close
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     option forwardfor
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     retries                 3
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout http-request    30s
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout connect         30s
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout client          32s
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout server          32s
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     timeout http-keep-alive 30s
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: listen listener
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     bind 169.254.169.254:80
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     server metadata /var/lib/neutron/metadata_proxy
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:     http-request add-header X-OVN-Network-ID 8703a229-8c49-443e-95c6-aff62a358434
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 02 10:15:17 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:17.342 159483 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8703a229-8c49-443e-95c6-aff62a358434', 'env', 'PROCESS_TAG=haproxy-8703a229-8c49-443e-95c6-aff62a358434', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8703a229-8c49-443e-95c6-aff62a358434.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
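The haproxy_cfg dump above is rendered by the metadata agent into /var/lib/neutron/ovn-metadata-proxy/<network>.conf and then launched inside the ovnmeta-<network> namespace by the rootwrap command logged just above. A rough Python sketch of those two steps, not the agent's actual code: the config text is abridged from the log (the defaults/timeout stanza is omitted), writing the file and entering the namespace both require root, and the rootwrap/env wrapper is left out.

    import subprocess

    NETWORK_ID = '8703a229-8c49-443e-95c6-aff62a358434'
    CONF = f'/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf'

    # Abridged equivalent of the haproxy_cfg dump above.
    cfg = (
        "global\n"
        "    user root\n"
        "    group root\n"
        "    maxconn 1024\n"
        f"    pidfile /var/lib/neutron/external/pids/{NETWORK_ID}.pid.haproxy\n"
        "    daemon\n"
        "\n"
        "listen listener\n"
        "    bind 169.254.169.254:80\n"
        "    server metadata /var/lib/neutron/metadata_proxy\n"
        f"    http-request add-header X-OVN-Network-ID {NETWORK_ID}\n"
    )
    with open(CONF, 'w') as f:          # needs root, like the agent itself
        f.write(cfg)

    # Start haproxy inside the per-network namespace, mirroring the logged command.
    subprocess.run(['ip', 'netns', 'exec', f'ovnmeta-{NETWORK_ID}',
                    'haproxy', '-f', CONF], check=True)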
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.347 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.389 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764670517.3891208, e4135ac9-548a-4e8d-99d6-cde8dedb2c77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.390 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] VM Started (Lifecycle Event)
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.393 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.412 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.415 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.419 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.423 281049 INFO nova.virt.libvirt.driver [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Instance spawned successfully.
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.423 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.436 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.437 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764670517.3893542, e4135ac9-548a-4e8d-99d6-cde8dedb2c77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.437 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] VM Paused (Lifecycle Event)
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.449 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.450 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.451 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.452 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.452 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.453 281049 DEBUG nova.virt.libvirt.driver [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.460 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.465 281049 DEBUG nova.virt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Emitting event <LifecycleEvent: 1764670517.3964138, e4135ac9-548a-4e8d-99d6-cde8dedb2c77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.466 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] VM Resumed (Lifecycle Event)
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.483 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.487 281049 DEBUG nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.503 281049 INFO nova.compute.manager [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] During sync_power_state the instance has a pending task (spawning). Skip.
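The two "pending task (spawning). Skip." lines show the lifecycle handler declining to resync power state while another task owns the instance: the DB still reports power_state 0 (NOSTATE) while the hypervisor reports 1 (RUNNING), but because task_state is 'spawning' the mismatch is expected. A deliberately simplified illustration of that guard, not Nova's actual code; only the constant values printed in the log are assumed:

    # Simplified illustration of the "skip sync while a task is pending" guard.
    NOSTATE, RUNNING = 0, 1  # power_state values as printed in the log lines above

    def should_sync_power_state(db_power_state, vm_power_state, task_state):
        """Return True only when the DB record should be corrected to match the VM."""
        if task_state is not None:
            # e.g. task_state == 'spawning': another operation owns the
            # instance, so defer the sync instead of fighting it.
            return False
        return db_power_state != vm_power_state

    assert should_sync_power_state(NOSTATE, RUNNING, 'spawning') is False
    assert should_sync_power_state(NOSTATE, RUNNING, None) is True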
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.508 281049 INFO nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Took 4.32 seconds to spawn the instance on the hypervisor.
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.509 281049 DEBUG nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:15:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.570 281049 INFO nova.compute.manager [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Took 5.18 seconds to build instance.
Dec 02 10:15:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:17.586 281049 DEBUG oslo_concurrency.lockutils [None req-a541e13d-87f6-4580-832f-af5d7aef99a4 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:17 np0005541914.localdomain podman[324729]: 
Dec 02 10:15:17 np0005541914.localdomain podman[324729]: 2025-12-02 10:15:17.811407575 +0000 UTC m=+0.094761263 container create 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:15:17 np0005541914.localdomain podman[324729]: 2025-12-02 10:15:17.766064892 +0000 UTC m=+0.049418590 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 02 10:15:17 np0005541914.localdomain systemd[1]: Started libpod-conmon-9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8.scope.
Dec 02 10:15:17 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:15:17 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9251bae03dba350098b1f5dbad067aff0e21633b444c29635f1cf251c0cbf4bf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 02 10:15:17 np0005541914.localdomain podman[324729]: 2025-12-02 10:15:17.921808667 +0000 UTC m=+0.205162325 container init 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:15:17 np0005541914.localdomain podman[324729]: 2025-12-02 10:15:17.935775026 +0000 UTC m=+0.219128684 container start 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:15:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f6f1c661-5fb8-4466-9254-d282f758f450", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6f1c661-5fb8-4466-9254-d282f758f450, vol_name:cephfs) < ""
Dec 02 10:15:17 np0005541914.localdomain neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [NOTICE]   (324747) : New worker (324749) forked
Dec 02 10:15:17 np0005541914.localdomain neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [NOTICE]   (324747) : Loading success.
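Because this is a podified deployment, the haproxy instance is wrapped in a podman container named neutron-haproxy-ovnmeta-<network>; the create/init/start events and the haproxy "[NOTICE] ... Loading success." lines above mark it coming up. A quick way to confirm the container state afterwards, sketched with a standard podman subcommand (not taken from this log):

    import subprocess

    NAME = 'neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434'

    # 'podman inspect' with a Go template prints just the runtime status,
    # e.g. "running" once the start event above has completed.
    status = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Status}}', NAME],
        check=True, capture_output=True, text=True).stdout.strip()
    print(f'{NAME}: {status}')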
Dec 02 10:15:17 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "auth_id": "Joe", "tenant_id": "0fe90f11d3f64e12b3591732792a929e", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:17 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f9911edf-08c0-404a-9b15-1750f599217e", "format": "json"}]: dispatch
Dec 02 10:15:17 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f9911edf-08c0-404a-9b15-1750f599217e", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/f6f1c661-5fb8-4466-9254-d282f758f450/.meta.tmp'
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/f6f1c661-5fb8-4466-9254-d282f758f450/.meta.tmp' to config b'/volumes/_nogroup/f6f1c661-5fb8-4466-9254-d282f758f450/.meta'
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:f6f1c661-5fb8-4466-9254-d282f758f450, vol_name:cephfs) < ""
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f6f1c661-5fb8-4466-9254-d282f758f450", "format": "json"}]: dispatch
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f6f1c661-5fb8-4466-9254-d282f758f450, vol_name:cephfs) < ""
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:f6f1c661-5fb8-4466-9254-d282f758f450, vol_name:cephfs) < ""
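The audit lines above are the Manila CephFS driver creating a 1 GiB, namespace-isolated subvolume and then resolving its path. The same pair of mgr "volumes" commands can be issued directly through the ceph CLI; a small sketch (subvolume name copied from the log, admin keyring and default cluster config assumed):

    import subprocess

    VOL, SUB = 'cephfs', 'f6f1c661-5fb8-4466-9254-d282f758f450'

    def ceph(*args):
        """Run a ceph CLI command and return its stdout (assumes admin keyring)."""
        out = subprocess.run(['ceph', *args], check=True,
                             capture_output=True, text=True)
        return out.stdout.strip()

    # fs subvolume create: 1 GiB quota, isolated RADOS namespace, mode 0755,
    # matching the dispatched command in the audit log above.
    ceph('fs', 'subvolume', 'create', VOL, SUB,
         '--size', '1073741824', '--namespace-isolated', '--mode', '0755')

    # fs subvolume getpath: resolves to the /volumes/_nogroup/<name>/<uuid> path.
    print(ceph('fs', 'subvolume', 'getpath', VOL, SUB))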
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/.meta.tmp'
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/.meta.tmp' to config b'/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/.meta'
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "format": "json"}]: dispatch
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 02 10:15:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 02 10:15:19 np0005541914.localdomain ceph-mon[301710]: pgmap v620: 177 pgs: 177 active+clean; 208 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 111 KiB/s wr, 53 op/s
Dec 02 10:15:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f6f1c661-5fb8-4466-9254-d282f758f450", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f6f1c661-5fb8-4466-9254-d282f758f450", "format": "json"}]: dispatch
Dec 02 10:15:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2971877282' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:15:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:19.178 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v621: 177 pgs: 177 active+clean; 255 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 2.3 MiB/s wr, 57 op/s
Dec 02 10:15:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:19.275 281049 DEBUG nova.compute.manager [req-6b9ca700-6828-4318-aeaf-b1dc0c2a069a req-8f19298d-f4f0-4be8-ade1-0ae5114e2947 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:15:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:19.277 281049 DEBUG oslo_concurrency.lockutils [req-6b9ca700-6828-4318-aeaf-b1dc0c2a069a req-8f19298d-f4f0-4be8-ade1-0ae5114e2947 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:19.278 281049 DEBUG oslo_concurrency.lockutils [req-6b9ca700-6828-4318-aeaf-b1dc0c2a069a req-8f19298d-f4f0-4be8-ade1-0ae5114e2947 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:19.278 281049 DEBUG oslo_concurrency.lockutils [req-6b9ca700-6828-4318-aeaf-b1dc0c2a069a req-8f19298d-f4f0-4be8-ade1-0ae5114e2947 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:19.279 281049 DEBUG nova.compute.manager [req-6b9ca700-6828-4318-aeaf-b1dc0c2a069a req-8f19298d-f4f0-4be8-ade1-0ae5114e2947 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] No waiting events found dispatching network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:15:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:19.280 281049 WARNING nova.compute.manager [req-6b9ca700-6828-4318-aeaf-b1dc0c2a069a req-8f19298d-f4f0-4be8-ade1-0ae5114e2947 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received unexpected event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f for instance with vm_state active and task_state None.
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:15:20 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:20 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "format": "json"}]: dispatch
Dec 02 10:15:20 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3740406768' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/.meta.tmp'
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/.meta.tmp' to config b'/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/.meta'
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "format": "json"}]: dispatch
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, vol_name:cephfs) < ""
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489/.meta.tmp'
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489/.meta.tmp' to config b'/volumes/_nogroup/ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489/.meta'
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, vol_name:cephfs) < ""
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489", "format": "json"}]: dispatch
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, vol_name:cephfs) < ""
Dec 02 10:15:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, vol_name:cephfs) < ""
Dec 02 10:15:21 np0005541914.localdomain ceph-mon[301710]: pgmap v621: 177 pgs: 177 active+clean; 255 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 2.3 MiB/s wr, 57 op/s
Dec 02 10:15:21 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:21 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "format": "json"}]: dispatch
Dec 02 10:15:21 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:21 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v622: 177 pgs: 177 active+clean; 255 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 2.3 MiB/s wr, 57 op/s
Dec 02 10:15:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f6f1c661-5fb8-4466-9254-d282f758f450", "format": "json"}]: dispatch
Dec 02 10:15:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:21.305 281049 DEBUG nova.compute.manager [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-changed-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:15:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:21.305 281049 DEBUG nova.compute.manager [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Refreshing instance network info cache due to event network-changed-5312b3e8-70f6-4e16-95ba-31b46130d41f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 02 10:15:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:21.306 281049 DEBUG oslo_concurrency.lockutils [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 02 10:15:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:21.306 281049 DEBUG oslo_concurrency.lockutils [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquired lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 02 10:15:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:21.307 281049 DEBUG nova.network.neutron [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Refreshing network info cache for port 5312b3e8-70f6-4e16-95ba-31b46130d41f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 02 10:15:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:f6f1c661-5fb8-4466-9254-d282f758f450, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:21.311 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:f6f1c661-5fb8-4466-9254-d282f758f450, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:21 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:21.317+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6f1c661-5fb8-4466-9254-d282f758f450' of type subvolume
Dec 02 10:15:21 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'f6f1c661-5fb8-4466-9254-d282f758f450' of type subvolume
Dec 02 10:15:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f6f1c661-5fb8-4466-9254-d282f758f450", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6f1c661-5fb8-4466-9254-d282f758f450, vol_name:cephfs) < ""
Dec 02 10:15:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/f6f1c661-5fb8-4466-9254-d282f758f450'' moved to trashcan
Dec 02 10:15:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:f6f1c661-5fb8-4466-9254-d282f758f450, vol_name:cephfs) < ""
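The EOPNOTSUPP reply above is expected: "fs clone status" only applies to subvolumes created as clones, and f6f1c661-… is an ordinary subvolume, so the driver falls through to a forced delete, which the volumes module implements by moving the directory to a trashcan and queuing an async purge job. The equivalent CLI calls, sketched under the same admin-keyring assumption as the earlier ceph example:

    import subprocess

    VOL, SUB = 'cephfs', 'f6f1c661-5fb8-4466-9254-d282f758f450'

    # Only meaningful for clones; on a plain subvolume this fails with
    # "Operation not supported", exactly as the mgr replied above.
    subprocess.run(['ceph', 'fs', 'clone', 'status', VOL, SUB], check=False)

    # Forced removal: the data is moved to the trash directory and purged
    # later by the async job queued in the log.
    subprocess.run(['ceph', 'fs', 'subvolume', 'rm', VOL, SUB, '--force'],
                   check=True)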
Dec 02 10:15:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 02 10:15:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:15:21 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:21 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52
Dec 02 10:15:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_4b34f061-715a-44a3-9eab-41d055e085ea", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:15:21 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_4b34f061-715a-44a3-9eab-41d055e085ea", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
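Authorization for the share is a two-step exchange visible above: the mgr volumes module receives "fs subvolume authorize" for auth_id tempest-cephx-id-1696860369 and translates it into mon "auth get" / "auth get-or-create" calls that grant mds rw restricted to the subvolume path, osd rw on the fsvolumens_… namespace of the data pool, and mon read. A sketch of the front-end command only; the flag spelling follows the mgr command signature and may need adjusting per ceph release:

    import subprocess

    VOL, SUB = 'cephfs', '4b34f061-715a-44a3-9eab-41d055e085ea'
    AUTH_ID = 'tempest-cephx-id-1696860369'

    # The mgr expands this into the 'auth get-or-create client.<id>' call with
    # the mds/osd/mon caps shown in the mon log lines above.
    subprocess.run(['ceph', 'fs', 'subvolume', 'authorize', VOL, SUB, AUTH_ID,
                    '--access_level', 'rw'], check=True)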
Dec 02 10:15:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:21.753 281049 DEBUG nova.network.neutron [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updated VIF entry in instance network info cache for port 5312b3e8-70f6-4e16-95ba-31b46130d41f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 02 10:15:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:21.754 281049 DEBUG nova.network.neutron [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updating instance_info_cache with network_info: [{"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:15:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:21.771 281049 DEBUG oslo_concurrency.lockutils [req-5aa9bf93-2dd9-4f25-b5b3-e3af1c53f123 req-798db159-6d8b-4b86-bae7-a3a6bae63708 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Releasing lock "refresh_cache-e4135ac9-548a-4e8d-99d6-cde8dedb2c77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 02 10:15:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489", "format": "json"}]: dispatch
Dec 02 10:15:22 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:22 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_4b34f061-715a-44a3-9eab-41d055e085ea", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:22 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_4b34f061-715a-44a3-9eab-41d055e085ea", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:22 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9", "osd", "allow rw pool=manila_data namespace=fsvolumens_4b34f061-715a-44a3-9eab-41d055e085ea", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:15:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:15:23 np0005541914.localdomain ceph-mon[301710]: pgmap v622: 177 pgs: 177 active+clean; 255 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 2.3 MiB/s wr, 57 op/s
Dec 02 10:15:23 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f6f1c661-5fb8-4466-9254-d282f758f450", "format": "json"}]: dispatch
Dec 02 10:15:23 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f6f1c661-5fb8-4466-9254-d282f758f450", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:23 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "Joe", "tenant_id": "3212fac1e026474b9022ee93e4d925a9", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, tenant_id:3212fac1e026474b9022ee93e4d925a9, vol_name:cephfs) < ""
Dec 02 10:15:23 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec 02 10:15:23 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: Joe is already in use
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:Joe, format:json, prefix:fs subvolume authorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, tenant_id:3212fac1e026474b9022ee93e4d925a9, vol_name:cephfs) < ""
Dec 02 10:15:23 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:23.162+0000 7fd37dd6f640 -1 mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (1) Operation not permitted auth ID: Joe is already in use
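The "auth ID: Joe is already in use" failure is the volumes module refusing to reuse one cephx identity across tenants: client.Joe was authorized earlier for tenant 0fe90f11… (see the 10:15:17 dispatch above), so a second authorize under tenant 3212fac1… is rejected with EPERM. Inspecting the existing identity makes the conflict visible; a trivial sketch:

    import subprocess

    # 'ceph auth get client.Joe' lists the caps already granted to that ID; an
    # mds cap pointing at a different subvolume path means another share (and,
    # in the volumes module's metadata, another tenant) already owns it.
    subprocess.run(['ceph', 'auth', 'get', 'client.Joe'], check=True)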
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v623: 177 pgs: 177 active+clean; 256 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489", "format": "json"}]: dispatch
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:23 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:23.645+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489' of type subvolume
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489' of type subvolume
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, vol_name:cephfs) < ""
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489'' moved to trashcan
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489, vol_name:cephfs) < ""
Dec 02 10:15:24 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "Joe", "tenant_id": "3212fac1e026474b9022ee93e4d925a9", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:24 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 02 10:15:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:24.181 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "816532a3-40a4-4c5f-a808-14898d84932f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:816532a3-40a4-4c5f-a808-14898d84932f, vol_name:cephfs) < ""
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/816532a3-40a4-4c5f-a808-14898d84932f/.meta.tmp'
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/816532a3-40a4-4c5f-a808-14898d84932f/.meta.tmp' to config b'/volumes/_nogroup/816532a3-40a4-4c5f-a808-14898d84932f/.meta'
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:816532a3-40a4-4c5f-a808-14898d84932f, vol_name:cephfs) < ""
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "816532a3-40a4-4c5f-a808-14898d84932f", "format": "json"}]: dispatch
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:816532a3-40a4-4c5f-a808-14898d84932f, vol_name:cephfs) < ""
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:816532a3-40a4-4c5f-a808-14898d84932f, vol_name:cephfs) < ""
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 02 10:15:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:15:24 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:24 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0)
Dec 02 10:15:24 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea/952959d8-8df4-478f-98b8-ef136b3959a9
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:15:24 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 02 10:15:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "format": "json"}]: dispatch
Dec 02 10:15:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:4b34f061-715a-44a3-9eab-41d055e085ea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:4b34f061-715a-44a3-9eab-41d055e085ea, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:25 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:25.064+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4b34f061-715a-44a3-9eab-41d055e085ea' of type subvolume
Dec 02 10:15:25 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '4b34f061-715a-44a3-9eab-41d055e085ea' of type subvolume
Dec 02 10:15:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 02 10:15:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/4b34f061-715a-44a3-9eab-41d055e085ea'' moved to trashcan
Dec 02 10:15:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:4b34f061-715a-44a3-9eab-41d055e085ea, vol_name:cephfs) < ""
Dec 02 10:15:25 np0005541914.localdomain ceph-mon[301710]: pgmap v623: 177 pgs: 177 active+clean; 256 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 119 op/s
Dec 02 10:15:25 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489", "format": "json"}]: dispatch
Dec 02 10:15:25 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff1ce34f-180e-4cd7-80c2-e7cc0a1e2489", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:25 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:25 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished
Dec 02 10:15:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v624: 177 pgs: 177 active+clean; 256 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 110 op/s
Dec 02 10:15:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:15:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:15:25 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:15:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:15:26 np0005541914.localdomain systemd[1]: tmp-crun.W1rAeg.mount: Deactivated successfully.
Dec 02 10:15:26 np0005541914.localdomain podman[324767]: 2025-12-02 10:15:26.097084884 +0000 UTC m=+0.083443855 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 02 10:15:26 np0005541914.localdomain podman[324760]: 2025-12-02 10:15:26.067892326 +0000 UTC m=+0.066323548 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 10:15:26 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "816532a3-40a4-4c5f-a808-14898d84932f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:26 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "816532a3-40a4-4c5f-a808-14898d84932f", "format": "json"}]: dispatch
Dec 02 10:15:26 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:26 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:26 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "format": "json"}]: dispatch
Dec 02 10:15:26 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4b34f061-715a-44a3-9eab-41d055e085ea", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:26 np0005541914.localdomain podman[324760]: 2025-12-02 10:15:26.151883627 +0000 UTC m=+0.150314829 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:15:26 np0005541914.localdomain podman[324767]: 2025-12-02 10:15:26.168954922 +0000 UTC m=+0.155313923 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 02 10:15:26 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:15:26 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:15:26 np0005541914.localdomain podman[324759]: 2025-12-02 10:15:26.249300821 +0000 UTC m=+0.243023939 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec 02 10:15:26 np0005541914.localdomain podman[324759]: 2025-12-02 10:15:26.279295913 +0000 UTC m=+0.273019021 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true)
Dec 02 10:15:26 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:15:26 np0005541914.localdomain podman[324761]: 2025-12-02 10:15:26.299343738 +0000 UTC m=+0.292450896 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 02 10:15:26 np0005541914.localdomain podman[324761]: 2025-12-02 10:15:26.306838139 +0000 UTC m=+0.299945297 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125)
Dec 02 10:15:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:26.313 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:26 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:15:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "tempest-cephx-id-2071519372", "tenant_id": "3212fac1e026474b9022ee93e4d925a9", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2071519372, format:json, prefix:fs subvolume authorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, tenant_id:3212fac1e026474b9022ee93e4d925a9, vol_name:cephfs) < ""
Dec 02 10:15:26 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} v 0)
Dec 02 10:15:26 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} : dispatch
Dec 02 10:15:26 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-2071519372 with tenant 3212fac1e026474b9022ee93e4d925a9
Dec 02 10:15:26 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2071519372", "caps": ["mds", "allow rw path=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b", "osd", "allow rw pool=manila_data namespace=fsvolumens_a1ba20ee-ed37-461f-8a6b-289e0637343e", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:15:26 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2071519372", "caps": ["mds", "allow rw path=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b", "osd", "allow rw pool=manila_data namespace=fsvolumens_a1ba20ee-ed37-461f-8a6b-289e0637343e", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-2071519372, format:json, prefix:fs subvolume authorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, tenant_id:3212fac1e026474b9022ee93e4d925a9, vol_name:cephfs) < ""
Dec 02 10:15:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < ""
Dec 02 10:15:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta.tmp'
Dec 02 10:15:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta.tmp' to config b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta'
Dec 02 10:15:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < ""
Dec 02 10:15:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "format": "json"}]: dispatch
Dec 02 10:15:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < ""
Dec 02 10:15:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < ""
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "snap_name": "4297d647-9a4c-4f1f-9f4b-d5919a5d649a_de240f12-a8e2-4a29-90c2-24d0d5497a6c", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4297d647-9a4c-4f1f-9f4b-d5919a5d649a_de240f12-a8e2-4a29-90c2-24d0d5497a6c, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < ""
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta.tmp'
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta.tmp' to config b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta'
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4297d647-9a4c-4f1f-9f4b-d5919a5d649a_de240f12-a8e2-4a29-90c2-24d0d5497a6c, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < ""
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "snap_name": "4297d647-9a4c-4f1f-9f4b-d5919a5d649a", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4297d647-9a4c-4f1f-9f4b-d5919a5d649a, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < ""
Dec 02 10:15:27 np0005541914.localdomain systemd[1]: tmp-crun.pRo0PA.mount: Deactivated successfully.
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta.tmp'
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta.tmp' to config b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2/.meta'
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:4297d647-9a4c-4f1f-9f4b-d5919a5d649a, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < ""
Dec 02 10:15:27 np0005541914.localdomain ceph-mon[301710]: pgmap v624: 177 pgs: 177 active+clean; 256 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 110 op/s
Dec 02 10:15:27 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} : dispatch
Dec 02 10:15:27 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2071519372", "caps": ["mds", "allow rw path=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b", "osd", "allow rw pool=manila_data namespace=fsvolumens_a1ba20ee-ed37-461f-8a6b-289e0637343e", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:27 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2071519372", "caps": ["mds", "allow rw path=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b", "osd", "allow rw pool=manila_data namespace=fsvolumens_a1ba20ee-ed37-461f-8a6b-289e0637343e", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:27 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-2071519372", "caps": ["mds", "allow rw path=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b", "osd", "allow rw pool=manila_data namespace=fsvolumens_a1ba20ee-ed37-461f-8a6b-289e0637343e", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:15:27 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v625: 177 pgs: 177 active+clean; 256 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 110 op/s
Dec 02 10:15:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < ""
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/.meta.tmp'
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/.meta.tmp' to config b'/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/.meta'
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < ""
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "format": "json"}]: dispatch
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < ""
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < ""
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "816532a3-40a4-4c5f-a808-14898d84932f", "format": "json"}]: dispatch
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:816532a3-40a4-4c5f-a808-14898d84932f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:816532a3-40a4-4c5f-a808-14898d84932f, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:27 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:27.960+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '816532a3-40a4-4c5f-a808-14898d84932f' of type subvolume
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '816532a3-40a4-4c5f-a808-14898d84932f' of type subvolume
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "816532a3-40a4-4c5f-a808-14898d84932f", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:816532a3-40a4-4c5f-a808-14898d84932f, vol_name:cephfs) < ""
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/816532a3-40a4-4c5f-a808-14898d84932f'' moved to trashcan
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:816532a3-40a4-4c5f-a808-14898d84932f, vol_name:cephfs) < ""
Dec 02 10:15:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "tempest-cephx-id-2071519372", "tenant_id": "3212fac1e026474b9022ee93e4d925a9", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "format": "json"}]: dispatch
Dec 02 10:15:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "snap_name": "4297d647-9a4c-4f1f-9f4b-d5919a5d649a_de240f12-a8e2-4a29-90c2-24d0d5497a6c", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "snap_name": "4297d647-9a4c-4f1f-9f4b-d5919a5d649a", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:29.186 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:29 np0005541914.localdomain ceph-mon[301710]: pgmap v625: 177 pgs: 177 active+clean; 256 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 110 op/s
Dec 02 10:15:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "format": "json"}]: dispatch
Dec 02 10:15:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "816532a3-40a4-4c5f-a808-14898d84932f", "format": "json"}]: dispatch
Dec 02 10:15:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "816532a3-40a4-4c5f-a808-14898d84932f", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v626: 177 pgs: 177 active+clean; 257 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Dec 02 10:15:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 02 10:15:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:15:29 np0005541914.localdomain ceph-mgr[287188]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'Joe' for subvolume 'a1ba20ee-ed37-461f-8a6b-289e0637343e'
Dec 02 10:15:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:15:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 02 10:15:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:15:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b
Dec 02 10:15:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:15:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:15:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "snap_name": "fd340733-2ac3-47a8-9e18-7daf7e9911c9", "format": "json"}]: dispatch
Dec 02 10:15:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fd340733-2ac3-47a8-9e18-7daf7e9911c9, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < ""
Dec 02 10:15:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:fd340733-2ac3-47a8-9e18-7daf7e9911c9, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < ""
Dec 02 10:15:30 np0005541914.localdomain ceph-mon[301710]: pgmap v626: 177 pgs: 177 active+clean; 257 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 117 op/s
Dec 02 10:15:30 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 02 10:15:30 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 02 10:15:30 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "snap_name": "fd340733-2ac3-47a8-9e18-7daf7e9911c9", "format": "json"}]: dispatch
Dec 02 10:15:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "format": "json"}]: dispatch
Dec 02 10:15:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:334c2711-d0f6-419e-922d-408205cc4ec2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:334c2711-d0f6-419e-922d-408205cc4ec2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:30 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:30.210+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '334c2711-d0f6-419e-922d-408205cc4ec2' of type subvolume
Dec 02 10:15:30 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '334c2711-d0f6-419e-922d-408205cc4ec2' of type subvolume
Dec 02 10:15:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < ""
Dec 02 10:15:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/334c2711-d0f6-419e-922d-408205cc4ec2'' moved to trashcan
Dec 02 10:15:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:334c2711-d0f6-419e-922d-408205cc4ec2, vol_name:cephfs) < ""
Dec 02 10:15:30 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:15:30Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:0c:21 10.100.0.8
Dec 02 10:15:30 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:15:30Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:0c:21 10.100.0.8
Dec 02 10:15:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:15:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:15:31 np0005541914.localdomain podman[324841]: 2025-12-02 10:15:31.104767513 +0000 UTC m=+0.097119904 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:15:31 np0005541914.localdomain podman[324841]: 2025-12-02 10:15:31.115240595 +0000 UTC m=+0.107592976 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:15:31 np0005541914.localdomain podman[324842]: 2025-12-02 10:15:31.154210643 +0000 UTC m=+0.143869712 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 10:15:31 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:15:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:31 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 02 10:15:31 np0005541914.localdomain podman[324842]: 2025-12-02 10:15:31.196038098 +0000 UTC m=+0.185697167 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible)
Dec 02 10:15:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:15:31 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:31 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52
Dec 02 10:15:31 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:15:31 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "format": "json"}]: dispatch
Dec 02 10:15:31 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "334c2711-d0f6-419e-922d-408205cc4ec2", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:31 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49", "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:15:31 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49", "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v627: 177 pgs: 177 active+clean; 257 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 150 KiB/s wr, 73 op/s
Dec 02 10:15:31 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 02 10:15:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:31.317 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:32 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49", "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49", "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:32 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49", "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "mon", "allow r"], "format": "json"}]': finished
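[editor's note] The "auth get-or-create ... finished" exchange above is the cephx capability grant that the mgr volumes module issues on behalf of manila's CephFS driver when it handles "fs subvolume authorize". A minimal sketch of sending the same mon command through the rados Python bindings follows; the conffile path, client name and use of the admin keyring are illustrative assumptions, not values taken from this log.

    import json
    import rados

    # Assumed admin connection; adjust conffile/name for the target cluster.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()

    # Same prefix/entity/caps structure as the mon_command dispatched above.
    cmd = {
        "prefix": "auth get-or-create",
        "entity": "client.tempest-cephx-id-1696860369",
        "caps": [
            "mds", "allow rw path=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49",
            "osd", "allow rw pool=manila_data namespace=fsvolumens_76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2",
            "mon", "allow r",
        ],
        "format": "json",
    }
    # mon_command() takes the command as a JSON string plus an input buffer
    # and returns (retcode, output buffer, status string).
    ret, outbuf, outs = cluster.mon_command(json.dumps(cmd), b"")
    print(ret, outbuf.decode(), outs)
    cluster.shutdown()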
Dec 02 10:15:32 np0005541914.localdomain ceph-mon[301710]: pgmap v627: 177 pgs: 177 active+clean; 257 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 150 KiB/s wr, 73 op/s
Dec 02 10:15:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:15:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e254 e254: 6 total, 6 up, 6 in
Dec 02 10:15:32 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "tempest-cephx-id-2071519372", "format": "json"}]: dispatch
Dec 02 10:15:32 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2071519372, format:json, prefix:fs subvolume deauthorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:15:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} v 0)
Dec 02 10:15:32 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} : dispatch
Dec 02 10:15:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-2071519372"} v 0)
Dec 02 10:15:32 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2071519372"} : dispatch
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-2071519372, format:json, prefix:fs subvolume deauthorize, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "tempest-cephx-id-2071519372", "format": "json"}]: dispatch
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2071519372, format:json, prefix:fs subvolume evict, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-2071519372, client_metadata.root=/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e/44bd8d01-8657-4e23-ba40-e9561a6ed94b
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-2071519372, format:json, prefix:fs subvolume evict, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v629: 177 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 173 active+clean; 290 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Dec 02 10:15:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:15:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:15:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:15:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157932 "" "Go-http-client/1.1"
Dec 02 10:15:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:15:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19736 "" "Go-http-client/1.1"
Dec 02 10:15:33 np0005541914.localdomain ceph-mon[301710]: osdmap e254: 6 total, 6 up, 6 in
Dec 02 10:15:33 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "tempest-cephx-id-2071519372", "format": "json"}]: dispatch
Dec 02 10:15:33 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-2071519372", "format": "json"} : dispatch
Dec 02 10:15:33 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2071519372"} : dispatch
Dec 02 10:15:33 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-2071519372"} : dispatch
Dec 02 10:15:33 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-2071519372"}]': finished
Dec 02 10:15:33 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "auth_id": "tempest-cephx-id-2071519372", "format": "json"}]: dispatch
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "snap_name": "fd340733-2ac3-47a8-9e18-7daf7e9911c9_af7ac55c-f3f5-4ae4-aac4-245996ebb306", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fd340733-2ac3-47a8-9e18-7daf7e9911c9_af7ac55c-f3f5-4ae4-aac4-245996ebb306, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < ""
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta.tmp'
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta.tmp' to config b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta'
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fd340733-2ac3-47a8-9e18-7daf7e9911c9_af7ac55c-f3f5-4ae4-aac4-245996ebb306, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < ""
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "snap_name": "fd340733-2ac3-47a8-9e18-7daf7e9911c9", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fd340733-2ac3-47a8-9e18-7daf7e9911c9, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < ""
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta.tmp'
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta.tmp' to config b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5/.meta'
Dec 02 10:15:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:fd340733-2ac3-47a8-9e18-7daf7e9911c9, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < ""
Dec 02 10:15:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:34.189 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < ""
Dec 02 10:15:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:15:34 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:34 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0)
Dec 02 10:15:34 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < ""
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < ""
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2/97908fdd-14b6-443f-bfcc-d98424d8ba49
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < ""
Dec 02 10:15:34 np0005541914.localdomain sudo[324884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:15:34 np0005541914.localdomain sudo[324884]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:15:34 np0005541914.localdomain sudo[324884]: pam_unix(sudo:session): session closed for user root
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "format": "json"}]: dispatch
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:34 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:34.692+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2' of type subvolume
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2' of type subvolume
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < ""
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2'' moved to trashcan
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:34 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2, vol_name:cephfs) < ""
Dec 02 10:15:34 np0005541914.localdomain sudo[324903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:15:34 np0005541914.localdomain sudo[324903]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: pgmap v629: 177 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 173 active+clean; 290 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "snap_name": "fd340733-2ac3-47a8-9e18-7daf7e9911c9_af7ac55c-f3f5-4ae4-aac4-245996ebb306", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "snap_name": "fd340733-2ac3-47a8-9e18-7daf7e9911c9", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished
Dec 02 10:15:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v630: 177 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 173 active+clean; 290 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Dec 02 10:15:35 np0005541914.localdomain sudo[324903]: pam_unix(sudo:session): session closed for user root
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:15:35 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 008bc7f7-26c0-4b79-b645-f20b0a2c87ee (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:15:35 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 008bc7f7-26c0-4b79-b645-f20b0a2c87ee (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:15:35 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 008bc7f7-26c0-4b79-b645-f20b0a2c87ee (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:15:35 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:15:35 np0005541914.localdomain sudo[324952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:15:35 np0005541914.localdomain sudo[324952]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:15:35 np0005541914.localdomain sudo[324952]: pam_unix(sudo:session): session closed for user root
Dec 02 10:15:36 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:36 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:36 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "format": "json"}]: dispatch
Dec 02 10:15:36 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "76f373b7-a3c0-41f8-a1fb-77eeaafdd9b2", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:15:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:15:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:15:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:15:36 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 02 10:15:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < ""
Dec 02 10:15:36 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.Joe", "format": "json"} v 0)
Dec 02 10:15:36 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 02 10:15:36 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.Joe"} v 0)
Dec 02 10:15:36 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Dec 02 10:15:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:Joe, format:json, prefix:fs subvolume deauthorize, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < ""
Dec 02 10:15:36 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 02 10:15:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < ""
Dec 02 10:15:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=Joe, client_metadata.root=/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0/1c901b4c-031b-4c12-b1cb-8ac5e6296378
Dec 02 10:15:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:15:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:Joe, format:json, prefix:fs subvolume evict, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < ""
Dec 02 10:15:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:36.319 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:36.538 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:36.539 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:36.558 281049 DEBUG nova.objects.instance [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lazy-loading 'flavor' on Instance uuid e4135ac9-548a-4e8d-99d6-cde8dedb2c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:15:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:36.610 281049 INFO nova.virt.libvirt.driver [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Ignoring supplied device name: /dev/vdb
Dec 02 10:15:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:36.628 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:36.751 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:36.752 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:36.752 281049 INFO nova.compute.manager [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Attaching volume eb88c64a-7c29-421c-91ad-190ba7bbf450 to /dev/vdb
Dec 02 10:15:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:36.846 281049 DEBUG os_brick.utils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.108', 'multipath': True, 'enforce_multipath': True, 'host': 'np0005541914.localdomain', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 02 10:15:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:36.848 281049 INFO oslo.privsep.daemon [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpi9k9djog/privsep.sock']
Dec 02 10:15:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:15:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "format": "json"}]: dispatch
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:37 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:37.034+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb9dc736-c0fd-42af-8ddc-944e8a1e50c5' of type subvolume
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'fb9dc736-c0fd-42af-8ddc-944e8a1e50c5' of type subvolume
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < ""
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:15:37 np0005541914.localdomain ceph-mon[301710]: pgmap v630: 177 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 173 active+clean; 290 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Dec 02 10:15:37 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 02 10:15:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Dec 02 10:15:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Dec 02 10:15:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Dec 02 10:15:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/fb9dc736-c0fd-42af-8ddc-944e8a1e50c5'' moved to trashcan
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:fb9dc736-c0fd-42af-8ddc-944e8a1e50c5, vol_name:cephfs) < ""
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v631: 177 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 173 active+clean; 290 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:15:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:15:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.621 281049 INFO oslo.privsep.daemon [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Spawned new privsep daemon via rootwrap
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.492 324975 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.499 324975 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.503 324975 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.503 324975 INFO oslo.privsep.daemon [-] privsep daemon running as pid 324975
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.626 324975 DEBUG oslo.privsep.daemon [-] privsep: reply[64523c19-068d-4c99-aadf-d61d198833e3]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.710 324975 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.717 324975 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.717 324975 DEBUG oslo.privsep.daemon [-] privsep: reply[093827e1-c48a-4967-8f1e-e9bba1d767a7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.720 324975 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.730 324975 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.731 324975 DEBUG oslo.privsep.daemon [-] privsep: reply[f27ab309-393d-4a90-9904-1b87bd4cf04e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:cd5f4359d661\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:15:37 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.735 324975 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:37 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.747 324975 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.748 324975 DEBUG oslo.privsep.daemon [-] privsep: reply[be7a37ae-4776-4eb9-a2a5-508553273701]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.752 324975 DEBUG oslo.privsep.daemon [-] privsep: reply[53098531-2397-41de-ad94-b5fc854dbafe]: (4, '64aa5208-7bf7-490c-857b-3c1a3cae8bb3') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.753 281049 DEBUG oslo_concurrency.processutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:15:37 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.782 281049 DEBUG oslo_concurrency.processutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.786 281049 DEBUG os_brick.initiator.connectors.lightos [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.788 281049 DEBUG os_brick.initiator.connectors.lightos [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.788 281049 DEBUG os_brick.initiator.connectors.lightos [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:64aa5208-7bf7-490c-857b-3c1a3cae8bb3 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.789 281049 DEBUG os_brick.utils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] <== get_connector_properties: return (942ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.108', 'host': 'np0005541914.localdomain', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:cd5f4359d661', 'do_local_attach': False, 'nvme_hostid': '64aa5208-7bf7-490c-857b-3c1a3cae8bb3', 'system uuid': '64aa5208-7bf7-490c-857b-3c1a3cae8bb3', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:64aa5208-7bf7-490c-857b-3c1a3cae8bb3', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
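[editor's note] The connector-properties dictionary returned above is produced by os-brick on the compute host before the attachment record is updated. A short sketch of the same call, reusing the arguments shown in the "==> get_connector_properties" trace earlier in this sequence (the printed keys are only illustrative):

    # Sketch of the initiator-side property collection nova performs before
    # creating the volume attachment; arguments mirror the traced call above.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.108",
        multipath=True,
        enforce_multipath=True,
        host="np0005541914.localdomain",
    )
    # Includes the iSCSI initiator IQN, NVMe host NQN/ID and multipath flags
    # that are handed to Cinder when the attachment record is updated.
    print(props.get("initiator"), props.get("nqn"))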
Dec 02 10:15:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:37.790 281049 DEBUG nova.virt.block_device [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updating existing volume attachment record: df45d2d8-6af6-424c-887e-0dc643e47ee7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 02 10:15:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e255 e255: 6 total, 6 up, 6 in
Dec 02 10:15:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 02 10:15:37 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:15:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "auth_id": "Joe", "format": "json"}]: dispatch
Dec 02 10:15:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "format": "json"}]: dispatch
Dec 02 10:15:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "fb9dc736-c0fd-42af-8ddc-944e8a1e50c5", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:15:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:38 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:15:38 np0005541914.localdomain ceph-mon[301710]: osdmap e255: 6 total, 6 up, 6 in
Dec 02 10:15:38 np0005541914.localdomain systemd[1]: tmp-crun.KSKIXB.mount: Deactivated successfully.
Dec 02 10:15:38 np0005541914.localdomain podman[324984]: 2025-12-02 10:15:38.102347235 +0000 UTC m=+0.101426987 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 02 10:15:38 np0005541914.localdomain podman[324984]: 2025-12-02 10:15:38.140942751 +0000 UTC m=+0.140022483 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Dec 02 10:15:38 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:38.479 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:38.480 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:38.481 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:38.501 281049 DEBUG nova.objects.instance [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lazy-loading 'flavor' on Instance uuid e4135ac9-548a-4e8d-99d6-cde8dedb2c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:38.524 281049 DEBUG nova.virt.libvirt.driver [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Attempting to attach volume eb88c64a-7c29-421c-91ad-190ba7bbf450 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:38.528 281049 DEBUG nova.virt.libvirt.guest [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] attach device xml: <disk type="network" device="disk">
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]:   <source protocol="rbd" name="volumes/volume-eb88c64a-7c29-421c-91ad-190ba7bbf450">
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]:     <host name="172.18.0.103" port="6789"/>
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]:     <host name="172.18.0.104" port="6789"/>
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]:     <host name="172.18.0.105" port="6789"/>
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]:   </source>
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]:   <auth username="openstack">
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]:     <secret type="ceph" uuid="c7c8e171-a193-56fb-95fa-8879fcfa7074"/>
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]:   </auth>
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]:   <target dev="vdb" bus="virtio"/>
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]:   <serial>eb88c64a-7c29-421c-91ad-190ba7bbf450</serial>
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]: </disk>
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:38.685 281049 DEBUG nova.virt.libvirt.driver [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:38.686 281049 DEBUG nova.virt.libvirt.driver [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:38.687 281049 DEBUG nova.virt.libvirt.driver [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:38.688 281049 DEBUG nova.virt.libvirt.driver [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] No VIF found with MAC fa:16:3e:77:0c:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 02 10:15:38 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:38.789 281049 DEBUG oslo_concurrency.lockutils [None req-394b5d1f-d1ea-4cb4-8011-c0f8d0beeeb7 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:39 np0005541914.localdomain ceph-mon[301710]: pgmap v631: 177 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 173 active+clean; 290 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 KiB/s rd, 2.7 MiB/s wr, 92 op/s
Dec 02 10:15:39 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:39 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1723398924' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:39.193 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v633: 177 pgs: 177 active+clean; 291 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 533 KiB/s rd, 3.4 MiB/s wr, 116 op/s
Dec 02 10:15:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "admin", "tenant_id": "0fe90f11d3f64e12b3591732792a929e", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, tenant_id:0fe90f11d3f64e12b3591732792a929e, vol_name:cephfs) < ""
Dec 02 10:15:39 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin", "format": "json"} v 0)
Dec 02 10:15:39 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Dec 02 10:15:39 np0005541914.localdomain ceph-mgr[287188]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Dec 02 10:15:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:admin, format:json, prefix:fs subvolume authorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, tenant_id:0fe90f11d3f64e12b3591732792a929e, vol_name:cephfs) < ""
Dec 02 10:15:39 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:39.483+0000 7fd37dd6f640 -1 mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Dec 02 10:15:39 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (1) Operation not permitted auth ID: admin exists and not created by mgr plugin. Not allowed to modify
Dec 02 10:15:39 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:15:39 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3360648141' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:40 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e256 e256: 6 total, 6 up, 6 in
Dec 02 10:15:40 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Dec 02 10:15:40 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3360648141' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:40 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:40.337 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:15:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:40.338 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:40 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:40.339 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:15:40 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:40 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:15:41 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0)
Dec 02 10:15:41 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:41 np0005541914.localdomain ceph-mon[301710]: pgmap v633: 177 pgs: 177 active+clean; 291 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 533 KiB/s rd, 3.4 MiB/s wr, 116 op/s
Dec 02 10:15:41 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "admin", "tenant_id": "0fe90f11d3f64e12b3591732792a929e", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:41 np0005541914.localdomain ceph-mon[301710]: osdmap e256: 6 total, 6 up, 6 in
Dec 02 10:15:41 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:41 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:41 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:41 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished
Dec 02 10:15:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4
Dec 02 10:15:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e257 e257: 6 total, 6 up, 6 in
Dec 02 10:15:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:15:41 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v636: 177 pgs: 177 active+clean; 291 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.2 KiB/s rd, 205 KiB/s wr, 15 op/s
Dec 02 10:15:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:41.322 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e258 e258: 6 total, 6 up, 6 in
Dec 02 10:15:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:15:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:15:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:15:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:15:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:15:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:15:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:15:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:15:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:15:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:15:42 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:42 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:42 np0005541914.localdomain ceph-mon[301710]: osdmap e257: 6 total, 6 up, 6 in
Dec 02 10:15:42 np0005541914.localdomain ceph-mon[301710]: osdmap e258: 6 total, 6 up, 6 in
Dec 02 10:15:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:15:42 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "david", "tenant_id": "0fe90f11d3f64e12b3591732792a929e", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:42 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, tenant_id:0fe90f11d3f64e12b3591732792a929e, vol_name:cephfs) < ""
Dec 02 10:15:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Dec 02 10:15:42 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 02 10:15:42 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID david with tenant 0fe90f11d3f64e12b3591732792a929e
Dec 02 10:15:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7", "osd", "allow rw pool=manila_data namespace=fsvolumens_738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:15:42 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7", "osd", "allow rw pool=manila_data namespace=fsvolumens_738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:42 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, tenant_id:0fe90f11d3f64e12b3591732792a929e, vol_name:cephfs) < ""
Dec 02 10:15:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 02 10:15:42 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/257232119' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:43 np0005541914.localdomain ceph-mon[301710]: pgmap v636: 177 pgs: 177 active+clean; 291 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.2 KiB/s rd, 205 KiB/s wr, 15 op/s
Dec 02 10:15:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 02 10:15:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7", "osd", "allow rw pool=manila_data namespace=fsvolumens_738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7", "osd", "allow rw pool=manila_data namespace=fsvolumens_738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7", "osd", "allow rw pool=manila_data namespace=fsvolumens_738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "mon", "allow r"], "format": "json"}]': finished
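[editor's note] The sequence just logged ("fs subvolume authorize" → "auth get" → "auth get-or-create" with mds/osd/mon caps scoped to the subvolume path and its RADOS namespace) is the mgr volumes plugin servicing a Manila access-grant. A rough sketch of the client-side call that triggers it follows, assuming a reachable cluster and an admin keyring; flag spellings should be verified against your Ceph release, and the values are copied from the log purely for illustration.

    # Sketch: dispatch the same "fs subvolume authorize" mgr command seen
    # above; the mgr then issues the "auth get"/"auth get-or-create" mon
    # commands on our behalf and prints the cephx key for auth_id.
    import subprocess

    def subvolume_authorize(volume, subvolume, auth_id, tenant_id, access_level="rw"):
        cmd = [
            "ceph", "fs", "subvolume", "authorize",
            volume, subvolume, auth_id,
            "--access_level", access_level,
            "--tenant_id", tenant_id,
        ]
        result = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return result.stdout.strip()

    # Parameters taken from the log entries above, for illustration only.
    key = subvolume_authorize(
        "cephfs",
        "738f4ca9-41a9-48cc-8ca1-8d9ae9041202",
        "david",
        "0fe90f11d3f64e12b3591732792a929e",
    )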
Dec 02 10:15:43 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/257232119' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v638: 177 pgs: 177 active+clean; 292 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 367 KiB/s wr, 64 op/s
Dec 02 10:15:44 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "david", "tenant_id": "0fe90f11d3f64e12b3591732792a929e", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e259 e259: 6 total, 6 up, 6 in
Dec 02 10:15:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:44.200 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:44 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 02 10:15:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:15:44 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:44 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52
Dec 02 10:15:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:15:44 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:44 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 02 10:15:45 np0005541914.localdomain ceph-mon[301710]: pgmap v638: 177 pgs: 177 active+clean; 292 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 367 KiB/s wr, 64 op/s
Dec 02 10:15:45 np0005541914.localdomain ceph-mon[301710]: osdmap e259: 6 total, 6 up, 6 in
Dec 02 10:15:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:15:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v640: 177 pgs: 177 active+clean; 292 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 150 KiB/s wr, 50 op/s
Dec 02 10:15:45 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:45.341 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:15:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e260 e260: 6 total, 6 up, 6 in
Dec 02 10:15:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < ""
Dec 02 10:15:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5cdc679c-4ca6-4876-b423-0e54f450bff3/.meta.tmp'
Dec 02 10:15:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5cdc679c-4ca6-4876-b423-0e54f450bff3/.meta.tmp' to config b'/volumes/_nogroup/5cdc679c-4ca6-4876-b423-0e54f450bff3/.meta'
Dec 02 10:15:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < ""
Dec 02 10:15:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "format": "json"}]: dispatch
Dec 02 10:15:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < ""
Dec 02 10:15:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < ""
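[editor's note] The create/getpath pair above is the standard share-provisioning sequence: a 1 GiB, namespace-isolated subvolume is created with mode 0755, then its export path is resolved. A minimal sketch of the same two calls driven from Python; the subvolume name is a placeholder, not a real share.

    # Sketch of the share-creation sequence visible above.
    import subprocess

    VOL, SUB = "cephfs", "<SUBVOLUME-UUID>"

    # Create a 1 GiB subvolume in its own RADOS namespace (size is in bytes,
    # matching the 1073741824 seen in the dispatched command).
    subprocess.run(
        ["ceph", "fs", "subvolume", "create", VOL, SUB,
         "--size", str(1 * 1024**3),
         "--namespace-isolated",
         "--mode", "0755"],
        check=True,
    )

    # Resolve the path that will be exported to clients,
    # e.g. /volumes/_nogroup/<SUBVOLUME-UUID>/<internal-uuid>.
    path = subprocess.run(
        ["ceph", "fs", "subvolume", "getpath", VOL, SUB],
        check=True, capture_output=True, text=True,
    ).stdout.strip()
    print(path)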
Dec 02 10:15:46 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:46 np0005541914.localdomain ceph-mon[301710]: pgmap v640: 177 pgs: 177 active+clean; 292 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 150 KiB/s wr, 50 op/s
Dec 02 10:15:46 np0005541914.localdomain ceph-mon[301710]: osdmap e260: 6 total, 6 up, 6 in
Dec 02 10:15:46 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:46 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "format": "json"}]: dispatch
Dec 02 10:15:46 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:46.325 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < ""
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta.tmp'
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta.tmp' to config b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta'
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < ""
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "format": "json"}]: dispatch
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < ""
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < ""
Dec 02 10:15:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e261 e261: 6 total, 6 up, 6 in
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v643: 177 pgs: 177 active+clean; 292 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 141 KiB/s wr, 47 op/s
Dec 02 10:15:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:15:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "format": "json"}]: dispatch
Dec 02 10:15:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:47 np0005541914.localdomain ceph-mon[301710]: osdmap e261: 6 total, 6 up, 6 in
Dec 02 10:15:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:15:47 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0)
Dec 02 10:15:47 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:15:47 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:48 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:15:48.009 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:15:47Z, description=, device_id=151bb500-c512-4a9b-b37e-ab2024450ce8, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f40349db340>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f40349dbca0>], id=4cb12c60-99ec-4340-8fd0-4d72b3c4fbda, ip_allocation=immediate, mac_address=fa:16:3e:75:ea:08, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3682, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:15:47Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:15:48 np0005541914.localdomain podman[325040]: 2025-12-02 10:15:48.239306012 +0000 UTC m=+0.061078698 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 10:15:48 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:15:48 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:15:48 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:15:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:15:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:15:48 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:15:48.511 262347 INFO neutron.agent.dhcp.agent [None req-4ee019e8-e0ad-4e8a-a471-c689521992fa - - - - - -] DHCP configuration for ports {'4cb12c60-99ec-4340-8fd0-4d72b3c4fbda'} is completed
Dec 02 10:15:48 np0005541914.localdomain ceph-mon[301710]: pgmap v643: 177 pgs: 177 active+clean; 292 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 28 KiB/s rd, 141 KiB/s wr, 47 op/s
Dec 02 10:15:48 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished
Dec 02 10:15:48 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:48 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3835753396' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:15:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:48.929 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:49.200 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:49 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e262 e262: 6 total, 6 up, 6 in
Dec 02 10:15:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v645: 177 pgs: 177 active+clean; 293 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 138 KiB/s wr, 75 op/s
Dec 02 10:15:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "auth_id": "david", "tenant_id": "3212fac1e026474b9022ee93e4d925a9", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, tenant_id:3212fac1e026474b9022ee93e4d925a9, vol_name:cephfs) < ""
Dec 02 10:15:49 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Dec 02 10:15:49 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 02 10:15:49 np0005541914.localdomain ceph-mgr[287188]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: david is already in use
Dec 02 10:15:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:david, format:json, prefix:fs subvolume authorize, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, tenant_id:3212fac1e026474b9022ee93e4d925a9, vol_name:cephfs) < ""
Dec 02 10:15:49 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:49.419+0000 7fd37dd6f640 -1 mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Dec 02 10:15:49 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (1) Operation not permitted auth ID: david is already in use
Dec 02 10:15:50 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e263 e263: 6 total, 6 up, 6 in
Dec 02 10:15:50 np0005541914.localdomain ceph-mon[301710]: osdmap e262: 6 total, 6 up, 6 in
Dec 02 10:15:50 np0005541914.localdomain ceph-mon[301710]: pgmap v645: 177 pgs: 177 active+clean; 293 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 50 KiB/s rd, 138 KiB/s wr, 75 op/s
Dec 02 10:15:50 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "auth_id": "david", "tenant_id": "3212fac1e026474b9022ee93e4d925a9", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:50 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 02 10:15:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "snap_name": "79e50957-8d03-44cb-99af-cee54fecf7f3", "format": "json"}]: dispatch
Dec 02 10:15:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:79e50957-8d03-44cb-99af-cee54fecf7f3, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < ""
Dec 02 10:15:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:79e50957-8d03-44cb-99af-cee54fecf7f3, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < ""
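[editor's note] The snapshot created here is removed again at 10:15:53 below with "force": true. A small sketch of the equivalent CLI calls, with placeholder names:

    # Sketch of the snapshot create/rm pair dispatched in this log window.
    import subprocess

    VOL, SUB, SNAP = "cephfs", "<SUBVOLUME-UUID>", "<SNAPSHOT-UUID>"

    # "fs subvolume snapshot create" as dispatched above.
    subprocess.run(
        ["ceph", "fs", "subvolume", "snapshot", "create", VOL, SUB, SNAP],
        check=True,
    )

    # "fs subvolume snapshot rm" with --force, matching the later dispatch
    # that carries '"force": true'.
    subprocess.run(
        ["ceph", "fs", "subvolume", "snapshot", "rm", VOL, SUB, SNAP, "--force"],
        check=True,
    )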
Dec 02 10:15:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 02 10:15:51 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:15:51 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:51 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52
Dec 02 10:15:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v647: 177 pgs: 177 active+clean; 293 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 121 KiB/s wr, 66 op/s
Dec 02 10:15:51 np0005541914.localdomain ceph-mon[301710]: osdmap e263: 6 total, 6 up, 6 in
Dec 02 10:15:51 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "snap_name": "79e50957-8d03-44cb-99af-cee54fecf7f3", "format": "json"}]: dispatch
Dec 02 10:15:51 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:51 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e264 e264: 6 total, 6 up, 6 in
Dec 02 10:15:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:51.328 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:51 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:15:51 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:51 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 02 10:15:51 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e265 e265: 6 total, 6 up, 6 in
Dec 02 10:15:52 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:52 np0005541914.localdomain ceph-mon[301710]: pgmap v647: 177 pgs: 177 active+clean; 293 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 121 KiB/s wr, 66 op/s
Dec 02 10:15:52 np0005541914.localdomain ceph-mon[301710]: osdmap e264: 6 total, 6 up, 6 in
Dec 02 10:15:52 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:52 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:52 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:15:52 np0005541914.localdomain ceph-mon[301710]: osdmap e265: 6 total, 6 up, 6 in
Dec 02 10:15:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:15:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:52.579 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "auth_id": "david", "format": "json"}]: dispatch
Dec 02 10:15:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < ""
Dec 02 10:15:52 np0005541914.localdomain ceph-mgr[287188]: [volumes WARNING volumes.fs.operations.versions.subvolume_v1] deauthorized called for already-removed authID 'david' for subvolume '5cdc679c-4ca6-4876-b423-0e54f450bff3'
Dec 02 10:15:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < ""
Dec 02 10:15:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "auth_id": "david", "format": "json"}]: dispatch
Dec 02 10:15:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < ""
Dec 02 10:15:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/5cdc679c-4ca6-4876-b423-0e54f450bff3/dbb45f73-684d-42e6-8bf1-5441b2faf73a
Dec 02 10:15:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:15:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < ""
Dec 02 10:15:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e266 e266: 6 total, 6 up, 6 in
Dec 02 10:15:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v651: 177 pgs: 177 active+clean; 293 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 157 KiB/s wr, 139 op/s
Dec 02 10:15:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "snap_name": "79e50957-8d03-44cb-99af-cee54fecf7f3_2ca4c212-9f32-4ac2-b7ea-e7caebf48841", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79e50957-8d03-44cb-99af-cee54fecf7f3_2ca4c212-9f32-4ac2-b7ea-e7caebf48841, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < ""
Dec 02 10:15:53 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "auth_id": "david", "format": "json"}]: dispatch
Dec 02 10:15:53 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "auth_id": "david", "format": "json"}]: dispatch
Dec 02 10:15:53 np0005541914.localdomain ceph-mon[301710]: osdmap e266: 6 total, 6 up, 6 in
Dec 02 10:15:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta.tmp'
Dec 02 10:15:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta.tmp' to config b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta'
Dec 02 10:15:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79e50957-8d03-44cb-99af-cee54fecf7f3_2ca4c212-9f32-4ac2-b7ea-e7caebf48841, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < ""
Dec 02 10:15:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "snap_name": "79e50957-8d03-44cb-99af-cee54fecf7f3", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79e50957-8d03-44cb-99af-cee54fecf7f3, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < ""
Dec 02 10:15:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta.tmp'
Dec 02 10:15:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta.tmp' to config b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485/.meta'
Dec 02 10:15:53 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:79e50957-8d03-44cb-99af-cee54fecf7f3, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < ""
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.203 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.306 281049 DEBUG oslo_concurrency.lockutils [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.307 281049 DEBUG oslo_concurrency.lockutils [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.374 281049 INFO nova.compute.manager [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Detaching volume eb88c64a-7c29-421c-91ad-190ba7bbf450
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.430 281049 INFO nova.virt.block_device [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Attempting to driver detach volume eb88c64a-7c29-421c-91ad-190ba7bbf450 from mountpoint /dev/vdb
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.442 281049 DEBUG nova.virt.libvirt.driver [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Attempting to detach device vdb from instance e4135ac9-548a-4e8d-99d6-cde8dedb2c77 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.443 281049 DEBUG nova.virt.libvirt.guest [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] detach device xml: <disk type="network" device="disk">
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:   <source protocol="rbd" name="volumes/volume-eb88c64a-7c29-421c-91ad-190ba7bbf450">
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:     <host name="172.18.0.103" port="6789"/>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:     <host name="172.18.0.104" port="6789"/>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:     <host name="172.18.0.105" port="6789"/>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:   </source>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:   <target dev="vdb" bus="virtio"/>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:   <serial>eb88c64a-7c29-421c-91ad-190ba7bbf450</serial>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: </disk>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.453 281049 INFO nova.virt.libvirt.driver [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Successfully detached device vdb from instance e4135ac9-548a-4e8d-99d6-cde8dedb2c77 from the persistent domain config.
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.454 281049 DEBUG nova.virt.libvirt.driver [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e4135ac9-548a-4e8d-99d6-cde8dedb2c77 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.455 281049 DEBUG nova.virt.libvirt.guest [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] detach device xml: <disk type="network" device="disk">
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:   <source protocol="rbd" name="volumes/volume-eb88c64a-7c29-421c-91ad-190ba7bbf450">
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:     <host name="172.18.0.103" port="6789"/>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:     <host name="172.18.0.104" port="6789"/>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:     <host name="172.18.0.105" port="6789"/>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:   </source>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:   <target dev="vdb" bus="virtio"/>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:   <serial>eb88c64a-7c29-421c-91ad-190ba7bbf450</serial>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: </disk>
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.585 281049 DEBUG nova.virt.libvirt.driver [None req-0dd74f87-59d5-417f-b06f-89d05c40e3b0 - - - - - -] Received event <DeviceRemovedEvent: 1764670554.584748, e4135ac9-548a-4e8d-99d6-cde8dedb2c77 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.588 281049 DEBUG nova.virt.libvirt.driver [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e4135ac9-548a-4e8d-99d6-cde8dedb2c77 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 02 10:15:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.592 281049 INFO nova.virt.libvirt.driver [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Successfully detached device vdb from instance e4135ac9-548a-4e8d-99d6-cde8dedb2c77 from the live domain config.
Dec 02 10:15:54 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:15:54 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:54 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0)
Dec 02 10:15:54 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.817 281049 DEBUG nova.objects.instance [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lazy-loading 'flavor' on Instance uuid e4135ac9-548a-4e8d-99d6-cde8dedb2c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:15:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4
Dec 02 10:15:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:15:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:15:54 np0005541914.localdomain ceph-mon[301710]: pgmap v651: 177 pgs: 177 active+clean; 293 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 99 KiB/s rd, 157 KiB/s wr, 139 op/s
Dec 02 10:15:54 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "snap_name": "79e50957-8d03-44cb-99af-cee54fecf7f3_2ca4c212-9f32-4ac2-b7ea-e7caebf48841", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:54 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "snap_name": "79e50957-8d03-44cb-99af-cee54fecf7f3", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:15:54 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished
Dec 02 10:15:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:54.904 281049 DEBUG oslo_concurrency.lockutils [None req-54f57ea6-a902-4681-8b0d-c4367544c25c 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v652: 177 pgs: 177 active+clean; 293 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 125 KiB/s wr, 110 op/s
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.634 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.635 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.637 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.638 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.638 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.640 281049 INFO nova.compute.manager [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Terminating instance
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.641 281049 DEBUG nova.compute.manager [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 02 10:15:55 np0005541914.localdomain kernel: device tap5312b3e8-70 left promiscuous mode
Dec 02 10:15:55 np0005541914.localdomain NetworkManager[5967]: <info>  [1764670555.7054] device (tap5312b3e8-70): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.715 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:55 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:15:55Z|00237|binding|INFO|Releasing lport 5312b3e8-70f6-4e16-95ba-31b46130d41f from this chassis (sb_readonly=0)
Dec 02 10:15:55 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:15:55Z|00238|binding|INFO|Setting lport 5312b3e8-70f6-4e16-95ba-31b46130d41f down in Southbound
Dec 02 10:15:55 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:15:55Z|00239|binding|INFO|Removing iface tap5312b3e8-70 ovn-installed in OVS
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.719 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:55 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:55.758 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:0c:21 10.100.0.8'], port_security=['fa:16:3e:77:0c:21 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005541914.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'e4135ac9-548a-4e8d-99d6-cde8dedb2c77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8703a229-8c49-443e-95c6-aff62a358434', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd858413a9b01463f96545916d2abe5ab', 'neutron:revision_number': '4', 'neutron:security_group_ids': '10785715-ddea-43bb-82fa-9f44a2fb1faa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005541914.localdomain', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=22d83034-71a8-46e9-a33a-f696e74c13f0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>], logical_port=5312b3e8-70f6-4e16-95ba-31b46130d41f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fe0b81b1be0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:15:55 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:55.760 159483 INFO neutron.agent.ovn.metadata.agent [-] Port 5312b3e8-70f6-4e16-95ba-31b46130d41f in datapath 8703a229-8c49-443e-95c6-aff62a358434 unbound from our chassis
Dec 02 10:15:55 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:55.761 159483 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8703a229-8c49-443e-95c6-aff62a358434, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 02 10:15:55 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:55.762 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[62bef081-cd80-4f3a-9069-106785ae3b2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:55 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:55.763 159483 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8703a229-8c49-443e-95c6-aff62a358434 namespace which is not needed anymore
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.777 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:55 np0005541914.localdomain systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec 02 10:15:55 np0005541914.localdomain systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000b.scope: Consumed 14.794s CPU time.
Dec 02 10:15:55 np0005541914.localdomain systemd-machined[202765]: Machine qemu-6-instance-0000000b terminated.
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.859 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.866 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.878 281049 INFO nova.virt.libvirt.driver [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Instance destroyed successfully.
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.879 281049 DEBUG nova.objects.instance [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lazy-loading 'resources' on Instance uuid e4135ac9-548a-4e8d-99d6-cde8dedb2c77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 02 10:15:55 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:55 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.904 281049 DEBUG nova.virt.libvirt.vif [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-02T10:15:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-296444076',display_name='tempest-VolumesBackupsTest-instance-296444076',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005541914.localdomain',hostname='tempest-volumesbackupstest-instance-296444076',id=11,image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHkG+iQFuqjdTdoAEp/kY7cw/kNkZh2LbPeLiGtN8Y97oQKWkY5uonMIVaaGJFGigPwU4U46n3JFHVn8N98Xn7K+8moZz1t1gU5zOrLM/YgrB2LfY32eA3cmwq2A59hxHw==',key_name='tempest-keypair-352232817',keypairs=<?>,launch_index=0,launched_at=2025-12-02T10:15:17Z,launched_on='np0005541914.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='np0005541914.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d858413a9b01463f96545916d2abe5ab',ramdisk_id='',reservation_id='r-nwps7030',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d85e840d-fa56-497b-b5bd-b49584d3e97a',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-479123361',owner_user_name='tempest-VolumesBackupsTest-479123361-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-02T10:15:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='0e5c738ba752455b908099b234a743a2',uuid=e4135ac9-548a-4e8d-99d6-cde8dedb2c77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.905 281049 DEBUG nova.network.os_vif_util [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Converting VIF {"id": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "address": "fa:16:3e:77:0c:21", "network": {"id": "8703a229-8c49-443e-95c6-aff62a358434", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306125232-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.3"}}], "meta": {"injected": false, "tenant_id": "d858413a9b01463f96545916d2abe5ab", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5312b3e8-70", "ovs_interfaceid": "5312b3e8-70f6-4e16-95ba-31b46130d41f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.906 281049 DEBUG nova.network.os_vif_util [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.907 281049 DEBUG os_vif [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.910 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.910 281049 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5312b3e8-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.912 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.915 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:55.919 281049 INFO os_vif [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:0c:21,bridge_name='br-int',has_traffic_filtering=True,id=5312b3e8-70f6-4e16-95ba-31b46130d41f,network=Network(8703a229-8c49-443e-95c6-aff62a358434),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5312b3e8-70')
Dec 02 10:15:55 np0005541914.localdomain neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [NOTICE]   (324747) : haproxy version is 2.8.14-c23fe91
Dec 02 10:15:55 np0005541914.localdomain neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [NOTICE]   (324747) : path to executable is /usr/sbin/haproxy
Dec 02 10:15:55 np0005541914.localdomain neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [WARNING]  (324747) : Exiting Master process...
Dec 02 10:15:55 np0005541914.localdomain neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [ALERT]    (324747) : Current worker (324749) exited with code 143 (Terminated)
Dec 02 10:15:55 np0005541914.localdomain neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434[324743]: [WARNING]  (324747) : All workers exited. Exiting... (0)
Dec 02 10:15:55 np0005541914.localdomain systemd[1]: libpod-9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8.scope: Deactivated successfully.
Dec 02 10:15:55 np0005541914.localdomain podman[325099]: 2025-12-02 10:15:55.963830789 +0000 UTC m=+0.075205272 container died 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 02 10:15:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8-userdata-shm.mount: Deactivated successfully.
Dec 02 10:15:56 np0005541914.localdomain podman[325099]: 2025-12-02 10:15:56.012583327 +0000 UTC m=+0.123957760 container cleanup 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:15:56 np0005541914.localdomain podman[325128]: 2025-12-02 10:15:56.045199819 +0000 UTC m=+0.073680715 container cleanup 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:15:56 np0005541914.localdomain systemd[1]: libpod-conmon-9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8.scope: Deactivated successfully.
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.065 281049 DEBUG nova.compute.manager [req-b12beed7-9fd4-4727-954d-65cd45bafc2f req-275910b0-079e-496c-95ea-95029127d9e9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-vif-unplugged-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.066 281049 DEBUG oslo_concurrency.lockutils [req-b12beed7-9fd4-4727-954d-65cd45bafc2f req-275910b0-079e-496c-95ea-95029127d9e9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.067 281049 DEBUG oslo_concurrency.lockutils [req-b12beed7-9fd4-4727-954d-65cd45bafc2f req-275910b0-079e-496c-95ea-95029127d9e9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.068 281049 DEBUG oslo_concurrency.lockutils [req-b12beed7-9fd4-4727-954d-65cd45bafc2f req-275910b0-079e-496c-95ea-95029127d9e9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.068 281049 DEBUG nova.compute.manager [req-b12beed7-9fd4-4727-954d-65cd45bafc2f req-275910b0-079e-496c-95ea-95029127d9e9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] No waiting events found dispatching network-vif-unplugged-5312b3e8-70f6-4e16-95ba-31b46130d41f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.069 281049 DEBUG nova.compute.manager [req-b12beed7-9fd4-4727-954d-65cd45bafc2f req-275910b0-079e-496c-95ea-95029127d9e9 dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-vif-unplugged-5312b3e8-70f6-4e16-95ba-31b46130d41f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 02 10:15:56 np0005541914.localdomain podman[325146]: 2025-12-02 10:15:56.096139884 +0000 UTC m=+0.063767610 container remove 9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:15:56 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:56.101 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[b22281ac-564f-4951-ae11-f0b5469348f2]: (4, ('Tue Dec  2 10:15:55 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434 (9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8)\n9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8\nTue Dec  2 10:15:56 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8703a229-8c49-443e-95c6-aff62a358434 (9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8)\n9540b38500ddcc3f9720744ddb7bfd0538c7f46acca7cf67f58475e81d15f8e8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:56 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:56.103 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[b0660266-e5da-4bf8-9b9a-14de1b6fe3c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:56 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:56.104 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8703a229-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:15:56 np0005541914.localdomain kernel: device tap8703a229-80 left promiscuous mode
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.108 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.114 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "david", "format": "json"}]: dispatch
Dec 02 10:15:56 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:56.117 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[c402b3d2-5dbc-454d-a484-3b31aae865fe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < ""
Dec 02 10:15:56 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:56.135 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[dce3df0d-b60c-46b7-8b73-2d14ec923a3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:56 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:56.136 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[a8bdbc9e-e8e8-43d7-bcc6-e82aa28bf2fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:56 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:56.150 262550 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6eab08-0018-461d-963b-210d1f43a298]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1268128, 'reachable_time': 26781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325163, 'error': None, 'target': 'ovnmeta-8703a229-8c49-443e-95c6-aff62a358434', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:56 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:56.153 159602 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8703a229-8c49-443e-95c6-aff62a358434 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 02 10:15:56 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:15:56.153 159602 DEBUG oslo.privsep.daemon [-] privsep: reply[274b20e4-555f-43dc-9e60-e32d2b2b2310]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 02 10:15:56 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.david", "format": "json"} v 0)
Dec 02 10:15:56 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 02 10:15:56 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.david"} v 0)
Dec 02 10:15:56 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:david, format:json, prefix:fs subvolume deauthorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < ""
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "david", "format": "json"}]: dispatch
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < ""
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=david, client_metadata.root=/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202/ae52ead4-7b68-47be-8dae-42ce82602ac7
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:david, format:json, prefix:fs subvolume evict, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < ""
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.595 281049 INFO nova.virt.libvirt.driver [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Deleting instance files /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77_del
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.596 281049 INFO nova.virt.libvirt.driver [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Deletion of /var/lib/nova/instances/e4135ac9-548a-4e8d-99d6-cde8dedb2c77_del complete
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.719 281049 INFO nova.compute.manager [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Took 1.08 seconds to destroy the instance on the hypervisor.
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.720 281049 DEBUG oslo.service.loopingcall [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.720 281049 DEBUG nova.compute.manager [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 02 10:15:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:56.721 281049 DEBUG nova.network.neutron [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 02 10:15:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:15:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:15:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:15:56 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:15:56 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e267 e267: 6 total, 6 up, 6 in
Dec 02 10:15:56 np0005541914.localdomain podman[325168]: 2025-12-02 10:15:56.860388857 +0000 UTC m=+0.098816797 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute)
Dec 02 10:15:56 np0005541914.localdomain ceph-mon[301710]: pgmap v652: 177 pgs: 177 active+clean; 293 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 125 KiB/s wr, 110 op/s
Dec 02 10:15:56 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "david", "format": "json"}]: dispatch
Dec 02 10:15:56 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Dec 02 10:15:56 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Dec 02 10:15:56 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Dec 02 10:15:56 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Dec 02 10:15:56 np0005541914.localdomain ceph-mon[301710]: osdmap e267: 6 total, 6 up, 6 in
Dec 02 10:15:56 np0005541914.localdomain podman[325166]: 2025-12-02 10:15:56.910385863 +0000 UTC m=+0.151765334 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 10:15:56 np0005541914.localdomain podman[325166]: 2025-12-02 10:15:56.94119263 +0000 UTC m=+0.182572101 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125)
Dec 02 10:15:56 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-9251bae03dba350098b1f5dbad067aff0e21633b444c29635f1cf251c0cbf4bf-merged.mount: Deactivated successfully.
Dec 02 10:15:56 np0005541914.localdomain systemd[1]: run-netns-ovnmeta\x2d8703a229\x2d8c49\x2d443e\x2d95c6\x2daff62a358434.mount: Deactivated successfully.
Dec 02 10:15:56 np0005541914.localdomain podman[325169]: 2025-12-02 10:15:56.959209103 +0000 UTC m=+0.192889767 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:15:56 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:15:56 np0005541914.localdomain podman[325168]: 2025-12-02 10:15:56.976601478 +0000 UTC m=+0.215029388 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=edpm, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "format": "json"}]: dispatch
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ffd086bb-506f-4c57-a27d-657caefc8485, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ffd086bb-506f-4c57-a27d-657caefc8485, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:56 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:56.984+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ffd086bb-506f-4c57-a27d-657caefc8485' of type subvolume
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ffd086bb-506f-4c57-a27d-657caefc8485' of type subvolume
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < ""
Dec 02 10:15:56 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ffd086bb-506f-4c57-a27d-657caefc8485'' moved to trashcan
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ffd086bb-506f-4c57-a27d-657caefc8485, vol_name:cephfs) < ""
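The sequence from the `fs clone status` dispatch down to this line shows the caller probing whether the target is a clone, getting EOPNOTSUPP (95) because it is a plain subvolume, and then falling back to a forced `fs subvolume rm`, which moves the path into the volumes module's trashcan and queues an async purge job. A hedged sketch of that fallback, reusing the python-rados `mgr_command()` assumption from the previous example; only the JSON payloads and the errno are taken from the log.

```python
import json
import rados  # assumed python-rados bindings, as in the earlier sketch


def mgr_cmd(cluster, payload):
    """Send one JSON command to the active mgr; returns (ret, out, err)."""
    return cluster.mgr_command(json.dumps(payload), b'')


cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()

sub = "ffd086bb-506f-4c57-a27d-657caefc8485"

ret, out, err = mgr_cmd(cluster, {"prefix": "fs clone status",
                                  "vol_name": "cephfs",
                                  "clone_name": sub,
                                  "format": "json"})
if ret != 0:
    # The mgr reply above reports errno 95 (EOPNOTSUPP): clone-status is not
    # allowed on a plain subvolume. Fall back to a forced removal, after which
    # the volumes module moves the path to its trashcan and queues a purge job.
    mgr_cmd(cluster, {"prefix": "fs subvolume rm",
                      "vol_name": "cephfs",
                      "sub_name": sub,
                      "force": True,
                      "format": "json"})
cluster.shutdown()
```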
Dec 02 10:15:57 np0005541914.localdomain podman[325167]: 2025-12-02 10:15:57.056476722 +0000 UTC m=+0.294676525 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:15:57 np0005541914.localdomain podman[325167]: 2025-12-02 10:15:57.069005627 +0000 UTC m=+0.307205450 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 10:15:57 np0005541914.localdomain podman[325169]: 2025-12-02 10:15:57.081256884 +0000 UTC m=+0.314937588 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:15:57 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:15:57 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:15:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v654: 177 pgs: 177 active+clean; 293 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 105 KiB/s wr, 93 op/s
Dec 02 10:15:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:15:57 np0005541914.localdomain neutron_sriov_agent[255428]: 2025-12-02 10:15:57.569 2 INFO neutron.agent.securitygroups_rpc [req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 req-bbad3521-a7cd-468f-9368-bc82a5a5c437 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Security group member updated ['10785715-ddea-43bb-82fa-9f44a2fb1faa']
Dec 02 10:15:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e268 e268: 6 total, 6 up, 6 in
Dec 02 10:15:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:57.847 281049 DEBUG nova.network.neutron [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 02 10:15:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:57.862 281049 INFO nova.compute.manager [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Took 1.14 seconds to deallocate network for instance.
Dec 02 10:15:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 02 10:15:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:15:57 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:57 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "david", "format": "json"}]: dispatch
Dec 02 10:15:57 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "format": "json"}]: dispatch
Dec 02 10:15:57 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ffd086bb-506f-4c57-a27d-657caefc8485", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:57 np0005541914.localdomain ceph-mon[301710]: osdmap e268: 6 total, 6 up, 6 in
Dec 02 10:15:57 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: Creating meta for ID tempest-cephx-id-1696860369 with tenant 82d5a09e66904b8ca3c7a7850f1e5c52
Dec 02 10:15:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} v 0)
Dec 02 10:15:57 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.043 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.043 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_authorize(access_level:rw, auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume authorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, tenant_id:82d5a09e66904b8ca3c7a7850f1e5c52, vol_name:cephfs) < ""
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.092 281049 DEBUG oslo_concurrency.processutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.117 281049 DEBUG nova.compute.manager [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.118 281049 DEBUG oslo_concurrency.lockutils [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Acquiring lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.118 281049 DEBUG oslo_concurrency.lockutils [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.118 281049 DEBUG oslo_concurrency.lockutils [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.118 281049 DEBUG nova.compute.manager [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] No waiting events found dispatching network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.119 281049 WARNING nova.compute.manager [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received unexpected event network-vif-plugged-5312b3e8-70f6-4e16-95ba-31b46130d41f for instance with vm_state deleted and task_state None.
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.119 281049 DEBUG nova.compute.manager [req-05cf092e-64b9-4b03-9ced-58a53cfc5cb9 req-8059bc6f-904b-4071-883c-675f93d6c2cb dafd7fe1ebe54740b64cc9f8b3667fc9 497073c2347a4b2dbbf501873318fbd3 - - default default] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Received event network-vif-deleted-5312b3e8-70f6-4e16-95ba-31b46130d41f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 02 10:15:58 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:15:58 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2709406224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.559 281049 DEBUG oslo_concurrency.processutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.565 281049 DEBUG nova.compute.provider_tree [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.630 281049 DEBUG nova.scheduler.client.report [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.768 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.802 281049 INFO nova.scheduler.client.report [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Deleted allocations for instance e4135ac9-548a-4e8d-99d6-cde8dedb2c77
Dec 02 10:15:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:58.880 281049 DEBUG oslo_concurrency.lockutils [None req-3475e8cc-5e11-46e8-9664-ecb90f3bf921 0e5c738ba752455b908099b234a743a2 d858413a9b01463f96545916d2abe5ab - - default default] Lock "e4135ac9-548a-4e8d-99d6-cde8dedb2c77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
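With the `compute_resources` and per-instance locks released, the terminate flow for e4135ac9-548a-4e8d-99d6-cde8dedb2c77 is complete, and the inventory the resource tracker re-reported a few lines earlier is unchanged. As a quick sanity check of what that inventory line implies, Placement treats schedulable capacity as (total - reserved) * allocation_ratio, so this provider advertises 128 VCPU, 15226 MB of RAM and 40 GB of disk; the snippet below only redoes that arithmetic with the logged values.

```python
# Inventory values copied from the resource tracker line above.
inventory = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
}

for rc, inv in inventory.items():
    # Placement capacity formula: (total - reserved) * allocation_ratio.
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} allocatable")
# -> VCPU: 128, MEMORY_MB: 15226, DISK_GB: 40
```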
Dec 02 10:15:58 np0005541914.localdomain ceph-mon[301710]: pgmap v654: 177 pgs: 177 active+clean; 293 MiB data, 1.4 GiB used, 41 GiB / 42 GiB avail; 66 KiB/s rd, 105 KiB/s wr, 93 op/s
Dec 02 10:15:58 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "tenant_id": "82d5a09e66904b8ca3c7a7850f1e5c52", "access_level": "rw", "format": "json"}]: dispatch
Dec 02 10:15:58 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:15:58 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:58 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"} : dispatch
Dec 02 10:15:58 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1696860369", "caps": ["mds", "allow rw path=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4", "osd", "allow rw pool=manila_data namespace=fsvolumens_5aafe356-dc3f-4e86-bea5-6655303e90b0", "mon", "allow r"], "format": "json"}]': finished
Dec 02 10:15:58 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2709406224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:15:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:59.207 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v656: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 103 KiB/s rd, 243 KiB/s wr, 156 op/s
Dec 02 10:15:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "format": "json"}]: dispatch
Dec 02 10:15:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:15:59 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5cdc679c-4ca6-4876-b423-0e54f450bff3' of type subvolume
Dec 02 10:15:59 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:15:59.440+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5cdc679c-4ca6-4876-b423-0e54f450bff3' of type subvolume
Dec 02 10:15:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "force": true, "format": "json"}]: dispatch
Dec 02 10:15:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < ""
Dec 02 10:15:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5cdc679c-4ca6-4876-b423-0e54f450bff3'' moved to trashcan
Dec 02 10:15:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:15:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5cdc679c-4ca6-4876-b423-0e54f450bff3, vol_name:cephfs) < ""
Dec 02 10:15:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:15:59.970 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:15:59 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:15:59 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:15:59 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:15:59 np0005541914.localdomain podman[325286]: 2025-12-02 10:15:59.994836398 +0000 UTC m=+0.052151864 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 02 10:16:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "718c525a-962f-42e3-9573-9fc3919d4aa7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:718c525a-962f-42e3-9573-9fc3919d4aa7, vol_name:cephfs) < ""
Dec 02 10:16:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/718c525a-962f-42e3-9573-9fc3919d4aa7/.meta.tmp'
Dec 02 10:16:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/718c525a-962f-42e3-9573-9fc3919d4aa7/.meta.tmp' to config b'/volumes/_nogroup/718c525a-962f-42e3-9573-9fc3919d4aa7/.meta'
Dec 02 10:16:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:718c525a-962f-42e3-9573-9fc3919d4aa7, vol_name:cephfs) < ""
Dec 02 10:16:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "718c525a-962f-42e3-9573-9fc3919d4aa7", "format": "json"}]: dispatch
Dec 02 10:16:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:718c525a-962f-42e3-9573-9fc3919d4aa7, vol_name:cephfs) < ""
Dec 02 10:16:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:718c525a-962f-42e3-9573-9fc3919d4aa7, vol_name:cephfs) < ""
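This create/getpath pair is the provisioning half of the cycle: a 1 GiB, namespace-isolated subvolume is created with mode 0755, then its path is fetched so it can be handed out as the share export location. A self-contained sketch of the two payloads as the log shows them (values copied from the audit lines; sending them would use the same mgr_command approach sketched earlier).

```python
GiB = 1024 ** 3  # 1073741824 bytes, the "size" field in the create command above

create = {"prefix": "fs subvolume create", "vol_name": "cephfs",
          "sub_name": "718c525a-962f-42e3-9573-9fc3919d4aa7",
          "size": 1 * GiB, "namespace_isolated": True,
          "mode": "0755", "format": "json"}

getpath = {"prefix": "fs subvolume getpath", "vol_name": "cephfs",
           "sub_name": "718c525a-962f-42e3-9573-9fc3919d4aa7",
           "format": "json"}

assert create["size"] == 1073741824
# The getpath reply carries the /volumes/_nogroup/<sub_name>/<uuid> path that a
# later authorize step (see the caps sketch above) would grant access to.
```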
Dec 02 10:16:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:00.952 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:00 np0005541914.localdomain ceph-mon[301710]: pgmap v656: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 103 KiB/s rd, 243 KiB/s wr, 156 op/s
Dec 02 10:16:00 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "format": "json"}]: dispatch
Dec 02 10:16:00 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5cdc679c-4ca6-4876-b423-0e54f450bff3", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:00 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:16:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v657: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 117 KiB/s wr, 56 op/s
Dec 02 10:16:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:16:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:16:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} v 0)
Dec 02 10:16:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:16:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} v 0)
Dec 02 10:16:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:16:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume deauthorize, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:16:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:16:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:16:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict clients with auth_name=tempest-cephx-id-1696860369, client_metadata.root=/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0/5ba23353-e45c-4844-8c4d-be87f063ddd4
Dec 02 10:16:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_v1] evict: joined all
Dec 02 10:16:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_evict(auth_id:tempest-cephx-id-1696860369, format:json, prefix:fs subvolume evict, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:16:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e269 e269: 6 total, 6 up, 6 in
Dec 02 10:16:01 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "718c525a-962f-42e3-9573-9fc3919d4aa7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:01 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "718c525a-962f-42e3-9573-9fc3919d4aa7", "format": "json"}]: dispatch
Dec 02 10:16:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:16:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1696860369", "format": "json"} : dispatch
Dec 02 10:16:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"} : dispatch
Dec 02 10:16:01 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1696860369"}]': finished
Dec 02 10:16:01 np0005541914.localdomain ceph-mon[301710]: osdmap e269: 6 total, 6 up, 6 in
Dec 02 10:16:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:16:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:16:02 np0005541914.localdomain podman[325309]: 2025-12-02 10:16:02.086624832 +0000 UTC m=+0.090099580 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:16:02 np0005541914.localdomain podman[325309]: 2025-12-02 10:16:02.127846907 +0000 UTC m=+0.131321625 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:16:02 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:16:02 np0005541914.localdomain podman[325310]: 2025-12-02 10:16:02.13375516 +0000 UTC m=+0.136114454 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, name=ubi9-minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 10:16:02 np0005541914.localdomain podman[325310]: 2025-12-02 10:16:02.214517441 +0000 UTC m=+0.216876735 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, container_name=openstack_network_exporter, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec 02 10:16:02 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:16:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:16:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "format": "json"}]: dispatch
Dec 02 10:16:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:02 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a1ba20ee-ed37-461f-8a6b-289e0637343e' of type subvolume
Dec 02 10:16:02 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:02.768+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'a1ba20ee-ed37-461f-8a6b-289e0637343e' of type subvolume
Dec 02 10:16:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:16:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a1ba20ee-ed37-461f-8a6b-289e0637343e'' moved to trashcan
Dec 02 10:16:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a1ba20ee-ed37-461f-8a6b-289e0637343e, vol_name:cephfs) < ""
Dec 02 10:16:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e270 e270: 6 total, 6 up, 6 in
Dec 02 10:16:02 np0005541914.localdomain ceph-mon[301710]: pgmap v657: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 117 KiB/s wr, 56 op/s
Dec 02 10:16:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:16:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "auth_id": "tempest-cephx-id-1696860369", "format": "json"}]: dispatch
Dec 02 10:16:02 np0005541914.localdomain ceph-mon[301710]: osdmap e270: 6 total, 6 up, 6 in
Dec 02 10:16:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:16:03.185 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:16:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:16:03.185 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:16:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:16:03.186 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:16:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v660: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 325 KiB/s wr, 113 op/s
Dec 02 10:16:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:16:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:16:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:16:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:16:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:16:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19254 "" "Go-http-client/1.1"
Dec 02 10:16:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e271 e271: 6 total, 6 up, 6 in
Dec 02 10:16:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "718c525a-962f-42e3-9573-9fc3919d4aa7", "format": "json"}]: dispatch
Dec 02 10:16:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:718c525a-962f-42e3-9573-9fc3919d4aa7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:718c525a-962f-42e3-9573-9fc3919d4aa7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:03 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:03.997+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '718c525a-962f-42e3-9573-9fc3919d4aa7' of type subvolume
Dec 02 10:16:03 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '718c525a-962f-42e3-9573-9fc3919d4aa7' of type subvolume
Dec 02 10:16:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "718c525a-962f-42e3-9573-9fc3919d4aa7", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:718c525a-962f-42e3-9573-9fc3919d4aa7, vol_name:cephfs) < ""
Dec 02 10:16:04 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "format": "json"}]: dispatch
Dec 02 10:16:04 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a1ba20ee-ed37-461f-8a6b-289e0637343e", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:04 np0005541914.localdomain ceph-mon[301710]: osdmap e271: 6 total, 6 up, 6 in
Dec 02 10:16:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/718c525a-962f-42e3-9573-9fc3919d4aa7'' moved to trashcan
Dec 02 10:16:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:04 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:718c525a-962f-42e3-9573-9fc3919d4aa7, vol_name:cephfs) < ""
Dec 02 10:16:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:04.210 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:16:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2684550647' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:16:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:16:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2684550647' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:16:05 np0005541914.localdomain ceph-mon[301710]: pgmap v660: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 325 KiB/s wr, 113 op/s
Dec 02 10:16:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "718c525a-962f-42e3-9573-9fc3919d4aa7", "format": "json"}]: dispatch
Dec 02 10:16:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "718c525a-962f-42e3-9573-9fc3919d4aa7", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2684550647' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:16:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2684550647' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:16:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "format": "json"}]: dispatch
Dec 02 10:16:05 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:05 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:05 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:05.238+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5aafe356-dc3f-4e86-bea5-6655303e90b0' of type subvolume
Dec 02 10:16:05 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5aafe356-dc3f-4e86-bea5-6655303e90b0' of type subvolume
Dec 02 10:16:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:05 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:16:05 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5aafe356-dc3f-4e86-bea5-6655303e90b0'' moved to trashcan
Dec 02 10:16:05 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:05 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5aafe356-dc3f-4e86-bea5-6655303e90b0, vol_name:cephfs) < ""
Dec 02 10:16:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v662: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 168 KiB/s wr, 38 op/s
Dec 02 10:16:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:05.984 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "format": "json"}]: dispatch
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:06 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:06.017+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '07b7e455-1272-48fc-92f9-fd54c3fafcb0' of type subvolume
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '07b7e455-1272-48fc-92f9-fd54c3fafcb0' of type subvolume
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < ""
Dec 02 10:16:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3357420336' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/07b7e455-1272-48fc-92f9-fd54c3fafcb0'' moved to trashcan
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:07b7e455-1272-48fc-92f9-fd54c3fafcb0, vol_name:cephfs) < ""
Dec 02 10:16:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e272 e272: 6 total, 6 up, 6 in
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:16:06
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['.mgr', 'manila_data', 'backups', 'manila_metadata', 'images', 'vms', 'volumes']
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:16:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:16:07 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "format": "json"}]: dispatch
Dec 02 10:16:07 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5aafe356-dc3f-4e86-bea5-6655303e90b0", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:07 np0005541914.localdomain ceph-mon[301710]: pgmap v662: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 168 KiB/s wr, 38 op/s
Dec 02 10:16:07 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "format": "json"}]: dispatch
Dec 02 10:16:07 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "07b7e455-1272-48fc-92f9-fd54c3fafcb0", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2602515044' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:16:07 np0005541914.localdomain ceph-mon[301710]: osdmap e272: 6 total, 6 up, 6 in
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < ""
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp'
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta'
Dec 02 10:16:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:07.256 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < ""
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "format": "json"}]: dispatch
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < ""
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < ""
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v664: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 185 KiB/s wr, 41 op/s
Dec 02 10:16:07 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:16:07 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:16:07 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:16:07 np0005541914.localdomain podman[325363]: 2025-12-02 10:16:07.283435152 +0000 UTC m=+0.093781613 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014866541910943606 of space, bias 1.0, pg target 0.2968352868218407 quantized to 32 (current 32)
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 2.7263051367950866e-06 of space, bias 1.0, pg target 0.0005425347222222222 quantized to 32 (current 32)
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0016815850083752094 of space, bias 4.0, pg target 1.3385416666666667 quantized to 16 (current 16)
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:16:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:16:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:07.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:16:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:16:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:16:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:16:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:08.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:16:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:08.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:16:08 np0005541914.localdomain podman[325384]: 2025-12-02 10:16:08.548713108 +0000 UTC m=+0.056996762 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:16:08 np0005541914.localdomain podman[325384]: 2025-12-02 10:16:08.559898422 +0000 UTC m=+0.068182096 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 10:16:08 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:16:09 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "format": "json"}]: dispatch
Dec 02 10:16:09 np0005541914.localdomain ceph-mon[301710]: pgmap v664: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 185 KiB/s wr, 41 op/s
Dec 02 10:16:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:09.213 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v665: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 233 KiB/s wr, 67 op/s
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "admin", "format": "json"}]: dispatch
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < ""
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: [volumes ERROR volumes.fs.operations.versions.subvolume_v1] auth ID: admin doesn't exist
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_deauthorize(auth_id:admin, format:json, prefix:fs subvolume deauthorize, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < ""
Dec 02 10:16:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:09.520+0000 7fd37dd6f640 -1 mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (2) No such file or directory auth ID: admin doesn't exist
Dec 02 10:16:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:09.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:16:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:09.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:16:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:09.551 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:16:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:09.552 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:16:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:09.552 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:16:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:09.552 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "format": "json"}]: dispatch
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:09 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:09.700+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '738f4ca9-41a9-48cc-8ca1-8d9ae9041202' of type subvolume
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '738f4ca9-41a9-48cc-8ca1-8d9ae9041202' of type subvolume
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < ""
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/738f4ca9-41a9-48cc-8ca1-8d9ae9041202'' moved to trashcan
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:09 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:738f4ca9-41a9-48cc-8ca1-8d9ae9041202, vol_name:cephfs) < ""
Dec 02 10:16:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:16:09 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1781133973' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:16:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:10.011 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:16:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1781133973' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:16:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:10.213 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:16:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:10.214 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11409MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:16:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:10.215 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:16:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:10.215 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:16:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:10.422 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:16:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:10.422 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:16:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:10.445 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:16:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "snap_name": "395e084c-5f31-4d0b-b40b-8a631da3af09", "format": "json"}]: dispatch
Dec 02 10:16:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < ""
Dec 02 10:16:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < ""
Dec 02 10:16:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:10.875 281049 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764670555.8740442, e4135ac9-548a-4e8d-99d6-cde8dedb2c77 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 02 10:16:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:10.877 281049 INFO nova.compute.manager [-] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] VM Stopped (Lifecycle Event)
Dec 02 10:16:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:16:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2839302655' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:16:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:10.954 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:16:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:10.963 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:16:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:11.040 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:11.090 281049 DEBUG nova.compute.manager [None req-924b3d3d-3944-48f4-bb01-134bf874dc0d - - - - - -] [instance: e4135ac9-548a-4e8d-99d6-cde8dedb2c77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 02 10:16:11 np0005541914.localdomain ceph-mon[301710]: pgmap v665: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 233 KiB/s wr, 67 op/s
Dec 02 10:16:11 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "auth_id": "admin", "format": "json"}]: dispatch
Dec 02 10:16:11 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "format": "json"}]: dispatch
Dec 02 10:16:11 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "738f4ca9-41a9-48cc-8ca1-8d9ae9041202", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2839302655' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:16:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:11.114 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:16:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:11.178 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:16:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:11.180 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:16:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v666: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 61 KiB/s wr, 25 op/s
Dec 02 10:16:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e273 e273: 6 total, 6 up, 6 in
Dec 02 10:16:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:16:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:16:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:16:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:16:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:16:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:16:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:16:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:16:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:16:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:16:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:16:12 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:16:12 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "snap_name": "395e084c-5f31-4d0b-b40b-8a631da3af09", "format": "json"}]: dispatch
Dec 02 10:16:12 np0005541914.localdomain ceph-mon[301710]: osdmap e273: 6 total, 6 up, 6 in
Dec 02 10:16:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:12.176 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:16:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:12.176 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:16:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:12.177 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:16:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:12.177 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:16:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:12.260 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:16:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:12.261 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:16:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:16:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e274 e274: 6 total, 6 up, 6 in
Dec 02 10:16:13 np0005541914.localdomain ceph-mon[301710]: pgmap v666: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 61 KiB/s wr, 25 op/s
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v669: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 127 KiB/s wr, 38 op/s
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "snap_name": "395e084c-5f31-4d0b-b40b-8a631da3af09", "target_sub_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, target_sub_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, vol_name:cephfs) < ""
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp'
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp' to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta'
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.clone_index] tracking-id 65fed077-d23e-47ab-99c4-c249dff46217 for path b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd'
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp'
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta'
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, target_sub_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, vol_name:cephfs) < ""
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.940+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.940+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.940+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.940+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.940+0000 7fd382d79640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, c4aff2e0-53b6-4b58-8317-036e112a5bcd)
Dec 02 10:16:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.972+0000 7fd383d7b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.972+0000 7fd383d7b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.972+0000 7fd383d7b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.972+0000 7fd383d7b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:13.973+0000 7fd383d7b640 -1 client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: client.0 error registering admin socket command: (17) File exists
Dec 02 10:16:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, c4aff2e0-53b6-4b58-8317-036e112a5bcd) -- by 0 seconds
Dec 02 10:16:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp'
Dec 02 10:16:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp' to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta'
Dec 02 10:16:14 np0005541914.localdomain ceph-mon[301710]: osdmap e274: 6 total, 6 up, 6 in
Dec 02 10:16:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:14.216 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:14.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:16:15 np0005541914.localdomain ceph-mon[301710]: pgmap v669: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 127 KiB/s wr, 38 op/s
Dec 02 10:16:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "snap_name": "395e084c-5f31-4d0b-b40b-8a631da3af09", "target_sub_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch
Dec 02 10:16:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch
Dec 02 10:16:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e275 e275: 6 total, 6 up, 6 in
Dec 02 10:16:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v671: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 56 KiB/s wr, 6 op/s
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:16:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:16:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:15.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:16:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:15.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:16:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:16.080 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:16 np0005541914.localdomain ceph-mon[301710]: osdmap e275: 6 total, 6 up, 6 in
Dec 02 10:16:16 np0005541914.localdomain ceph-mon[301710]: mgrmap e51: np0005541914.lljzmk(active, since 16m), standbys: np0005541912.qwddia, np0005541913.mfesdm
Dec 02 10:16:16 np0005541914.localdomain ceph-mon[301710]: pgmap v671: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 56 KiB/s wr, 6 op/s
Dec 02 10:16:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.snap/395e084c-5f31-4d0b-b40b-8a631da3af09/5859e93c-459f-40c4-a0ad-221e72111d9a' to b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/5a7b8174-6a6e-4c82-9582-305f3b6a0931'
Dec 02 10:16:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp'
Dec 02 10:16:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp' to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta'
Dec 02 10:16:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.clone_index] untracking 65fed077-d23e-47ab-99c4-c249dff46217
Dec 02 10:16:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp'
Dec 02 10:16:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta'
Dec 02 10:16:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp'
Dec 02 10:16:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta.tmp' to config b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd/.meta'
Dec 02 10:16:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, c4aff2e0-53b6-4b58-8317-036e112a5bcd)
Dec 02 10:16:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v672: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 56 KiB/s wr, 6 op/s
Dec 02 10:16:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 02 10:16:17 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2273007360' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:16:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 02 10:16:17 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2273007360' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:16:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:16:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2273007360' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:16:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2273007360' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:16:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:18.525 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:16:18 np0005541914.localdomain ceph-mon[301710]: pgmap v672: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 56 KiB/s wr, 6 op/s
Dec 02 10:16:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < ""
Dec 02 10:16:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta.tmp'
Dec 02 10:16:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta.tmp' to config b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta'
Dec 02 10:16:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < ""
Dec 02 10:16:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "format": "json"}]: dispatch
Dec 02 10:16:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < ""
Dec 02 10:16:19 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < ""
Dec 02 10:16:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:19.223 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v673: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 98 KiB/s wr, 57 op/s
Dec 02 10:16:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "format": "json"}]: dispatch
Dec 02 10:16:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:16:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3867755322' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:16:20 np0005541914.localdomain ceph-mon[301710]: pgmap v673: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 37 KiB/s rd, 98 KiB/s wr, 57 op/s
Dec 02 10:16:20 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1826029349' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:16:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:21.127 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v674: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 49 KiB/s wr, 47 op/s
Dec 02 10:16:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "75b01de1-fd46-4d42-88bc-75b04e569dcb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, vol_name:cephfs) < ""
Dec 02 10:16:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/75b01de1-fd46-4d42-88bc-75b04e569dcb/.meta.tmp'
Dec 02 10:16:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/75b01de1-fd46-4d42-88bc-75b04e569dcb/.meta.tmp' to config b'/volumes/_nogroup/75b01de1-fd46-4d42-88bc-75b04e569dcb/.meta'
Dec 02 10:16:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, vol_name:cephfs) < ""
Dec 02 10:16:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "75b01de1-fd46-4d42-88bc-75b04e569dcb", "format": "json"}]: dispatch
Dec 02 10:16:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, vol_name:cephfs) < ""
Dec 02 10:16:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, vol_name:cephfs) < ""
Dec 02 10:16:21 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e276 e276: 6 total, 6 up, 6 in
Dec 02 10:16:21 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:16:21 np0005541914.localdomain ceph-mon[301710]: osdmap e276: 6 total, 6 up, 6 in
Dec 02 10:16:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "snap_name": "5212963f-950b-468a-8f66-9155c3dfc1c6", "format": "json"}]: dispatch
Dec 02 10:16:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5212963f-950b-468a-8f66-9155c3dfc1c6, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < ""
Dec 02 10:16:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:5212963f-950b-468a-8f66-9155c3dfc1c6, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < ""
Dec 02 10:16:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:16:22 np0005541914.localdomain ceph-mon[301710]: pgmap v674: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 49 KiB/s wr, 47 op/s
Dec 02 10:16:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "75b01de1-fd46-4d42-88bc-75b04e569dcb", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "75b01de1-fd46-4d42-88bc-75b04e569dcb", "format": "json"}]: dispatch
Dec 02 10:16:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8710895b-fa91-4f50-bc6f-341cddce5e76", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8710895b-fa91-4f50-bc6f-341cddce5e76, vol_name:cephfs) < ""
Dec 02 10:16:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8710895b-fa91-4f50-bc6f-341cddce5e76/.meta.tmp'
Dec 02 10:16:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8710895b-fa91-4f50-bc6f-341cddce5e76/.meta.tmp' to config b'/volumes/_nogroup/8710895b-fa91-4f50-bc6f-341cddce5e76/.meta'
Dec 02 10:16:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8710895b-fa91-4f50-bc6f-341cddce5e76, vol_name:cephfs) < ""
Dec 02 10:16:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8710895b-fa91-4f50-bc6f-341cddce5e76", "format": "json"}]: dispatch
Dec 02 10:16:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8710895b-fa91-4f50-bc6f-341cddce5e76, vol_name:cephfs) < ""
Dec 02 10:16:23 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8710895b-fa91-4f50-bc6f-341cddce5e76, vol_name:cephfs) < ""
Dec 02 10:16:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v676: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 98 KiB/s wr, 52 op/s
Dec 02 10:16:23 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "snap_name": "5212963f-950b-468a-8f66-9155c3dfc1c6", "format": "json"}]: dispatch
Dec 02 10:16:23 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8710895b-fa91-4f50-bc6f-341cddce5e76", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:23 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:16:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:24.225 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:24 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8710895b-fa91-4f50-bc6f-341cddce5e76", "format": "json"}]: dispatch
Dec 02 10:16:24 np0005541914.localdomain ceph-mon[301710]: pgmap v676: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 98 KiB/s wr, 52 op/s
Dec 02 10:16:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v677: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 79 KiB/s wr, 42 op/s
Dec 02 10:16:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d7d186aa-42ff-406e-975a-236ed40d3d49", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d7d186aa-42ff-406e-975a-236ed40d3d49, vol_name:cephfs) < ""
Dec 02 10:16:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d7d186aa-42ff-406e-975a-236ed40d3d49/.meta.tmp'
Dec 02 10:16:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d7d186aa-42ff-406e-975a-236ed40d3d49/.meta.tmp' to config b'/volumes/_nogroup/d7d186aa-42ff-406e-975a-236ed40d3d49/.meta'
Dec 02 10:16:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d7d186aa-42ff-406e-975a-236ed40d3d49, vol_name:cephfs) < ""
Dec 02 10:16:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d7d186aa-42ff-406e-975a-236ed40d3d49", "format": "json"}]: dispatch
Dec 02 10:16:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d7d186aa-42ff-406e-975a-236ed40d3d49, vol_name:cephfs) < ""
Dec 02 10:16:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d7d186aa-42ff-406e-975a-236ed40d3d49, vol_name:cephfs) < ""
Dec 02 10:16:25 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:16:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:26.129 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "snap_name": "5212963f-950b-468a-8f66-9155c3dfc1c6_f8a18cca-a1e3-4a3b-ab79-627d72357f7e", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5212963f-950b-468a-8f66-9155c3dfc1c6_f8a18cca-a1e3-4a3b-ab79-627d72357f7e, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < ""
Dec 02 10:16:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta.tmp'
Dec 02 10:16:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta.tmp' to config b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta'
Dec 02 10:16:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5212963f-950b-468a-8f66-9155c3dfc1c6_f8a18cca-a1e3-4a3b-ab79-627d72357f7e, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < ""
Dec 02 10:16:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "snap_name": "5212963f-950b-468a-8f66-9155c3dfc1c6", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5212963f-950b-468a-8f66-9155c3dfc1c6, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < ""
Dec 02 10:16:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta.tmp'
Dec 02 10:16:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta.tmp' to config b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d/.meta'
Dec 02 10:16:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:5212963f-950b-468a-8f66-9155c3dfc1c6, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < ""
Dec 02 10:16:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:16:26 np0005541914.localdomain ceph-mon[301710]: pgmap v677: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 79 KiB/s wr, 42 op/s
Dec 02 10:16:26 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d7d186aa-42ff-406e-975a-236ed40d3d49", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:26 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d7d186aa-42ff-406e-975a-236ed40d3d49", "format": "json"}]: dispatch
Dec 02 10:16:26 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:16:27 np0005541914.localdomain systemd[1]: tmp-crun.esm64p.mount: Deactivated successfully.
Dec 02 10:16:27 np0005541914.localdomain podman[325473]: 2025-12-02 10:16:27.092387117 +0000 UTC m=+0.089957735 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 02 10:16:27 np0005541914.localdomain podman[325473]: 2025-12-02 10:16:27.103956292 +0000 UTC m=+0.101526910 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:16:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:16:27 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:16:27 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:16:27 np0005541914.localdomain systemd[1]: tmp-crun.IsyuYk.mount: Deactivated successfully.
Dec 02 10:16:27 np0005541914.localdomain podman[325474]: 2025-12-02 10:16:27.211586689 +0000 UTC m=+0.202356478 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 02 10:16:27 np0005541914.localdomain podman[325501]: 2025-12-02 10:16:27.218938915 +0000 UTC m=+0.087073836 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:16:27 np0005541914.localdomain podman[325501]: 2025-12-02 10:16:27.255524339 +0000 UTC m=+0.123659300 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 02 10:16:27 np0005541914.localdomain podman[325474]: 2025-12-02 10:16:27.270985645 +0000 UTC m=+0.261756054 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 10:16:27 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:16:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v678: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 79 KiB/s wr, 42 op/s
Dec 02 10:16:27 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:16:27 np0005541914.localdomain podman[325500]: 2025-12-02 10:16:27.260763161 +0000 UTC m=+0.131670947 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:16:27 np0005541914.localdomain podman[325500]: 2025-12-02 10:16:27.34602774 +0000 UTC m=+0.216935536 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:16:27 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:16:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8710895b-fa91-4f50-bc6f-341cddce5e76", "format": "json"}]: dispatch
Dec 02 10:16:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8710895b-fa91-4f50-bc6f-341cddce5e76, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8710895b-fa91-4f50-bc6f-341cddce5e76, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:27 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:27.543+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8710895b-fa91-4f50-bc6f-341cddce5e76' of type subvolume
Dec 02 10:16:27 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8710895b-fa91-4f50-bc6f-341cddce5e76' of type subvolume
Dec 02 10:16:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8710895b-fa91-4f50-bc6f-341cddce5e76", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8710895b-fa91-4f50-bc6f-341cddce5e76, vol_name:cephfs) < ""
Dec 02 10:16:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:16:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8710895b-fa91-4f50-bc6f-341cddce5e76'' moved to trashcan
Dec 02 10:16:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8710895b-fa91-4f50-bc6f-341cddce5e76, vol_name:cephfs) < ""
Dec 02 10:16:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "snap_name": "5212963f-950b-468a-8f66-9155c3dfc1c6_f8a18cca-a1e3-4a3b-ab79-627d72357f7e", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "snap_name": "5212963f-950b-468a-8f66-9155c3dfc1c6", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d07c53ab-0584-4998-92b9-1d7bd9006b39", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, vol_name:cephfs) < ""
Dec 02 10:16:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d07c53ab-0584-4998-92b9-1d7bd9006b39/.meta.tmp'
Dec 02 10:16:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d07c53ab-0584-4998-92b9-1d7bd9006b39/.meta.tmp' to config b'/volumes/_nogroup/d07c53ab-0584-4998-92b9-1d7bd9006b39/.meta'
Dec 02 10:16:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, vol_name:cephfs) < ""
Dec 02 10:16:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d07c53ab-0584-4998-92b9-1d7bd9006b39", "format": "json"}]: dispatch
Dec 02 10:16:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, vol_name:cephfs) < ""
Dec 02 10:16:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, vol_name:cephfs) < ""
Dec 02 10:16:29 np0005541914.localdomain ceph-mon[301710]: pgmap v678: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 27 KiB/s rd, 79 KiB/s wr, 42 op/s
Dec 02 10:16:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8710895b-fa91-4f50-bc6f-341cddce5e76", "format": "json"}]: dispatch
Dec 02 10:16:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8710895b-fa91-4f50-bc6f-341cddce5e76", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:16:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:29.265 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v679: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 88 KiB/s wr, 6 op/s
Dec 02 10:16:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "format": "json"}]: dispatch
Dec 02 10:16:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:29 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:29.420+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3d3f9f10-b22d-485e-b12b-97dbef75415d' of type subvolume
Dec 02 10:16:29 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3d3f9f10-b22d-485e-b12b-97dbef75415d' of type subvolume
Dec 02 10:16:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < ""
Dec 02 10:16:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3d3f9f10-b22d-485e-b12b-97dbef75415d'' moved to trashcan
Dec 02 10:16:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:29 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3d3f9f10-b22d-485e-b12b-97dbef75415d, vol_name:cephfs) < ""
Dec 02 10:16:30 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d07c53ab-0584-4998-92b9-1d7bd9006b39", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:30 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d07c53ab-0584-4998-92b9-1d7bd9006b39", "format": "json"}]: dispatch
Dec 02 10:16:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:31.161 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:31 np0005541914.localdomain ceph-mon[301710]: pgmap v679: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 88 KiB/s wr, 6 op/s
Dec 02 10:16:31 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "format": "json"}]: dispatch
Dec 02 10:16:31 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3d3f9f10-b22d-485e-b12b-97dbef75415d", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v680: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 88 KiB/s wr, 6 op/s
Dec 02 10:16:32 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "997ff79f-92e7-4de5-90ed-58387671be8e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:32 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:997ff79f-92e7-4de5-90ed-58387671be8e, vol_name:cephfs) < ""
Dec 02 10:16:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:16:32 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/997ff79f-92e7-4de5-90ed-58387671be8e/.meta.tmp'
Dec 02 10:16:32 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/997ff79f-92e7-4de5-90ed-58387671be8e/.meta.tmp' to config b'/volumes/_nogroup/997ff79f-92e7-4de5-90ed-58387671be8e/.meta'
Dec 02 10:16:32 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:997ff79f-92e7-4de5-90ed-58387671be8e, vol_name:cephfs) < ""
Dec 02 10:16:32 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "997ff79f-92e7-4de5-90ed-58387671be8e", "format": "json"}]: dispatch
Dec 02 10:16:32 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:997ff79f-92e7-4de5-90ed-58387671be8e, vol_name:cephfs) < ""
Dec 02 10:16:32 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:997ff79f-92e7-4de5-90ed-58387671be8e, vol_name:cephfs) < ""
Dec 02 10:16:32 np0005541914.localdomain ceph-mon[301710]: pgmap v680: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 307 B/s rd, 88 KiB/s wr, 6 op/s
Dec 02 10:16:32 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:16:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e277 e277: 6 total, 6 up, 6 in
Dec 02 10:16:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:16:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:16:33 np0005541914.localdomain systemd[1]: tmp-crun.P2QU2D.mount: Deactivated successfully.
Dec 02 10:16:33 np0005541914.localdomain podman[325558]: 2025-12-02 10:16:33.067918765 +0000 UTC m=+0.070715114 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:16:33 np0005541914.localdomain podman[325558]: 2025-12-02 10:16:33.079914293 +0000 UTC m=+0.082710692 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 10:16:33 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:16:33 np0005541914.localdomain podman[325559]: 2025-12-02 10:16:33.115825566 +0000 UTC m=+0.118745109 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release=1755695350, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Dec 02 10:16:33 np0005541914.localdomain podman[325559]: 2025-12-02 10:16:33.129715303 +0000 UTC m=+0.132634776 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 10:16:33 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:16:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v682: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 103 KiB/s wr, 7 op/s
Dec 02 10:16:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:16:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:16:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:16:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:16:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:16:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19261 "" "Go-http-client/1.1"
Dec 02 10:16:33 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "997ff79f-92e7-4de5-90ed-58387671be8e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:33 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "997ff79f-92e7-4de5-90ed-58387671be8e", "format": "json"}]: dispatch
Dec 02 10:16:33 np0005541914.localdomain ceph-mon[301710]: osdmap e277: 6 total, 6 up, 6 in
Dec 02 10:16:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:34.268 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:34 np0005541914.localdomain ceph-mon[301710]: pgmap v682: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 103 KiB/s wr, 7 op/s
Dec 02 10:16:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v683: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 103 KiB/s wr, 7 op/s
Dec 02 10:16:35 np0005541914.localdomain sudo[325600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:16:35 np0005541914.localdomain sudo[325600]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:16:35 np0005541914.localdomain sudo[325600]: pam_unix(sudo:session): session closed for user root
Dec 02 10:16:35 np0005541914.localdomain sshd[325617]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:16:35 np0005541914.localdomain sudo[325619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:16:35 np0005541914.localdomain sudo[325619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:16:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:36.193 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:36 np0005541914.localdomain sshd[325617]: Invalid user solana from 193.32.162.146 port 45778
Dec 02 10:16:36 np0005541914.localdomain sshd[325617]: Connection closed by invalid user solana 193.32.162.146 port 45778 [preauth]
Dec 02 10:16:36 np0005541914.localdomain sudo[325619]: pam_unix(sudo:session): session closed for user root
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "997ff79f-92e7-4de5-90ed-58387671be8e", "format": "json"}]: dispatch
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:997ff79f-92e7-4de5-90ed-58387671be8e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:36 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:16:36 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:997ff79f-92e7-4de5-90ed-58387671be8e, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:36 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:36.641+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '997ff79f-92e7-4de5-90ed-58387671be8e' of type subvolume
Dec 02 10:16:36 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:16:36 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '997ff79f-92e7-4de5-90ed-58387671be8e' of type subvolume
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "997ff79f-92e7-4de5-90ed-58387671be8e", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:36 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:997ff79f-92e7-4de5-90ed-58387671be8e, vol_name:cephfs) < ""
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev e437591f-a38e-4bd7-8823-784481c3d081 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev e437591f-a38e-4bd7-8823-784481c3d081 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event e437591f-a38e-4bd7-8823-784481c3d081 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:16:36 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:16:36 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/997ff79f-92e7-4de5-90ed-58387671be8e'' moved to trashcan
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:997ff79f-92e7-4de5-90ed-58387671be8e, vol_name:cephfs) < ""
Dec 02 10:16:36 np0005541914.localdomain sudo[325669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:16:36 np0005541914.localdomain sudo[325669]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:16:36 np0005541914.localdomain sudo[325669]: pam_unix(sudo:session): session closed for user root
Dec 02 10:16:36 np0005541914.localdomain ceph-mon[301710]: pgmap v683: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 103 KiB/s wr, 7 op/s
Dec 02 10:16:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:16:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:16:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:16:36 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:16:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:16:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:16:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:16:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:16:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:16:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v684: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 103 KiB/s wr, 7 op/s
Dec 02 10:16:37 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:16:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:16:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:16:37 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "997ff79f-92e7-4de5-90ed-58387671be8e", "format": "json"}]: dispatch
Dec 02 10:16:37 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "997ff79f-92e7-4de5-90ed-58387671be8e", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:16:38 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:16:38Z|00240|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 02 10:16:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7e7db66c-701d-40e6-a69b-fbd4e0d8a416", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:38 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, vol_name:cephfs) < ""
Dec 02 10:16:38 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/7e7db66c-701d-40e6-a69b-fbd4e0d8a416/.meta.tmp'
Dec 02 10:16:38 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/7e7db66c-701d-40e6-a69b-fbd4e0d8a416/.meta.tmp' to config b'/volumes/_nogroup/7e7db66c-701d-40e6-a69b-fbd4e0d8a416/.meta'
Dec 02 10:16:38 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, vol_name:cephfs) < ""
Dec 02 10:16:38 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7e7db66c-701d-40e6-a69b-fbd4e0d8a416", "format": "json"}]: dispatch
Dec 02 10:16:38 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, vol_name:cephfs) < ""
Dec 02 10:16:38 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, vol_name:cephfs) < ""
Dec 02 10:16:38 np0005541914.localdomain ceph-mon[301710]: pgmap v684: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 103 KiB/s wr, 7 op/s
Dec 02 10:16:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:16:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:16:39 np0005541914.localdomain podman[325687]: 2025-12-02 10:16:39.081878334 +0000 UTC m=+0.083030303 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:16:39 np0005541914.localdomain podman[325687]: 2025-12-02 10:16:39.097182294 +0000 UTC m=+0.098334313 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 02 10:16:39 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:16:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:39.269 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v685: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 86 KiB/s wr, 5 op/s
Dec 02 10:16:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d07c53ab-0584-4998-92b9-1d7bd9006b39", "format": "json"}]: dispatch
Dec 02 10:16:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:39 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd07c53ab-0584-4998-92b9-1d7bd9006b39' of type subvolume
Dec 02 10:16:39 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:39.810+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd07c53ab-0584-4998-92b9-1d7bd9006b39' of type subvolume
Dec 02 10:16:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d07c53ab-0584-4998-92b9-1d7bd9006b39", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, vol_name:cephfs) < ""
Dec 02 10:16:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d07c53ab-0584-4998-92b9-1d7bd9006b39'' moved to trashcan
Dec 02 10:16:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:39 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d07c53ab-0584-4998-92b9-1d7bd9006b39, vol_name:cephfs) < ""
Dec 02 10:16:39 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7e7db66c-701d-40e6-a69b-fbd4e0d8a416", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:39 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7e7db66c-701d-40e6-a69b-fbd4e0d8a416", "format": "json"}]: dispatch
Dec 02 10:16:40 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:16:40.682 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:16:40 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:16:40.683 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:16:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:40.683 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:40 np0005541914.localdomain ceph-mon[301710]: pgmap v685: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 86 KiB/s wr, 5 op/s
Dec 02 10:16:40 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d07c53ab-0584-4998-92b9-1d7bd9006b39", "format": "json"}]: dispatch
Dec 02 10:16:40 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d07c53ab-0584-4998-92b9-1d7bd9006b39", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:41.246 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v686: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 86 KiB/s wr, 5 op/s
Dec 02 10:16:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e278 e278: 6 total, 6 up, 6 in
Dec 02 10:16:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:16:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:16:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:16:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:16:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:16:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:16:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:16:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:16:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:16:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:16:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:16:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:16:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:16:42 np0005541914.localdomain ceph-mon[301710]: pgmap v686: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 86 KiB/s wr, 5 op/s
Dec 02 10:16:42 np0005541914.localdomain ceph-mon[301710]: osdmap e278: 6 total, 6 up, 6 in
Dec 02 10:16:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d7d186aa-42ff-406e-975a-236ed40d3d49", "format": "json"}]: dispatch
Dec 02 10:16:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d7d186aa-42ff-406e-975a-236ed40d3d49, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d7d186aa-42ff-406e-975a-236ed40d3d49, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:43 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:43.018+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd7d186aa-42ff-406e-975a-236ed40d3d49' of type subvolume
Dec 02 10:16:43 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd7d186aa-42ff-406e-975a-236ed40d3d49' of type subvolume
Dec 02 10:16:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d7d186aa-42ff-406e-975a-236ed40d3d49", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d7d186aa-42ff-406e-975a-236ed40d3d49, vol_name:cephfs) < ""
Dec 02 10:16:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d7d186aa-42ff-406e-975a-236ed40d3d49'' moved to trashcan
Dec 02 10:16:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:43 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d7d186aa-42ff-406e-975a-236ed40d3d49, vol_name:cephfs) < ""
Dec 02 10:16:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v688: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s
Dec 02 10:16:43 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d7d186aa-42ff-406e-975a-236ed40d3d49", "format": "json"}]: dispatch
Dec 02 10:16:43 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d7d186aa-42ff-406e-975a-236ed40d3d49", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:44.272 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:44 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:16:44.685 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:16:44 np0005541914.localdomain ceph-mon[301710]: pgmap v688: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s
Dec 02 10:16:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v689: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s
Dec 02 10:16:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7e7db66c-701d-40e6-a69b-fbd4e0d8a416", "format": "json"}]: dispatch
Dec 02 10:16:45 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:45 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:45 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7e7db66c-701d-40e6-a69b-fbd4e0d8a416' of type subvolume
Dec 02 10:16:45 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:45.605+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '7e7db66c-701d-40e6-a69b-fbd4e0d8a416' of type subvolume
Dec 02 10:16:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7e7db66c-701d-40e6-a69b-fbd4e0d8a416", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:45 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, vol_name:cephfs) < ""
Dec 02 10:16:45 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/7e7db66c-701d-40e6-a69b-fbd4e0d8a416'' moved to trashcan
Dec 02 10:16:45 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:45 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:7e7db66c-701d-40e6-a69b-fbd4e0d8a416, vol_name:cephfs) < ""
Dec 02 10:16:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "75b01de1-fd46-4d42-88bc-75b04e569dcb", "format": "json"}]: dispatch
Dec 02 10:16:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:46 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:16:46.274+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '75b01de1-fd46-4d42-88bc-75b04e569dcb' of type subvolume
Dec 02 10:16:46 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '75b01de1-fd46-4d42-88bc-75b04e569dcb' of type subvolume
Dec 02 10:16:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "75b01de1-fd46-4d42-88bc-75b04e569dcb", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, vol_name:cephfs) < ""
Dec 02 10:16:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:46.284 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/75b01de1-fd46-4d42-88bc-75b04e569dcb'' moved to trashcan
Dec 02 10:16:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:75b01de1-fd46-4d42-88bc-75b04e569dcb, vol_name:cephfs) < ""
Dec 02 10:16:46 np0005541914.localdomain ceph-mon[301710]: pgmap v689: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s
Dec 02 10:16:46 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7e7db66c-701d-40e6-a69b-fbd4e0d8a416", "format": "json"}]: dispatch
Dec 02 10:16:46 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7e7db66c-701d-40e6-a69b-fbd4e0d8a416", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v690: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s
Dec 02 10:16:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:16:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "75b01de1-fd46-4d42-88bc-75b04e569dcb", "format": "json"}]: dispatch
Dec 02 10:16:47 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "75b01de1-fd46-4d42-88bc-75b04e569dcb", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:16:48 np0005541914.localdomain snmpd[69217]: empty variable list in _query
Dec 02 10:16:48 np0005541914.localdomain ceph-mon[301710]: pgmap v690: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 66 KiB/s wr, 3 op/s
Dec 02 10:16:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:49.275 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v691: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 87 KiB/s wr, 5 op/s
Dec 02 10:16:50 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch
Dec 02 10:16:50 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:50 np0005541914.localdomain ceph-mon[301710]: pgmap v691: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 87 KiB/s wr, 5 op/s
Dec 02 10:16:50 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch
Dec 02 10:16:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v692: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 87 KiB/s wr, 5 op/s
Dec 02 10:16:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:51.323 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.864399) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670611864506, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2824, "num_deletes": 263, "total_data_size": 4017438, "memory_usage": 4083848, "flush_reason": "Manual Compaction"}
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670611881012, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 2166306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31193, "largest_seqno": 34012, "table_properties": {"data_size": 2156116, "index_size": 6055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27564, "raw_average_key_size": 22, "raw_value_size": 2133473, "raw_average_value_size": 1769, "num_data_blocks": 260, "num_entries": 1206, "num_filter_entries": 1206, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670490, "oldest_key_time": 1764670490, "file_creation_time": 1764670611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 16681 microseconds, and 7111 cpu microseconds.
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.881087) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 2166306 bytes OK
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.881113) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.883606) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.883629) EVENT_LOG_v1 {"time_micros": 1764670611883622, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.883650) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 4003923, prev total WAL file size 4004672, number of live WAL files 2.
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.884909) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034323537' seq:72057594037927935, type:22 .. '6D6772737461740034353038' seq:0, type:0; will stop at (end)
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(2115KB)], [48(18MB)]
Dec 02 10:16:51 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670611884995, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 21070379, "oldest_snapshot_seqno": -1}
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 14715 keys, 19405700 bytes, temperature: kUnknown
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612016290, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 19405700, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19319790, "index_size": 48049, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36805, "raw_key_size": 391860, "raw_average_key_size": 26, "raw_value_size": 19068068, "raw_average_value_size": 1295, "num_data_blocks": 1812, "num_entries": 14715, "num_filter_entries": 14715, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.016679) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 19405700 bytes
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.018631) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.4 rd, 147.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 18.0 +0.0 blob) out(18.5 +0.0 blob), read-write-amplify(18.7) write-amplify(9.0) OK, records in: 15205, records dropped: 490 output_compression: NoCompression
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.018662) EVENT_LOG_v1 {"time_micros": 1764670612018649, "job": 28, "event": "compaction_finished", "compaction_time_micros": 131383, "compaction_time_cpu_micros": 56256, "output_level": 6, "num_output_files": 1, "total_output_size": 19405700, "num_input_records": 15205, "num_output_records": 14715, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612019099, "job": 28, "event": "table_file_deletion", "file_number": 50}
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612021901, "job": 28, "event": "table_file_deletion", "file_number": 48}
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:51.884701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.021995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.022002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.022004) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.022006) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.022009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.022487) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612022576, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 270, "num_deletes": 251, "total_data_size": 31616, "memory_usage": 36808, "flush_reason": "Manual Compaction"}
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612025529, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 22566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34017, "largest_seqno": 34282, "table_properties": {"data_size": 20734, "index_size": 72, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5042, "raw_average_key_size": 19, "raw_value_size": 17187, "raw_average_value_size": 65, "num_data_blocks": 3, "num_entries": 263, "num_filter_entries": 263, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670611, "oldest_key_time": 1764670611, "file_creation_time": 1764670612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 3090 microseconds, and 1139 cpu microseconds.
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.025583) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 22566 bytes OK
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.025619) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.027625) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.027653) EVENT_LOG_v1 {"time_micros": 1764670612027645, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.027682) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 29535, prev total WAL file size 46465, number of live WAL files 2.
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.029628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133303532' seq:72057594037927935, type:22 .. '7061786F73003133333034' seq:0, type:0; will stop at (end)
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(22KB)], [51(18MB)]
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612029669, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 19428266, "oldest_snapshot_seqno": -1}
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 14467 keys, 17868336 bytes, temperature: kUnknown
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612136530, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 17868336, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17786102, "index_size": 44949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36229, "raw_key_size": 387148, "raw_average_key_size": 26, "raw_value_size": 17540729, "raw_average_value_size": 1212, "num_data_blocks": 1677, "num_entries": 14467, "num_filter_entries": 14467, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670612, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.136801) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 17868336 bytes
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.139585) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.7 rd, 167.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 18.5 +0.0 blob) out(17.0 +0.0 blob), read-write-amplify(1652.8) write-amplify(791.8) OK, records in: 14978, records dropped: 511 output_compression: NoCompression
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.139611) EVENT_LOG_v1 {"time_micros": 1764670612139600, "job": 30, "event": "compaction_finished", "compaction_time_micros": 106914, "compaction_time_cpu_micros": 53842, "output_level": 6, "num_output_files": 1, "total_output_size": 17868336, "num_input_records": 14978, "num_output_records": 14467, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612139776, "job": 30, "event": "table_file_deletion", "file_number": 53}
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670612142260, "job": 30, "event": "table_file_deletion", "file_number": 51}
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.029551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.142285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.142290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.142292) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.142294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:16:52.142296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:16:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch
Dec 02 10:16:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, vol_name:cephfs) < ""
Dec 02 10:16:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, vol_name:cephfs) < ""
Dec 02 10:16:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < ""
Dec 02 10:16:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta.tmp'
Dec 02 10:16:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta.tmp' to config b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta'
Dec 02 10:16:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < ""
Dec 02 10:16:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "format": "json"}]: dispatch
Dec 02 10:16:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < ""
Dec 02 10:16:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < ""
Dec 02 10:16:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:16:53 np0005541914.localdomain ceph-mon[301710]: pgmap v692: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 87 KiB/s wr, 5 op/s
Dec 02 10:16:53 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch
Dec 02 10:16:53 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:16:53 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:16:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v693: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 894 B/s rd, 101 KiB/s wr, 6 op/s
Dec 02 10:16:54 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:54 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "format": "json"}]: dispatch
Dec 02 10:16:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:54.278 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch
Dec 02 10:16:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:16:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, vol_name:cephfs) < ""
Dec 02 10:16:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/c4aff2e0-53b6-4b58-8317-036e112a5bcd'' moved to trashcan
Dec 02 10:16:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:16:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:c4aff2e0-53b6-4b58-8317-036e112a5bcd, vol_name:cephfs) < ""
Dec 02 10:16:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "snap_name": "db7eb361-3904-47e6-9098-d364d08f2cbb", "format": "json"}]: dispatch
Dec 02 10:16:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:db7eb361-3904-47e6-9098-d364d08f2cbb, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < ""
Dec 02 10:16:54 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:db7eb361-3904-47e6-9098-d364d08f2cbb, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < ""
Dec 02 10:16:55 np0005541914.localdomain ceph-mon[301710]: pgmap v693: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 894 B/s rd, 101 KiB/s wr, 6 op/s
Dec 02 10:16:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v694: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 67 KiB/s wr, 3 op/s
Dec 02 10:16:56 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "format": "json"}]: dispatch
Dec 02 10:16:56 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c4aff2e0-53b6-4b58-8317-036e112a5bcd", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:56 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "snap_name": "db7eb361-3904-47e6-9098-d364d08f2cbb", "format": "json"}]: dispatch
Dec 02 10:16:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:56.364 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < ""
Dec 02 10:16:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp'
Dec 02 10:16:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta'
Dec 02 10:16:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < ""
Dec 02 10:16:56 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "format": "json"}]: dispatch
Dec 02 10:16:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < ""
Dec 02 10:16:56 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < ""
Dec 02 10:16:57 np0005541914.localdomain ceph-mon[301710]: pgmap v694: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 67 KiB/s wr, 3 op/s
Dec 02 10:16:57 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:16:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v695: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 67 KiB/s wr, 3 op/s
Dec 02 10:16:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "snap_name": "395e084c-5f31-4d0b-b40b-8a631da3af09_e203339e-4ade-455c-bcd8-fd80834b9e84", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09_e203339e-4ade-455c-bcd8-fd80834b9e84, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < ""
Dec 02 10:16:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp'
Dec 02 10:16:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta'
Dec 02 10:16:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09_e203339e-4ade-455c-bcd8-fd80834b9e84, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < ""
Dec 02 10:16:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "snap_name": "395e084c-5f31-4d0b-b40b-8a631da3af09", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < ""
Dec 02 10:16:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp'
Dec 02 10:16:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta.tmp' to config b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2/.meta'
Dec 02 10:16:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:395e084c-5f31-4d0b-b40b-8a631da3af09, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < ""
Dec 02 10:16:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:16:57 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:16:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:16:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:16:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "snap_name": "db7eb361-3904-47e6-9098-d364d08f2cbb_f3eeec94-da59-4afe-97b3-e80e98f55085", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:58 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:16:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:db7eb361-3904-47e6-9098-d364d08f2cbb_f3eeec94-da59-4afe-97b3-e80e98f55085, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < ""
Dec 02 10:16:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta.tmp'
Dec 02 10:16:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta.tmp' to config b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta'
Dec 02 10:16:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:db7eb361-3904-47e6-9098-d364d08f2cbb_f3eeec94-da59-4afe-97b3-e80e98f55085, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < ""
Dec 02 10:16:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "snap_name": "db7eb361-3904-47e6-9098-d364d08f2cbb", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:db7eb361-3904-47e6-9098-d364d08f2cbb, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < ""
Dec 02 10:16:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta.tmp'
Dec 02 10:16:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta.tmp' to config b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7/.meta'
Dec 02 10:16:58 np0005541914.localdomain podman[325710]: 2025-12-02 10:16:58.091624543 +0000 UTC m=+0.077991690 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, io.buildah.version=1.41.3)
Dec 02 10:16:58 np0005541914.localdomain podman[325710]: 2025-12-02 10:16:58.104786866 +0000 UTC m=+0.091154003 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 02 10:16:58 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:58 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "format": "json"}]: dispatch
Dec 02 10:16:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:db7eb361-3904-47e6-9098-d364d08f2cbb, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < ""
Dec 02 10:16:58 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:16:58 np0005541914.localdomain podman[325709]: 2025-12-02 10:16:58.146901296 +0000 UTC m=+0.134704508 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 10:16:58 np0005541914.localdomain podman[325709]: 2025-12-02 10:16:58.156363187 +0000 UTC m=+0.144166369 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:16:58 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:16:58 np0005541914.localdomain podman[325708]: 2025-12-02 10:16:58.107671674 +0000 UTC m=+0.098946602 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, managed_by=edpm_ansible)
Dec 02 10:16:58 np0005541914.localdomain podman[325708]: 2025-12-02 10:16:58.237217454 +0000 UTC m=+0.228492452 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true)
Dec 02 10:16:58 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:16:58 np0005541914.localdomain podman[325716]: 2025-12-02 10:16:58.160372959 +0000 UTC m=+0.138870535 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 10:16:58 np0005541914.localdomain podman[325716]: 2025-12-02 10:16:58.294932992 +0000 UTC m=+0.273430578 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 02 10:16:58 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:16:59 np0005541914.localdomain ceph-mon[301710]: pgmap v695: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 67 KiB/s wr, 3 op/s
Dec 02 10:16:59 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "snap_name": "395e084c-5f31-4d0b-b40b-8a631da3af09_e203339e-4ade-455c-bcd8-fd80834b9e84", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:59 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "snap_name": "395e084c-5f31-4d0b-b40b-8a631da3af09", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:59 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "snap_name": "db7eb361-3904-47e6-9098-d364d08f2cbb_f3eeec94-da59-4afe-97b3-e80e98f55085", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:59 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "snap_name": "db7eb361-3904-47e6-9098-d364d08f2cbb", "force": true, "format": "json"}]: dispatch
Dec 02 10:16:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:16:59.281 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:16:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v696: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 102 KiB/s wr, 6 op/s
Dec 02 10:16:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "snap_name": "7af3a8b2-5504-4261-9144-956137288f3e", "format": "json"}]: dispatch
Dec 02 10:16:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7af3a8b2-5504-4261-9144-956137288f3e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < ""
Dec 02 10:16:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:7af3a8b2-5504-4261-9144-956137288f3e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < ""
Dec 02 10:16:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5b0a592a-aac6-453e-a44c-9563c7dadce2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:16:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, vol_name:cephfs) < ""
Dec 02 10:16:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/5b0a592a-aac6-453e-a44c-9563c7dadce2/.meta.tmp'
Dec 02 10:16:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/5b0a592a-aac6-453e-a44c-9563c7dadce2/.meta.tmp' to config b'/volumes/_nogroup/5b0a592a-aac6-453e-a44c-9563c7dadce2/.meta'
Dec 02 10:16:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, vol_name:cephfs) < ""
Dec 02 10:16:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5b0a592a-aac6-453e-a44c-9563c7dadce2", "format": "json"}]: dispatch
Dec 02 10:16:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, vol_name:cephfs) < ""
Dec 02 10:16:59 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, vol_name:cephfs) < ""
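The audit trail above shows the mgr volumes module handling an "fs subvolume create" immediately followed by an "fs subvolume getpath" for the same subvolume, which is the usual provisioning sequence when the Manila CephFS driver creates a share. A minimal sketch of the equivalent calls, assuming the ceph CLI and the client.openstack identity used elsewhere in this log; the subvolume name is illustrative, not taken from these entries:

    import subprocess

    def ceph(*args):
        # Invoke the ceph CLI the same way nova_compute does later in this log
        # (--id openstack --conf /etc/ceph/ceph.conf) and return stdout.
        cmd = ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf", *args]
        return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

    # 1 GiB, namespace-isolated subvolume, mode 0755 -- the same arguments as the
    # dispatched "fs subvolume create" command above.
    ceph("fs", "subvolume", "create", "cephfs", "example-share",
         "--size", "1073741824", "--namespace-isolated", "--mode", "0755")

    # Ask the mgr for the path clients should mount (the "getpath" call above).
    print(ceph("fs", "subvolume", "getpath", "cephfs", "example-share").strip())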
Dec 02 10:17:00 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:17:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "format": "json"}]: dispatch
Dec 02 10:17:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:00 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:00.557+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd6f9e4a2-dde5-48d3-9ade-72d59a880bf2' of type subvolume
Dec 02 10:17:00 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'd6f9e4a2-dde5-48d3-9ade-72d59a880bf2' of type subvolume
Dec 02 10:17:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < ""
Dec 02 10:17:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/d6f9e4a2-dde5-48d3-9ade-72d59a880bf2'' moved to trashcan
Dec 02 10:17:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:17:00 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:d6f9e4a2-dde5-48d3-9ade-72d59a880bf2, vol_name:cephfs) < ""
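The two "Operation not supported" replies above are expected: "fs clone status" is only defined for subvolumes that were created by "fs subvolume snapshot clone", so probing an ordinary subvolume returns EOPNOTSUPP (95), after which the caller falls back to a plain forced removal. A rough sketch of that probe-then-delete pattern, with an illustrative subvolume name, the same client identity as above, and the {"status": {"state": ...}} JSON shape assumed from the volumes module output:

    import json
    import subprocess

    def clone_state(volume, subvolume):
        # "fs clone status" answers EOPNOTSUPP (95) for subvolumes that are not
        # clones, exactly as ceph-mgr logs above; treat that as "nothing pending".
        res = subprocess.run(
            ["ceph", "--id", "openstack", "fs", "clone", "status", volume, subvolume],
            capture_output=True, text=True)
        if res.returncode != 0:
            return None
        return json.loads(res.stdout)["status"]["state"]

    if clone_state("cephfs", "example-share") in (None, "complete"):
        subprocess.run(["ceph", "--id", "openstack", "fs", "subvolume", "rm",
                        "cephfs", "example-share", "--force"], check=True)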
Dec 02 10:17:01 np0005541914.localdomain ceph-mon[301710]: pgmap v696: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 102 KiB/s wr, 6 op/s
Dec 02 10:17:01 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "snap_name": "7af3a8b2-5504-4261-9144-956137288f3e", "format": "json"}]: dispatch
Dec 02 10:17:01 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5b0a592a-aac6-453e-a44c-9563c7dadce2", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:01 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5b0a592a-aac6-453e-a44c-9563c7dadce2", "format": "json"}]: dispatch
Dec 02 10:17:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2188bdca-035f-45be-90c0-127aae7698b7", "format": "json"}]: dispatch
Dec 02 10:17:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2188bdca-035f-45be-90c0-127aae7698b7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2188bdca-035f-45be-90c0-127aae7698b7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:01 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:01.201+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2188bdca-035f-45be-90c0-127aae7698b7' of type subvolume
Dec 02 10:17:01 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2188bdca-035f-45be-90c0-127aae7698b7' of type subvolume
Dec 02 10:17:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < ""
Dec 02 10:17:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2188bdca-035f-45be-90c0-127aae7698b7'' moved to trashcan
Dec 02 10:17:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:17:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2188bdca-035f-45be-90c0-127aae7698b7, vol_name:cephfs) < ""
Dec 02 10:17:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v697: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 59 KiB/s wr, 4 op/s
Dec 02 10:17:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:01.408 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "format": "json"}]: dispatch
Dec 02 10:17:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d6f9e4a2-dde5-48d3-9ade-72d59a880bf2", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2188bdca-035f-45be-90c0-127aae7698b7", "format": "json"}]: dispatch
Dec 02 10:17:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:17:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e279 e279: 6 total, 6 up, 6 in
Dec 02 10:17:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:17:03.186 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:17:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:17:03.186 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:17:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:17:03.186 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:17:03 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2188bdca-035f-45be-90c0-127aae7698b7", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:03 np0005541914.localdomain ceph-mon[301710]: pgmap v697: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 59 KiB/s wr, 4 op/s
Dec 02 10:17:03 np0005541914.localdomain ceph-mon[301710]: osdmap e279: 6 total, 6 up, 6 in
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8db7e7dd-0638-45f8-969b-0cba743185e7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8db7e7dd-0638-45f8-969b-0cba743185e7, vol_name:cephfs) < ""
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v699: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 119 KiB/s wr, 7 op/s
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/8db7e7dd-0638-45f8-969b-0cba743185e7/.meta.tmp'
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/8db7e7dd-0638-45f8-969b-0cba743185e7/.meta.tmp' to config b'/volumes/_nogroup/8db7e7dd-0638-45f8-969b-0cba743185e7/.meta'
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:8db7e7dd-0638-45f8-969b-0cba743185e7, vol_name:cephfs) < ""
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8db7e7dd-0638-45f8-969b-0cba743185e7", "format": "json"}]: dispatch
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8db7e7dd-0638-45f8-969b-0cba743185e7, vol_name:cephfs) < ""
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:8db7e7dd-0638-45f8-969b-0cba743185e7, vol_name:cephfs) < ""
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "snap_name": "7af3a8b2-5504-4261-9144-956137288f3e", "target_sub_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:7af3a8b2-5504-4261-9144-956137288f3e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, target_sub_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, vol_name:cephfs) < ""
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 273 bytes to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp'
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp' to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta'
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.clone_index] tracking-id a6ca65fd-98b6-4668-9e39-7653f2c50c81 for path b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70'
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 246 bytes to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp'
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta'
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_clone(format:json, prefix:fs subvolume snapshot clone, snap_name:7af3a8b2-5504-4261-9144-956137288f3e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, target_sub_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, vol_name:cephfs) < ""
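The "fs subvolume snapshot clone" dispatch through "starting clone" above show the asynchronous clone path: the mgr records a tracking-id, returns immediately, and the async_cloner copies the snapshot contents in the background, so callers poll "fs clone status" until it reports completion. A compact sketch of that workflow, with illustrative subvolume and snapshot names and the same assumed clone-status JSON shape as above:

    import json
    import subprocess
    import time

    def ceph_json(*args):
        out = subprocess.run(
            ["ceph", "--id", "openstack", "--format", "json", *args],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out) if out.strip() else None

    # Snapshot the source share, start the clone (returns immediately; the copy
    # is done by the async_cloner seen above), then poll until it finishes.
    ceph_json("fs", "subvolume", "snapshot", "create", "cephfs", "src-share", "snap-1")
    ceph_json("fs", "subvolume", "snapshot", "clone", "cephfs", "src-share", "snap-1",
              "cloned-share")
    while ceph_json("fs", "clone", "status", "cephfs", "cloned-share")["status"]["state"] \
            not in ("complete", "failed"):
        time.sleep(2)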
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] cloning to subvolume path: /volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] starting clone: (cephfs, None, a33ca4d0-df57-473d-9fc9-9e83431eec70)
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch
Dec 02 10:17:03 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:17:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:17:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:17:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:17:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:17:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19256 "" "Go-http-client/1.1"
Dec 02 10:17:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:17:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:17:04 np0005541914.localdomain systemd[1]: tmp-crun.nAhIR5.mount: Deactivated successfully.
Dec 02 10:17:04 np0005541914.localdomain podman[325790]: 2025-12-02 10:17:04.093172915 +0000 UTC m=+0.095522128 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:17:04 np0005541914.localdomain podman[325790]: 2025-12-02 10:17:04.10183988 +0000 UTC m=+0.104189093 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:17:04 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:17:04 np0005541914.localdomain systemd[1]: tmp-crun.UnlhhQ.mount: Deactivated successfully.
Dec 02 10:17:04 np0005541914.localdomain podman[325791]: 2025-12-02 10:17:04.185842434 +0000 UTC m=+0.187822976 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=ubi9-minimal, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 10:17:04 np0005541914.localdomain podman[325791]: 2025-12-02 10:17:04.201914917 +0000 UTC m=+0.203895469 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, distribution-scope=public, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, version=9.6)
Dec 02 10:17:04 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:17:04 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:17:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:04.285 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8db7e7dd-0638-45f8-969b-0cba743185e7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:05 np0005541914.localdomain ceph-mon[301710]: pgmap v699: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 119 KiB/s wr, 7 op/s
Dec 02 10:17:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8db7e7dd-0638-45f8-969b-0cba743185e7", "format": "json"}]: dispatch
Dec 02 10:17:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "snap_name": "7af3a8b2-5504-4261-9144-956137288f3e", "target_sub_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch
Dec 02 10:17:05 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch
Dec 02 10:17:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2864597683' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:17:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2864597683' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:17:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v700: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 119 KiB/s wr, 7 op/s
Dec 02 10:17:06 np0005541914.localdomain ceph-mon[301710]: pgmap v700: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 119 KiB/s wr, 7 op/s
Dec 02 10:17:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:06.457 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:17:06
Dec 02 10:17:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:17:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:17:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['volumes', 'manila_data', 'backups', 'vms', '.mgr', 'images', 'manila_metadata']
Dec 02 10:17:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:17:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:17:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] Delayed cloning (cephfs, None, a33ca4d0-df57-473d-9fc9-9e83431eec70) -- by 0 seconds
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 277 bytes to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp'
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp' to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta'
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < ""
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v701: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 119 KiB/s wr, 7 op/s
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.0905220547180346e-06 of space, bias 1.0, pg target 0.00021701388888888888 quantized to 32 (current 32)
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0020270078692071467 of space, bias 4.0, pg target 1.6134982638888886 quantized to 16 (current 16)
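The pg_autoscaler entries above derive each pool's PG target from its share of raw capacity times its bias times the cluster's PG budget, then quantize to a power of two and leave pg_num alone unless the ideal value is off by roughly a factor of three. With 6 OSDs, 3x replication and the default 100 PGs per OSD the budget is 200 PGs, which reproduces the bias-1.0 figures for '.mgr' and 'vms' above (the other pools differ slightly because the autoscaler's capacity ratio is not exactly the "using X of space" number). A rough arithmetic check under those assumed defaults, not the autoscaler's actual code:

    # Assumed defaults: mon_target_pg_per_osd=100, replicated size 3, and the
    # 6 OSDs reported by "osdmap e279: 6 total, 6 up, 6 in" above.
    osds, replicas, target_pg_per_osd = 6, 3, 100
    pg_budget = osds * target_pg_per_osd / replicas   # 200 PGs to spread across pools

    for pool, usage, bias in [(".mgr", 3.080724804578448e-05, 1.0),
                              ("vms", 0.0033244564838079286, 1.0)]:
        ideal = usage * bias * pg_budget
        print(f"{pool}: ideal pg target {ideal:.6f}")  # matches the logged targets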
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:17:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:17:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:07.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:17:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:17:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3859412889' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:17:08 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:08 np0005541914.localdomain ceph-mon[301710]: pgmap v701: 177 pgs: 177 active+clean; 221 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 119 KiB/s wr, 7 op/s
Dec 02 10:17:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/477887898' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:17:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:09.287 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v702: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 120 KiB/s wr, 7 op/s
Dec 02 10:17:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:09.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:17:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:09.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:17:09 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:17:10 np0005541914.localdomain podman[325833]: 2025-12-02 10:17:10.064712786 +0000 UTC m=+0.072817542 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 02 10:17:10 np0005541914.localdomain podman[325833]: 2025-12-02 10:17:10.100878314 +0000 UTC m=+0.108983070 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:17:10 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:17:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:10.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:17:10 np0005541914.localdomain ceph-mon[301710]: pgmap v702: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 120 KiB/s wr, 7 op/s
Dec 02 10:17:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 10:17:11 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 9000.1 total, 600.0 interval
                                                          Cumulative writes: 20K writes, 81K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s
                                                          Cumulative WAL: 20K writes, 7249 syncs, 2.85 writes per sync, written: 0.07 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 44.32 MB, 0.07 MB/s
                                                          Interval WAL: 12K writes, 5057 syncs, 2.42 writes per sync, written: 0.04 GB, 0.07 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 10:17:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v703: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 120 KiB/s wr, 7 op/s
Dec 02 10:17:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:11.474 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:11.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:17:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:11.548 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:17:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:11.548 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:17:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:11.549 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:17:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:11.549 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:17:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:11.549 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:17:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e280 e280: 6 total, 6 up, 6 in
Dec 02 10:17:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:17:12 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3832367767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:17:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:12.038 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
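The "ceph df --format=json" subprocess above is how nova_compute's resource tracker samples the RBD-backed storage before reporting free_disk in the hypervisor resource view a few lines below. A minimal reproduction of that call; the 'stats' key names are the usual ceph df JSON fields and are assumed here rather than taken from this log:

    import json
    import subprocess

    # Same command line that nova_compute logs above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout

    stats = json.loads(out)["stats"]
    print(stats["total_avail_bytes"] / 2**30, "GiB available cluster-wide")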
Dec 02 10:17:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:17:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:17:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:17:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:17:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:17:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:17:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:17:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:17:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:17:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] copying data from b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.snap/7af3a8b2-5504-4261-9144-956137288f3e/1bbce990-c6c2-412f-a746-3f417e5bfa8d' to b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/c7f20348-1e91-4645-90a4-2dc42aa24452'
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta.tmp'
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta.tmp' to config b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta'
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < ""
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "format": "json"}]: dispatch
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < ""
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 274 bytes to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp'
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp' to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta'
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < ""
Dec 02 10:17:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:12.229 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.clone_index] untracking a6ca65fd-98b6-4668-9e39-7653f2c50c81
Dec 02 10:17:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:12.230 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11386MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:17:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:12.230 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:17:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:12.230 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp'
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta'
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 151 bytes to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp'
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta.tmp' to config b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70/.meta'
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_cloner] finished clone: (cephfs, None, a33ca4d0-df57-473d-9fc9-9e83431eec70)
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8db7e7dd-0638-45f8-969b-0cba743185e7", "format": "json"}]: dispatch
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:8db7e7dd-0638-45f8-969b-0cba743185e7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:8db7e7dd-0638-45f8-969b-0cba743185e7, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:12 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:12.303+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8db7e7dd-0638-45f8-969b-0cba743185e7' of type subvolume
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '8db7e7dd-0638-45f8-969b-0cba743185e7' of type subvolume
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8db7e7dd-0638-45f8-969b-0cba743185e7", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8db7e7dd-0638-45f8-969b-0cba743185e7, vol_name:cephfs) < ""
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/8db7e7dd-0638-45f8-969b-0cba743185e7'' moved to trashcan
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:17:12 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:8db7e7dd-0638-45f8-969b-0cba743185e7, vol_name:cephfs) < ""
Dec 02 10:17:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:12.457 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:17:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:12.457 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:17:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:12.477 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:17:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:17:12 np0005541914.localdomain ceph-mon[301710]: pgmap v703: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 120 KiB/s wr, 7 op/s
Dec 02 10:17:12 np0005541914.localdomain ceph-mon[301710]: osdmap e280: 6 total, 6 up, 6 in
Dec 02 10:17:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3832367767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:17:12 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "format": "json"}]: dispatch
Dec 02 10:17:12 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:17:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:17:12 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4112839142' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:17:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:12.942 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:17:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:12.947 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:17:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:13.063 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:17:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:13.066 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:17:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:13.066 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:17:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v705: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 107 KiB/s wr, 6 op/s
Dec 02 10:17:13 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8db7e7dd-0638-45f8-969b-0cba743185e7", "format": "json"}]: dispatch
Dec 02 10:17:13 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8db7e7dd-0638-45f8-969b-0cba743185e7", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:13 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/4112839142' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:17:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:14.067 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:17:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:14.067 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:17:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:14.068 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:17:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:14.093 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:17:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:14.093 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:17:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "snap_name": "887f67e9-2bf7-45b5-84dd-6cbee4d7656b", "format": "json"}]: dispatch
Dec 02 10:17:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:887f67e9-2bf7-45b5-84dd-6cbee4d7656b, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < ""
Dec 02 10:17:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:887f67e9-2bf7-45b5-84dd-6cbee4d7656b, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < ""
Dec 02 10:17:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:14.292 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee9ab43f-a3e3-4447-9084-6b663c27a445", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, vol_name:cephfs) < ""
Dec 02 10:17:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ee9ab43f-a3e3-4447-9084-6b663c27a445/.meta.tmp'
Dec 02 10:17:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ee9ab43f-a3e3-4447-9084-6b663c27a445/.meta.tmp' to config b'/volumes/_nogroup/ee9ab43f-a3e3-4447-9084-6b663c27a445/.meta'
Dec 02 10:17:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, vol_name:cephfs) < ""
Dec 02 10:17:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee9ab43f-a3e3-4447-9084-6b663c27a445", "format": "json"}]: dispatch
Dec 02 10:17:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, vol_name:cephfs) < ""
Dec 02 10:17:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, vol_name:cephfs) < ""
Dec 02 10:17:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:14.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:17:14 np0005541914.localdomain ceph-mon[301710]: pgmap v705: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 107 KiB/s wr, 6 op/s
Dec 02 10:17:14 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:17:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:14 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < ""
Dec 02 10:17:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta.tmp'
Dec 02 10:17:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta.tmp' to config b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta'
Dec 02 10:17:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < ""
Dec 02 10:17:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "format": "json"}]: dispatch
Dec 02 10:17:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < ""
Dec 02 10:17:15 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < ""
Dec 02 10:17:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v706: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 107 KiB/s wr, 6 op/s
Dec 02 10:17:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:15.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:17:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:15.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:17:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 10:17:15 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 9000.2 total, 600.0 interval
                                                          Cumulative writes: 24K writes, 90K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
                                                          Cumulative WAL: 24K writes, 8465 syncs, 2.85 writes per sync, written: 0.06 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 14K writes, 50K keys, 14K commit groups, 1.0 writes per commit group, ingest: 27.33 MB, 0.05 MB/s
                                                          Interval WAL: 14K writes, 6036 syncs, 2.36 writes per sync, written: 0.03 GB, 0.05 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 10:17:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "snap_name": "887f67e9-2bf7-45b5-84dd-6cbee4d7656b", "format": "json"}]: dispatch
Dec 02 10:17:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ee9ab43f-a3e3-4447-9084-6b663c27a445", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ee9ab43f-a3e3-4447-9084-6b663c27a445", "format": "json"}]: dispatch
Dec 02 10:17:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "format": "json"}]: dispatch
Dec 02 10:17:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:17:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:16.519 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:16 np0005541914.localdomain ceph-mon[301710]: pgmap v706: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 107 KiB/s wr, 6 op/s
Dec 02 10:17:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v707: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 107 KiB/s wr, 6 op/s
Dec 02 10:17:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:17:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "snap_name": "887f67e9-2bf7-45b5-84dd-6cbee4d7656b_adf6c68d-e84b-4410-ac7f-adf3a353b05d", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:887f67e9-2bf7-45b5-84dd-6cbee4d7656b_adf6c68d-e84b-4410-ac7f-adf3a353b05d, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < ""
Dec 02 10:17:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta.tmp'
Dec 02 10:17:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta.tmp' to config b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta'
Dec 02 10:17:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:887f67e9-2bf7-45b5-84dd-6cbee4d7656b_adf6c68d-e84b-4410-ac7f-adf3a353b05d, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < ""
Dec 02 10:17:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "snap_name": "887f67e9-2bf7-45b5-84dd-6cbee4d7656b", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:887f67e9-2bf7-45b5-84dd-6cbee4d7656b, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < ""
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta.tmp'
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta.tmp' to config b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582/.meta'
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:887f67e9-2bf7-45b5-84dd-6cbee4d7656b, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < ""
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee9ab43f-a3e3-4447-9084-6b663c27a445", "format": "json"}]: dispatch
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:18 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:18.263+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee9ab43f-a3e3-4447-9084-6b663c27a445' of type subvolume
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ee9ab43f-a3e3-4447-9084-6b663c27a445' of type subvolume
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee9ab43f-a3e3-4447-9084-6b663c27a445", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, vol_name:cephfs) < ""
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ee9ab43f-a3e3-4447-9084-6b663c27a445'' moved to trashcan
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ee9ab43f-a3e3-4447-9084-6b663c27a445, vol_name:cephfs) < ""
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "snap_name": "04e237e6-bdd1-4932-bf28-2abac8ca1d29", "format": "json"}]: dispatch
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:04e237e6-bdd1-4932-bf28-2abac8ca1d29, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < ""
Dec 02 10:17:18 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_create(format:json, prefix:fs subvolume snapshot create, snap_name:04e237e6-bdd1-4932-bf28-2abac8ca1d29, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < ""
Dec 02 10:17:18 np0005541914.localdomain ceph-mon[301710]: pgmap v707: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 107 KiB/s wr, 6 op/s
Dec 02 10:17:18 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "snap_name": "887f67e9-2bf7-45b5-84dd-6cbee4d7656b_adf6c68d-e84b-4410-ac7f-adf3a353b05d", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:18 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "snap_name": "887f67e9-2bf7-45b5-84dd-6cbee4d7656b", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:19.292 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v708: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 103 KiB/s wr, 6 op/s
Dec 02 10:17:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ee9ab43f-a3e3-4447-9084-6b663c27a445", "format": "json"}]: dispatch
Dec 02 10:17:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ee9ab43f-a3e3-4447-9084-6b663c27a445", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:19 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "snap_name": "04e237e6-bdd1-4932-bf28-2abac8ca1d29", "format": "json"}]: dispatch
Dec 02 10:17:21 np0005541914.localdomain ceph-mon[301710]: pgmap v708: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 103 KiB/s wr, 6 op/s
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "format": "json"}]: dispatch
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:21 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:21.127+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '421f49de-9caa-4e96-8ed7-c70fac5c9582' of type subvolume
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '421f49de-9caa-4e96-8ed7-c70fac5c9582' of type subvolume
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < ""
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/421f49de-9caa-4e96-8ed7-c70fac5c9582'' moved to trashcan
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:421f49de-9caa-4e96-8ed7-c70fac5c9582, vol_name:cephfs) < ""
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v709: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 103 KiB/s wr, 6 op/s
Dec 02 10:17:21 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:21.556 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b8feab34-30e1-4504-93e1-fee137b334fd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b8feab34-30e1-4504-93e1-fee137b334fd, vol_name:cephfs) < ""
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b8feab34-30e1-4504-93e1-fee137b334fd/.meta.tmp'
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b8feab34-30e1-4504-93e1-fee137b334fd/.meta.tmp' to config b'/volumes/_nogroup/b8feab34-30e1-4504-93e1-fee137b334fd/.meta'
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b8feab34-30e1-4504-93e1-fee137b334fd, vol_name:cephfs) < ""
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b8feab34-30e1-4504-93e1-fee137b334fd", "format": "json"}]: dispatch
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b8feab34-30e1-4504-93e1-fee137b334fd, vol_name:cephfs) < ""
Dec 02 10:17:21 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b8feab34-30e1-4504-93e1-fee137b334fd, vol_name:cephfs) < ""
Dec 02 10:17:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "format": "json"}]: dispatch
Dec 02 10:17:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "421f49de-9caa-4e96-8ed7-c70fac5c9582", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:22 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2488539184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:17:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:17:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:17:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "snap_name": "04e237e6-bdd1-4932-bf28-2abac8ca1d29_15204c8c-795d-4343-829f-29f32d779260", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04e237e6-bdd1-4932-bf28-2abac8ca1d29_15204c8c-795d-4343-829f-29f32d779260, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < ""
Dec 02 10:17:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta.tmp'
Dec 02 10:17:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta.tmp' to config b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta'
Dec 02 10:17:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04e237e6-bdd1-4932-bf28-2abac8ca1d29_15204c8c-795d-4343-829f-29f32d779260, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < ""
Dec 02 10:17:22 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "snap_name": "04e237e6-bdd1-4932-bf28-2abac8ca1d29", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04e237e6-bdd1-4932-bf28-2abac8ca1d29, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < ""
Dec 02 10:17:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta.tmp'
Dec 02 10:17:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta.tmp' to config b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b/.meta'
Dec 02 10:17:22 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:04e237e6-bdd1-4932-bf28-2abac8ca1d29, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < ""
Dec 02 10:17:23 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e281 e281: 6 total, 6 up, 6 in
Dec 02 10:17:23 np0005541914.localdomain ceph-mon[301710]: pgmap v709: 177 pgs: 177 active+clean; 222 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 103 KiB/s wr, 6 op/s
Dec 02 10:17:23 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b8feab34-30e1-4504-93e1-fee137b334fd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:23 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b8feab34-30e1-4504-93e1-fee137b334fd", "format": "json"}]: dispatch
Dec 02 10:17:23 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/852769459' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:17:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v711: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 123 KiB/s wr, 7 op/s
Dec 02 10:17:24 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "snap_name": "04e237e6-bdd1-4932-bf28-2abac8ca1d29_15204c8c-795d-4343-829f-29f32d779260", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:24 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "snap_name": "04e237e6-bdd1-4932-bf28-2abac8ca1d29", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:24 np0005541914.localdomain ceph-mon[301710]: osdmap e281: 6 total, 6 up, 6 in
Dec 02 10:17:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:24.294 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:25 np0005541914.localdomain ceph-mon[301710]: pgmap v711: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 123 KiB/s wr, 7 op/s
Dec 02 10:17:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b8feab34-30e1-4504-93e1-fee137b334fd", "format": "json"}]: dispatch
Dec 02 10:17:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b8feab34-30e1-4504-93e1-fee137b334fd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b8feab34-30e1-4504-93e1-fee137b334fd, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:25 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:25.242+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b8feab34-30e1-4504-93e1-fee137b334fd' of type subvolume
Dec 02 10:17:25 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b8feab34-30e1-4504-93e1-fee137b334fd' of type subvolume
Dec 02 10:17:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b8feab34-30e1-4504-93e1-fee137b334fd", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b8feab34-30e1-4504-93e1-fee137b334fd, vol_name:cephfs) < ""
Dec 02 10:17:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b8feab34-30e1-4504-93e1-fee137b334fd'' moved to trashcan
Dec 02 10:17:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:17:25 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b8feab34-30e1-4504-93e1-fee137b334fd, vol_name:cephfs) < ""
Dec 02 10:17:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v712: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 123 KiB/s wr, 7 op/s
Dec 02 10:17:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "format": "json"}]: dispatch
Dec 02 10:17:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:26 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:26.143+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bf788ca1-8e50-4d58-9737-f8c6482ff48b' of type subvolume
Dec 02 10:17:26 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'bf788ca1-8e50-4d58-9737-f8c6482ff48b' of type subvolume
Dec 02 10:17:26 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < ""
Dec 02 10:17:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/bf788ca1-8e50-4d58-9737-f8c6482ff48b'' moved to trashcan
Dec 02 10:17:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:17:26 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:bf788ca1-8e50-4d58-9737-f8c6482ff48b, vol_name:cephfs) < ""
Dec 02 10:17:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:26.608 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:27 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b8feab34-30e1-4504-93e1-fee137b334fd", "format": "json"}]: dispatch
Dec 02 10:17:27 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b8feab34-30e1-4504-93e1-fee137b334fd", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:27 np0005541914.localdomain ceph-mon[301710]: pgmap v712: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 123 KiB/s wr, 7 op/s
Dec 02 10:17:27 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "format": "json"}]: dispatch
Dec 02 10:17:27 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bf788ca1-8e50-4d58-9737-f8c6482ff48b", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v713: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 123 KiB/s wr, 7 op/s
Dec 02 10:17:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:17:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e282 e282: 6 total, 6 up, 6 in
Dec 02 10:17:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "37ec89f0-b485-493a-a6e2-4d54629ab0d1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, vol_name:cephfs) < ""
Dec 02 10:17:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/37ec89f0-b485-493a-a6e2-4d54629ab0d1/.meta.tmp'
Dec 02 10:17:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/37ec89f0-b485-493a-a6e2-4d54629ab0d1/.meta.tmp' to config b'/volumes/_nogroup/37ec89f0-b485-493a-a6e2-4d54629ab0d1/.meta'
Dec 02 10:17:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, vol_name:cephfs) < ""
Dec 02 10:17:28 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "37ec89f0-b485-493a-a6e2-4d54629ab0d1", "format": "json"}]: dispatch
Dec 02 10:17:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, vol_name:cephfs) < ""
Dec 02 10:17:28 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, vol_name:cephfs) < ""
Dec 02 10:17:28 np0005541914.localdomain ceph-mon[301710]: pgmap v713: 177 pgs: 177 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 123 KiB/s wr, 7 op/s
Dec 02 10:17:28 np0005541914.localdomain ceph-mon[301710]: osdmap e282: 6 total, 6 up, 6 in
Dec 02 10:17:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:17:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:17:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:17:28 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:17:29 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:17:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v715: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 150 KiB/s wr, 9 op/s
Dec 02 10:17:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:29.395 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:29 np0005541914.localdomain podman[325896]: 2025-12-02 10:17:29.448864835 +0000 UTC m=+0.445367746 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:17:29 np0005541914.localdomain podman[325896]: 2025-12-02 10:17:29.454491838 +0000 UTC m=+0.450994809 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:17:29 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:17:29 np0005541914.localdomain podman[325895]: 2025-12-02 10:17:29.417538076 +0000 UTC m=+0.417507433 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 02 10:17:29 np0005541914.localdomain podman[325897]: 2025-12-02 10:17:29.431412331 +0000 UTC m=+0.429253433 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:17:29 np0005541914.localdomain podman[325902]: 2025-12-02 10:17:29.489950824 +0000 UTC m=+0.479723370 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 10:17:29 np0005541914.localdomain podman[325897]: 2025-12-02 10:17:29.511544155 +0000 UTC m=+0.509385247 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute)
Dec 02 10:17:29 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:17:29 np0005541914.localdomain podman[325902]: 2025-12-02 10:17:29.524576415 +0000 UTC m=+0.514348961 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_id=ovn_controller)
Dec 02 10:17:29 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:17:29 np0005541914.localdomain podman[325895]: 2025-12-02 10:17:29.55021654 +0000 UTC m=+0.550185897 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:17:29 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:17:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "37ec89f0-b485-493a-a6e2-4d54629ab0d1", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:29 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "37ec89f0-b485-493a-a6e2-4d54629ab0d1", "format": "json"}]: dispatch
Dec 02 10:17:30 np0005541914.localdomain ceph-mon[301710]: pgmap v715: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 150 KiB/s wr, 9 op/s
Dec 02 10:17:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v716: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 45 KiB/s wr, 4 op/s
Dec 02 10:17:31 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:31.647 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:31 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e283 e283: 6 total, 6 up, 6 in
Dec 02 10:17:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:17:32 np0005541914.localdomain ceph-mon[301710]: pgmap v716: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 223 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 45 KiB/s wr, 4 op/s
Dec 02 10:17:32 np0005541914.localdomain ceph-mon[301710]: osdmap e283: 6 total, 6 up, 6 in
Dec 02 10:17:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "37ec89f0-b485-493a-a6e2-4d54629ab0d1", "format": "json"}]: dispatch
Dec 02 10:17:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:33 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '37ec89f0-b485-493a-a6e2-4d54629ab0d1' of type subvolume
Dec 02 10:17:33 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:33.096+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '37ec89f0-b485-493a-a6e2-4d54629ab0d1' of type subvolume
Dec 02 10:17:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "37ec89f0-b485-493a-a6e2-4d54629ab0d1", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, vol_name:cephfs) < ""
Dec 02 10:17:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/37ec89f0-b485-493a-a6e2-4d54629ab0d1'' moved to trashcan
Dec 02 10:17:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:17:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:37ec89f0-b485-493a-a6e2-4d54629ab0d1, vol_name:cephfs) < ""
Dec 02 10:17:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v718: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 96 KiB/s wr, 6 op/s
Dec 02 10:17:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:17:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:17:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:17:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:17:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:17:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19265 "" "Go-http-client/1.1"
Dec 02 10:17:33 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "37ec89f0-b485-493a-a6e2-4d54629ab0d1", "format": "json"}]: dispatch
Dec 02 10:17:33 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "37ec89f0-b485-493a-a6e2-4d54629ab0d1", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:34.398 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:34 np0005541914.localdomain ceph-mon[301710]: pgmap v718: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 96 KiB/s wr, 6 op/s
Dec 02 10:17:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:17:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:17:35 np0005541914.localdomain podman[325976]: 2025-12-02 10:17:35.089892601 +0000 UTC m=+0.089025869 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6)
Dec 02 10:17:35 np0005541914.localdomain podman[325976]: 2025-12-02 10:17:35.104533069 +0000 UTC m=+0.103666377 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, architecture=x86_64)
Dec 02 10:17:35 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:17:35 np0005541914.localdomain podman[325975]: 2025-12-02 10:17:35.203163141 +0000 UTC m=+0.204564839 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec 02 10:17:35 np0005541914.localdomain podman[325975]: 2025-12-02 10:17:35.217027966 +0000 UTC m=+0.218429644 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:17:35 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:17:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v719: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 96 KiB/s wr, 6 op/s
Dec 02 10:17:36 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:36.676 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:36 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 e284: 6 total, 6 up, 6 in
Dec 02 10:17:36 np0005541914.localdomain ceph-mon[301710]: pgmap v719: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 96 KiB/s wr, 6 op/s
Dec 02 10:17:36 np0005541914.localdomain ceph-mon[301710]: osdmap e284: 6 total, 6 up, 6 in
Dec 02 10:17:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:17:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:17:37 np0005541914.localdomain sudo[326016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:17:37 np0005541914.localdomain sudo[326016]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:17:37 np0005541914.localdomain sudo[326016]: pam_unix(sudo:session): session closed for user root
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:17:37 np0005541914.localdomain sudo[326034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:17:37 np0005541914.localdomain sudo[326034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v721: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 51 KiB/s wr, 2 op/s
Dec 02 10:17:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2826f8b0-e859-462c-8596-fb04c439e342", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2826f8b0-e859-462c-8596-fb04c439e342, vol_name:cephfs) < ""
Dec 02 10:17:37 np0005541914.localdomain sudo[326034]: pam_unix(sudo:session): session closed for user root
Dec 02 10:17:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:17:37 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/2826f8b0-e859-462c-8596-fb04c439e342/.meta.tmp'
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/2826f8b0-e859-462c-8596-fb04c439e342/.meta.tmp' to config b'/volumes/_nogroup/2826f8b0-e859-462c-8596-fb04c439e342/.meta'
Dec 02 10:17:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:17:37 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:2826f8b0-e859-462c-8596-fb04c439e342, vol_name:cephfs) < ""
Dec 02 10:17:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2826f8b0-e859-462c-8596-fb04c439e342", "format": "json"}]: dispatch
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2826f8b0-e859-462c-8596-fb04c439e342, vol_name:cephfs) < ""
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:2826f8b0-e859-462c-8596-fb04c439e342, vol_name:cephfs) < ""
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev f7f4e511-9bf3-4844-b8a5-5cb908bc3d6d (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev f7f4e511-9bf3-4844-b8a5-5cb908bc3d6d (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:17:37 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event f7f4e511-9bf3-4844-b8a5-5cb908bc3d6d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:17:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:17:37 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:17:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:17:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:17:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:17:37 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:17:37 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:17:38 np0005541914.localdomain sudo[326084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:17:38 np0005541914.localdomain sudo[326084]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:17:38 np0005541914.localdomain sudo[326084]: pam_unix(sudo:session): session closed for user root
Dec 02 10:17:38 np0005541914.localdomain ceph-mon[301710]: pgmap v721: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 51 KiB/s wr, 2 op/s
Dec 02 10:17:38 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2826f8b0-e859-462c-8596-fb04c439e342", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:39 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2826f8b0-e859-462c-8596-fb04c439e342", "format": "json"}]: dispatch
Dec 02 10:17:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v722: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 91 KiB/s wr, 5 op/s
Dec 02 10:17:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:39.400 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:40 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:17:41 np0005541914.localdomain ceph-mon[301710]: pgmap v722: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 91 KiB/s wr, 5 op/s
Dec 02 10:17:41 np0005541914.localdomain podman[326103]: 2025-12-02 10:17:41.10179987 +0000 UTC m=+0.105262237 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd)
Dec 02 10:17:41 np0005541914.localdomain podman[326103]: 2025-12-02 10:17:41.116001935 +0000 UTC m=+0.119464302 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:17:41 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:17:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v723: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 434 B/s rd, 78 KiB/s wr, 4 op/s
Dec 02 10:17:41 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:41.726 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:17:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:17:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:17:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:17:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:17:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:17:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:17:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:17:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:17:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:17:42 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:17:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:17:42 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2826f8b0-e859-462c-8596-fb04c439e342", "format": "json"}]: dispatch
Dec 02 10:17:42 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:2826f8b0-e859-462c-8596-fb04c439e342, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:42 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:2826f8b0-e859-462c-8596-fb04c439e342, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:42 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:42.450+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2826f8b0-e859-462c-8596-fb04c439e342' of type subvolume
Dec 02 10:17:42 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '2826f8b0-e859-462c-8596-fb04c439e342' of type subvolume
Dec 02 10:17:42 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2826f8b0-e859-462c-8596-fb04c439e342", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:42 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2826f8b0-e859-462c-8596-fb04c439e342, vol_name:cephfs) < ""
Dec 02 10:17:42 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/2826f8b0-e859-462c-8596-fb04c439e342'' moved to trashcan
Dec 02 10:17:42 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:17:42 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:2826f8b0-e859-462c-8596-fb04c439e342, vol_name:cephfs) < ""
Dec 02 10:17:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:17:43 np0005541914.localdomain ceph-mon[301710]: pgmap v723: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 434 B/s rd, 78 KiB/s wr, 4 op/s
Dec 02 10:17:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:17:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v724: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 70 KiB/s wr, 3 op/s
Dec 02 10:17:44 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2826f8b0-e859-462c-8596-fb04c439e342", "format": "json"}]: dispatch
Dec 02 10:17:44 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2826f8b0-e859-462c-8596-fb04c439e342", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:44.403 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:45 np0005541914.localdomain ceph-mon[301710]: pgmap v724: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 70 KiB/s wr, 3 op/s
Dec 02 10:17:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v725: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 70 KiB/s wr, 3 op/s
Dec 02 10:17:46 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:46.773 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5b0a592a-aac6-453e-a44c-9563c7dadce2", "format": "json"}]: dispatch
Dec 02 10:17:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:46 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:17:46.956+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b0a592a-aac6-453e-a44c-9563c7dadce2' of type subvolume
Dec 02 10:17:46 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '5b0a592a-aac6-453e-a44c-9563c7dadce2' of type subvolume
Dec 02 10:17:46 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5b0a592a-aac6-453e-a44c-9563c7dadce2", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, vol_name:cephfs) < ""
Dec 02 10:17:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/5b0a592a-aac6-453e-a44c-9563c7dadce2'' moved to trashcan
Dec 02 10:17:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:17:46 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:5b0a592a-aac6-453e-a44c-9563c7dadce2, vol_name:cephfs) < ""
Dec 02 10:17:47 np0005541914.localdomain ceph-mon[301710]: pgmap v725: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 70 KiB/s wr, 3 op/s
Dec 02 10:17:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v726: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 196 B/s rd, 67 KiB/s wr, 2 op/s
Dec 02 10:17:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:17:48 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5b0a592a-aac6-453e-a44c-9563c7dadce2", "format": "json"}]: dispatch
Dec 02 10:17:48 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5b0a592a-aac6-453e-a44c-9563c7dadce2", "force": true, "format": "json"}]: dispatch
Dec 02 10:17:49 np0005541914.localdomain ceph-mon[301710]: pgmap v726: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 196 B/s rd, 67 KiB/s wr, 2 op/s
Dec 02 10:17:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v727: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 78 KiB/s wr, 3 op/s
Dec 02 10:17:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:49.406 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch
Dec 02 10:17:49 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:51 np0005541914.localdomain ceph-mon[301710]: pgmap v727: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 78 KiB/s wr, 3 op/s
Dec 02 10:17:51 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch
Dec 02 10:17:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v728: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 51 KiB/s wr, 2 op/s
Dec 02 10:17:51 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:17:51.714 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:17:51 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:17:51.715 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:17:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:51.742 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:51 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:51.775 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:17:52 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch
Dec 02 10:17:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, vol_name:cephfs) < ""
Dec 02 10:17:52 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, vol_name:cephfs) < ""
Dec 02 10:17:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:17:52 np0005541914.localdomain ceph-mon[301710]: pgmap v728: 177 pgs: 177 active+clean; 224 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 51 KiB/s wr, 2 op/s
Dec 02 10:17:52 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch
Dec 02 10:17:52 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:17:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v729: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 74 KiB/s wr, 3 op/s
Dec 02 10:17:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:54.410 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:54 np0005541914.localdomain ceph-mon[301710]: pgmap v729: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 74 KiB/s wr, 3 op/s
Dec 02 10:17:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v730: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 43 KiB/s wr, 2 op/s
Dec 02 10:17:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e32efa73-156a-46e5-a7b8-279ab8d48b0b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < ""
Dec 02 10:17:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/e32efa73-156a-46e5-a7b8-279ab8d48b0b/.meta.tmp'
Dec 02 10:17:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/e32efa73-156a-46e5-a7b8-279ab8d48b0b/.meta.tmp' to config b'/volumes/_nogroup/e32efa73-156a-46e5-a7b8-279ab8d48b0b/.meta'
Dec 02 10:17:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < ""
Dec 02 10:17:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e32efa73-156a-46e5-a7b8-279ab8d48b0b", "format": "json"}]: dispatch
Dec 02 10:17:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < ""
Dec 02 10:17:55 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < ""
Dec 02 10:17:55 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:17:56 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:56.813 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:56 np0005541914.localdomain ceph-mon[301710]: pgmap v730: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 43 KiB/s wr, 2 op/s
Dec 02 10:17:56 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e32efa73-156a-46e5-a7b8-279ab8d48b0b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:56 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e32efa73-156a-46e5-a7b8-279ab8d48b0b", "format": "json"}]: dispatch
Dec 02 10:17:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v731: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 43 KiB/s wr, 2 op/s
Dec 02 10:17:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:17:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "79a413b5-c28c-47ee-83ea-fa37bb286785", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:57 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:79a413b5-c28c-47ee-83ea-fa37bb286785, vol_name:cephfs) < ""
Dec 02 10:17:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/79a413b5-c28c-47ee-83ea-fa37bb286785/.meta.tmp'
Dec 02 10:17:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/79a413b5-c28c-47ee-83ea-fa37bb286785/.meta.tmp' to config b'/volumes/_nogroup/79a413b5-c28c-47ee-83ea-fa37bb286785/.meta'
Dec 02 10:17:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:79a413b5-c28c-47ee-83ea-fa37bb286785, vol_name:cephfs) < ""
Dec 02 10:17:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "79a413b5-c28c-47ee-83ea-fa37bb286785", "format": "json"}]: dispatch
Dec 02 10:17:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:79a413b5-c28c-47ee-83ea-fa37bb286785, vol_name:cephfs) < ""
Dec 02 10:17:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:79a413b5-c28c-47ee-83ea-fa37bb286785, vol_name:cephfs) < ""
Dec 02 10:17:58 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "e32efa73-156a-46e5-a7b8-279ab8d48b0b", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 02 10:17:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < ""
Dec 02 10:17:58 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:2147483648, prefix:fs subvolume resize, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < ""
Dec 02 10:17:58 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:17:58.717 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:17:58 np0005541914.localdomain ceph-mon[301710]: pgmap v731: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 43 KiB/s wr, 2 op/s
Dec 02 10:17:58 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "79a413b5-c28c-47ee-83ea-fa37bb286785", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:17:58 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "79a413b5-c28c-47ee-83ea-fa37bb286785", "format": "json"}]: dispatch
Dec 02 10:17:58 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:17:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v732: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 65 KiB/s wr, 3 op/s
Dec 02 10:17:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:17:59.423 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:17:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:17:59 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:18:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:18:00 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:18:00 np0005541914.localdomain systemd[1]: tmp-crun.BEHGiu.mount: Deactivated successfully.
Dec 02 10:18:00 np0005541914.localdomain podman[326122]: 2025-12-02 10:18:00.085054225 +0000 UTC m=+0.082641702 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:18:00 np0005541914.localdomain systemd[1]: tmp-crun.IyY5Fb.mount: Deactivated successfully.
Dec 02 10:18:00 np0005541914.localdomain podman[326122]: 2025-12-02 10:18:00.120322276 +0000 UTC m=+0.117909743 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 02 10:18:00 np0005541914.localdomain podman[326123]: 2025-12-02 10:18:00.127395522 +0000 UTC m=+0.121348278 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:18:00 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:18:00 np0005541914.localdomain podman[326123]: 2025-12-02 10:18:00.137440551 +0000 UTC m=+0.131393347 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:18:00 np0005541914.localdomain podman[326135]: 2025-12-02 10:18:00.102897342 +0000 UTC m=+0.084643155 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Dec 02 10:18:00 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:18:00 np0005541914.localdomain podman[326135]: 2025-12-02 10:18:00.18802126 +0000 UTC m=+0.169767113 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:18:00 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:18:00 np0005541914.localdomain podman[326125]: 2025-12-02 10:18:00.206850358 +0000 UTC m=+0.194100569 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 10:18:00 np0005541914.localdomain podman[326125]: 2025-12-02 10:18:00.244254143 +0000 UTC m=+0.231504394 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=edpm, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 02 10:18:00 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:18:00 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "e32efa73-156a-46e5-a7b8-279ab8d48b0b", "new_size": 2147483648, "format": "json"}]: dispatch
Dec 02 10:18:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v733: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 45 KiB/s wr, 2 op/s
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: pgmap v732: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 65 KiB/s wr, 3 op/s
Dec 02 10:18:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:01.851 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e32efa73-156a-46e5-a7b8-279ab8d48b0b", "format": "json"}]: dispatch
Dec 02 10:18:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:01 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:18:01.858+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e32efa73-156a-46e5-a7b8-279ab8d48b0b' of type subvolume
Dec 02 10:18:01 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'e32efa73-156a-46e5-a7b8-279ab8d48b0b' of type subvolume
Dec 02 10:18:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e32efa73-156a-46e5-a7b8-279ab8d48b0b", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < ""
Dec 02 10:18:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/e32efa73-156a-46e5-a7b8-279ab8d48b0b'' moved to trashcan
Dec 02 10:18:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:18:01 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:e32efa73-156a-46e5-a7b8-279ab8d48b0b, vol_name:cephfs) < ""
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.928661) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670681928718, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1288, "num_deletes": 259, "total_data_size": 1444728, "memory_usage": 1471824, "flush_reason": "Manual Compaction"}
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670681937318, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 950051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34284, "largest_seqno": 35570, "table_properties": {"data_size": 944687, "index_size": 2707, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13018, "raw_average_key_size": 20, "raw_value_size": 933281, "raw_average_value_size": 1472, "num_data_blocks": 118, "num_entries": 634, "num_filter_entries": 634, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670612, "oldest_key_time": 1764670612, "file_creation_time": 1764670681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 8677 microseconds, and 2694 cpu microseconds.
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.937349) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 950051 bytes OK
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.937369) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.939390) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.939405) EVENT_LOG_v1 {"time_micros": 1764670681939401, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.939432) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 1438285, prev total WAL file size 1438609, number of live WAL files 2.
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.940109) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353233' seq:72057594037927935, type:22 .. '6C6F676D0034373735' seq:0, type:0; will stop at (end)
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(927KB)], [54(17MB)]
Dec 02 10:18:01 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670681940198, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 18818387, "oldest_snapshot_seqno": -1}
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 14562 keys, 18691694 bytes, temperature: kUnknown
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670682040939, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 18691694, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18606972, "index_size": 47245, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36421, "raw_key_size": 390576, "raw_average_key_size": 26, "raw_value_size": 18358121, "raw_average_value_size": 1260, "num_data_blocks": 1769, "num_entries": 14562, "num_filter_entries": 14562, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670681, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.041178) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 18691694 bytes
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.042968) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.7 rd, 185.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 17.0 +0.0 blob) out(17.8 +0.0 blob), read-write-amplify(39.5) write-amplify(19.7) OK, records in: 15101, records dropped: 539 output_compression: NoCompression
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.042986) EVENT_LOG_v1 {"time_micros": 1764670682042977, "job": 32, "event": "compaction_finished", "compaction_time_micros": 100800, "compaction_time_cpu_micros": 54695, "output_level": 6, "num_output_files": 1, "total_output_size": 18691694, "num_input_records": 15101, "num_output_records": 14562, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670682043246, "job": 32, "event": "table_file_deletion", "file_number": 56}
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670682044999, "job": 32, "event": "table_file_deletion", "file_number": 54}
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:01.939989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.045161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.045169) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.045173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.045176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:18:02.045179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:18:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "79a413b5-c28c-47ee-83ea-fa37bb286785", "format": "json"}]: dispatch
Dec 02 10:18:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:79a413b5-c28c-47ee-83ea-fa37bb286785, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:79a413b5-c28c-47ee-83ea-fa37bb286785, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:02 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:18:02.068+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '79a413b5-c28c-47ee-83ea-fa37bb286785' of type subvolume
Dec 02 10:18:02 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '79a413b5-c28c-47ee-83ea-fa37bb286785' of type subvolume
Dec 02 10:18:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "79a413b5-c28c-47ee-83ea-fa37bb286785", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:79a413b5-c28c-47ee-83ea-fa37bb286785, vol_name:cephfs) < ""
Dec 02 10:18:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/79a413b5-c28c-47ee-83ea-fa37bb286785'' moved to trashcan
Dec 02 10:18:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:18:02 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:79a413b5-c28c-47ee-83ea-fa37bb286785, vol_name:cephfs) < ""
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: pgmap v733: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 45 KiB/s wr, 2 op/s
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e32efa73-156a-46e5-a7b8-279ab8d48b0b", "format": "json"}]: dispatch
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e32efa73-156a-46e5-a7b8-279ab8d48b0b", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "79a413b5-c28c-47ee-83ea-fa37bb286785", "format": "json"}]: dispatch
Dec 02 10:18:02 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "79a413b5-c28c-47ee-83ea-fa37bb286785", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:18:03.187 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:18:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:18:03.187 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:18:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:18:03.188 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:18:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v734: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 89 KiB/s wr, 3 op/s
Dec 02 10:18:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:18:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:18:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:18:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:18:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:18:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19264 "" "Go-http-client/1.1"
Dec 02 10:18:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:04.460 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:04 np0005541914.localdomain ceph-mon[301710]: pgmap v734: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 89 KiB/s wr, 3 op/s
Dec 02 10:18:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/815784916' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:18:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/815784916' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:18:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v735: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 66 KiB/s wr, 2 op/s
Dec 02 10:18:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:18:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:18:06 np0005541914.localdomain systemd[1]: tmp-crun.GiX8Wv.mount: Deactivated successfully.
Dec 02 10:18:06 np0005541914.localdomain podman[326208]: 2025-12-02 10:18:06.10078138 +0000 UTC m=+0.105526034 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 10:18:06 np0005541914.localdomain podman[326208]: 2025-12-02 10:18:06.115039988 +0000 UTC m=+0.119784652 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 10:18:06 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:18:06 np0005541914.localdomain podman[326209]: 2025-12-02 10:18:06.181441992 +0000 UTC m=+0.182447921 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, version=9.6, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Dec 02 10:18:06 np0005541914.localdomain podman[326209]: 2025-12-02 10:18:06.198113912 +0000 UTC m=+0.199119841 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 10:18:06 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:18:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:06.892 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:18:06
Dec 02 10:18:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:18:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:18:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['manila_metadata', 'backups', '.mgr', 'images', 'manila_data', 'vms', 'volumes']
Dec 02 10:18:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:18:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:18:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:18:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:18:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < ""
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fd3a2f155b0>)]
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc/.meta.tmp'
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc/.meta.tmp' to config b'/volumes/_nogroup/3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc/.meta'
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:2147483648, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < ""
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "format": "json"}]: dispatch
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < ""
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < ""
Dec 02 10:18:07 np0005541914.localdomain ceph-mon[301710]: pgmap v735: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 66 KiB/s wr, 2 op/s
Dec 02 10:18:07 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:18:07 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "format": "json"}]: dispatch
Dec 02 10:18:07 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:18:07 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2957551482' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v736: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 66 KiB/s wr, 2 op/s
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 1.3631525683975433e-06 of space, bias 1.0, pg target 0.0002712673611111111 quantized to 32 (current 32)
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.0023612528789782243 of space, bias 4.0, pg target 1.8795572916666667 quantized to 16 (current 16)
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:18:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:18:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:18:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:08.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:08 np0005541914.localdomain ceph-mon[301710]: pgmap v736: 177 pgs: 177 active+clean; 225 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 66 KiB/s wr, 2 op/s
Dec 02 10:18:08 np0005541914.localdomain ceph-mon[301710]: mgrmap e52: np0005541914.lljzmk(active, since 18m), standbys: np0005541912.qwddia, np0005541913.mfesdm
Dec 02 10:18:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/893749530' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:18:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v737: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 82 KiB/s wr, 4 op/s
Dec 02 10:18:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:09.494 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:09.526 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:10.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:10 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Dec 02 10:18:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < ""
Dec 02 10:18:10 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_resize(format:json, new_size:1073741824, no_shrink:True, prefix:fs subvolume resize, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < ""
Dec 02 10:18:10 np0005541914.localdomain ceph-mon[301710]: pgmap v737: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 82 KiB/s wr, 4 op/s
Dec 02 10:18:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v738: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 60 KiB/s wr, 3 op/s
Dec 02 10:18:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:11.925 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:11 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:18:12 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Dec 02 10:18:12 np0005541914.localdomain systemd[1]: tmp-crun.haXtEk.mount: Deactivated successfully.
Dec 02 10:18:12 np0005541914.localdomain podman[326251]: 2025-12-02 10:18:12.091041436 +0000 UTC m=+0.093526627 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible)
Dec 02 10:18:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:18:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:18:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:18:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:18:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:18:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:18:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:18:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:18:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:18:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:18:12 np0005541914.localdomain podman[326251]: 2025-12-02 10:18:12.133861698 +0000 UTC m=+0.136346909 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:18:12 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:18:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:12.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:18:13 np0005541914.localdomain ceph-mon[301710]: pgmap v738: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 60 KiB/s wr, 3 op/s
Dec 02 10:18:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v739: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 84 KiB/s wr, 4 op/s
Dec 02 10:18:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:13.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:13.575 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:18:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:13.576 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:18:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:13.576 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:18:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:13.577 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:18:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:13.577 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:18:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "format": "json"}]: dispatch
Dec 02 10:18:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:13 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:18:13.682+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc' of type subvolume
Dec 02 10:18:13 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc' of type subvolume
Dec 02 10:18:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < ""
Dec 02 10:18:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc'' moved to trashcan
Dec 02 10:18:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:18:13 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc, vol_name:cephfs) < ""
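Editor's note: the block above is the usual delete path for a plain (non-clone) subvolume: "fs clone status" is rejected with (95) Operation not supported, after which the caller falls back to "fs subvolume rm ... --force" and the path is moved to the trashcan. A minimal sketch of that probe-then-delete flow with the same client identity; treating any non-zero exit from the status probe as "not a clone" is an assumption about the CLI's error mapping.

    import subprocess

    BASE = ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    SUB = "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc"  # from the log entries above

    # 1) Probe clone status; a plain subvolume answers with
    #    "(95) Operation not supported", i.e. a non-zero exit.
    status = subprocess.run(BASE + ["fs", "clone", "status", "cephfs", SUB],
                            capture_output=True, text=True)
    if status.returncode != 0:
        # 2) Not an in-progress clone: remove the subvolume directly.
        subprocess.run(BASE + ["fs", "subvolume", "rm", "cephfs", SUB, "--force"],
                       check=True)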
Dec 02 10:18:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:18:13 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3070259623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.015 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:18:14 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3070259623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.257 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.259 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11395MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.260 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.260 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.459 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.460 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.510 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.518 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing inventories for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.578 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating ProviderTree inventory for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.579 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Updating inventory in ProviderTree for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.595 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing aggregate associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.618 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Refreshing trait associations for resource provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1, traits: HW_CPU_X86_SSE41,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AVX,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,HW_CPU_X86_SSE42,HW_CPU_X86_FMA3,COMPUTE_TRUSTED_CERTS,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_IDE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VOLUME_EXTEND,COMPUTE_DEVICE_TAGGING,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI2,HW_CPU_X86_F16C,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSE,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE4A,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_AKI _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 02 10:18:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:14.637 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:18:15 np0005541914.localdomain ceph-mon[301710]: pgmap v739: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 84 KiB/s wr, 4 op/s
Dec 02 10:18:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "format": "json"}]: dispatch
Dec 02 10:18:15 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3c3c7fb5-6d8f-4a5a-ae27-22b86bd4eddc", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:18:15 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3788442662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:18:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:15.112 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:18:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:15.118 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:18:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:15.134 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:18:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:15.137 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:18:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:15.137 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
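Editor's note: each update_available_resource pass above shells out to `ceph df --format=json` (the exact command nova logs at 10:18:13.577 and 10:18:14.637). A hedged sketch of reading the cluster totals from that JSON; the "stats" field names are an assumption about the ceph df output shape, not something recorded in this log.

    import json
    import subprocess

    # Same command nova_compute runs via oslo_concurrency.processutils.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout

    stats = json.loads(out).get("stats", {})  # field names assumed
    gib = 1024 ** 3
    print("avail GiB:", round(stats.get("total_avail_bytes", 0) / gib, 1))
    print("total GiB:", round(stats.get("total_bytes", 0) / gib, 1))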
Dec 02 10:18:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v740: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 40 KiB/s wr, 2 op/s
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:18:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:18:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
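Editor's note: the ceilometer agent above skips every libvirt-backed pollster because no instances exist on this host (consistent with nova reporting used_vcpus=0 at 10:18:14.460). A small, illustrative way to tally these skips per meter from a saved journal excerpt; the input file name is a placeholder.

    import re
    from collections import Counter

    # Count "Skip pollster <meter>" occurrences per meter in an excerpt.
    pattern = re.compile(r"Skip pollster ([\w.]+),")
    counts = Counter()
    with open("journal-excerpt.log") as fh:   # placeholder path
        for line in fh:
            m = pattern.search(line)
            if m:
                counts[m.group(1)] += 1
    for meter, n in counts.most_common():
        print(f"{n:4d}  {meter}")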
Dec 02 10:18:16 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3788442662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:18:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:16.139 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:16.139 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:18:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:16.140 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:18:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:16.157 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:18:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:16.157 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:16.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:16.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:16.529 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 02 10:18:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "20f292f4-5867-4407-9e49-afe0674f9a28", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:18:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:20f292f4-5867-4407-9e49-afe0674f9a28, vol_name:cephfs) < ""
Dec 02 10:18:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/20f292f4-5867-4407-9e49-afe0674f9a28/.meta.tmp'
Dec 02 10:18:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/20f292f4-5867-4407-9e49-afe0674f9a28/.meta.tmp' to config b'/volumes/_nogroup/20f292f4-5867-4407-9e49-afe0674f9a28/.meta'
Dec 02 10:18:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:20f292f4-5867-4407-9e49-afe0674f9a28, vol_name:cephfs) < ""
Dec 02 10:18:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "20f292f4-5867-4407-9e49-afe0674f9a28", "format": "json"}]: dispatch
Dec 02 10:18:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:20f292f4-5867-4407-9e49-afe0674f9a28, vol_name:cephfs) < ""
Dec 02 10:18:16 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:20f292f4-5867-4407-9e49-afe0674f9a28, vol_name:cephfs) < ""
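Editor's note: the create/getpath pair above mirrors the command JSON shown in the audit entries (and echoed by the monitor at 10:18:18). For illustration only, the equivalent CLI calls from Python; --namespace-isolated and --mode are standard `fs subvolume create` options, and the printed path shape is an assumption.

    import subprocess

    BASE = ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    SUB = "20f292f4-5867-4407-9e49-afe0674f9a28"   # from the log entries above

    # Create a 1 GiB, namespace-isolated subvolume, then resolve its data path.
    subprocess.run(BASE + ["fs", "subvolume", "create", "cephfs", SUB,
                           "1073741824", "--namespace-isolated", "--mode", "0755"],
                   check=True)
    path = subprocess.run(BASE + ["fs", "subvolume", "getpath", "cephfs", SUB],
                          capture_output=True, text=True, check=True).stdout.strip()
    print(path)   # e.g. /volumes/_nogroup/20f292f4-.../<uuid>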
Dec 02 10:18:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:16.966 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b888758a-b516-4f6f-a2a7-c3912230af77", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:18:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b888758a-b516-4f6f-a2a7-c3912230af77, vol_name:cephfs) < ""
Dec 02 10:18:17 np0005541914.localdomain ceph-mon[301710]: pgmap v740: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 40 KiB/s wr, 2 op/s
Dec 02 10:18:17 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:18:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/b888758a-b516-4f6f-a2a7-c3912230af77/.meta.tmp'
Dec 02 10:18:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/b888758a-b516-4f6f-a2a7-c3912230af77/.meta.tmp' to config b'/volumes/_nogroup/b888758a-b516-4f6f-a2a7-c3912230af77/.meta'
Dec 02 10:18:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_create(format:json, mode:0755, namespace_isolated:True, prefix:fs subvolume create, size:1073741824, sub_name:b888758a-b516-4f6f-a2a7-c3912230af77, vol_name:cephfs) < ""
Dec 02 10:18:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b888758a-b516-4f6f-a2a7-c3912230af77", "format": "json"}]: dispatch
Dec 02 10:18:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b888758a-b516-4f6f-a2a7-c3912230af77, vol_name:cephfs) < ""
Dec 02 10:18:17 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_getpath(format:json, prefix:fs subvolume getpath, sub_name:b888758a-b516-4f6f-a2a7-c3912230af77, vol_name:cephfs) < ""
Dec 02 10:18:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v741: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 40 KiB/s wr, 2 op/s
Dec 02 10:18:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:17.542 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:17.542 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:18:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:18:18 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "20f292f4-5867-4407-9e49-afe0674f9a28", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:18:18 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "20f292f4-5867-4407-9e49-afe0674f9a28", "format": "json"}]: dispatch
Dec 02 10:18:18 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b888758a-b516-4f6f-a2a7-c3912230af77", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Dec 02 10:18:18 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b888758a-b516-4f6f-a2a7-c3912230af77", "format": "json"}]: dispatch
Dec 02 10:18:18 np0005541914.localdomain ceph-mon[301710]: from='client.15678 172.18.0.34:0/1767085859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 02 10:18:19 np0005541914.localdomain ceph-mon[301710]: pgmap v741: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 40 KiB/s wr, 2 op/s
Dec 02 10:18:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v742: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 72 KiB/s wr, 4 op/s
Dec 02 10:18:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:19.536 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "20f292f4-5867-4407-9e49-afe0674f9a28", "format": "json"}]: dispatch
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:20f292f4-5867-4407-9e49-afe0674f9a28, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:20f292f4-5867-4407-9e49-afe0674f9a28, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:20 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:18:20.744+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '20f292f4-5867-4407-9e49-afe0674f9a28' of type subvolume
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume '20f292f4-5867-4407-9e49-afe0674f9a28' of type subvolume
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "20f292f4-5867-4407-9e49-afe0674f9a28", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:20f292f4-5867-4407-9e49-afe0674f9a28, vol_name:cephfs) < ""
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/20f292f4-5867-4407-9e49-afe0674f9a28'' moved to trashcan
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:20f292f4-5867-4407-9e49-afe0674f9a28, vol_name:cephfs) < ""
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b888758a-b516-4f6f-a2a7-c3912230af77", "format": "json"}]: dispatch
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:b888758a-b516-4f6f-a2a7-c3912230af77, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:b888758a-b516-4f6f-a2a7-c3912230af77, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:20 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:18:20.965+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b888758a-b516-4f6f-a2a7-c3912230af77' of type subvolume
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'b888758a-b516-4f6f-a2a7-c3912230af77' of type subvolume
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b888758a-b516-4f6f-a2a7-c3912230af77", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b888758a-b516-4f6f-a2a7-c3912230af77, vol_name:cephfs) < ""
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/b888758a-b516-4f6f-a2a7-c3912230af77'' moved to trashcan
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:18:20 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:b888758a-b516-4f6f-a2a7-c3912230af77, vol_name:cephfs) < ""
Dec 02 10:18:21 np0005541914.localdomain ceph-mon[301710]: pgmap v742: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 72 KiB/s wr, 4 op/s
Dec 02 10:18:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v743: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 56 KiB/s wr, 2 op/s
Dec 02 10:18:22 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:22.000 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "20f292f4-5867-4407-9e49-afe0674f9a28", "format": "json"}]: dispatch
Dec 02 10:18:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "20f292f4-5867-4407-9e49-afe0674f9a28", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b888758a-b516-4f6f-a2a7-c3912230af77", "format": "json"}]: dispatch
Dec 02 10:18:22 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b888758a-b516-4f6f-a2a7-c3912230af77", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 10:18:22 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 1200.0 total, 600.0 interval
                                                           Cumulative writes: 4608 writes, 35K keys, 4608 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s
                                                           Cumulative WAL: 4608 writes, 4608 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 2487 writes, 13K keys, 2487 commit groups, 1.0 writes per commit group, ingest: 18.30 MB, 0.03 MB/s
                                                           Interval WAL: 2487 writes, 2487 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    123.3      0.30              0.11        16    0.019       0      0       0.0       0.0
                                                             L6      1/0   17.83 MB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   6.9    170.7    158.7      1.61              0.71        15    0.107    203K   7710       0.0       0.0
                                                            Sum      1/0   17.83 MB   0.0      0.3     0.0      0.2       0.3      0.1       0.0   7.9    143.8    153.1      1.91              0.81        31    0.062    203K   7710       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0  13.4    158.9    160.4      0.93              0.43        16    0.058    114K   4245       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            Low      0/0    0.00 KB   0.0      0.3     0.0      0.2       0.2      0.0       0.0   0.0    170.7    158.7      1.61              0.71        15    0.107    203K   7710       0.0       0.0
                                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    124.3      0.30              0.11        15    0.020       0      0       0.0       0.0
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.002       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 1200.0 total, 600.0 interval
                                                           Flush(GB): cumulative 0.036, interval 0.011
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.29 GB write, 0.24 MB/s write, 0.27 GB read, 0.23 MB/s read, 1.9 seconds
                                                           Interval compaction: 0.15 GB write, 0.25 MB/s write, 0.14 GB read, 0.25 MB/s read, 0.9 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x562ea3bdf1f0#2 capacity: 304.00 MB usage: 22.75 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000232 secs_since: 0
                                                           Block cache entry stats(count,size,portion): DataBlock(1308,21.44 MB,7.05272%) FilterBlock(31,587.92 KB,0.188863%) IndexBlock(31,756.09 KB,0.242886%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
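Editor's note: the indented block above is a single multi-line RocksDB "------- DUMPING STATS -------" message from the monitor, rendered by journald with the continuation lines left-padded. If you want the cumulative write throughput out of such a dump, a regex over the "Cumulative writes:" line is enough; sketch only, the input path is a placeholder.

    import re

    # Pull "ingest: 0.05 GB, 0.05 MB/s" out of the "Cumulative writes:" line.
    cumulative = re.compile(
        r"Cumulative writes: (\d+) writes, .*ingest: ([\d.]+) GB, ([\d.]+) MB/s")
    with open("journal-excerpt.log") as fh:   # placeholder path
        for line in fh:
            m = cumulative.search(line)
            if m:
                writes, gb, rate = m.groups()
                print(f"{writes} writes, {gb} GB ingested at {rate} MB/s")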
Dec 02 10:18:22 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:22.524 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:18:23 np0005541914.localdomain ceph-mon[301710]: pgmap v743: 177 pgs: 177 active+clean; 226 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 56 KiB/s wr, 2 op/s
Dec 02 10:18:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v744: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 98 KiB/s wr, 4 op/s
Dec 02 10:18:24 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/59618546' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:18:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:24.579 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:25 np0005541914.localdomain ceph-mon[301710]: pgmap v744: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 98 KiB/s wr, 4 op/s
Dec 02 10:18:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v745: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 74 KiB/s wr, 3 op/s
Dec 02 10:18:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:26.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:26.528 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 02 10:18:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:27.051 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch
Dec 02 10:18:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, vol_name:cephfs) < ""
Dec 02 10:18:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/a33ca4d0-df57-473d-9fc9-9e83431eec70'' moved to trashcan
Dec 02 10:18:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:18:27 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:a33ca4d0-df57-473d-9fc9-9e83431eec70, vol_name:cephfs) < ""
Dec 02 10:18:27 np0005541914.localdomain ceph-mon[301710]: pgmap v745: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 74 KiB/s wr, 3 op/s
Dec 02 10:18:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v746: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 74 KiB/s wr, 3 op/s
Dec 02 10:18:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:27.452 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 02 10:18:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:27.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:18:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "format": "json"}]: dispatch
Dec 02 10:18:28 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a33ca4d0-df57-473d-9fc9-9e83431eec70", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:28 np0005541914.localdomain ceph-mon[301710]: pgmap v746: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 74 KiB/s wr, 3 op/s
Dec 02 10:18:29 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3641618941' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:18:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v747: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 85 KiB/s wr, 4 op/s
Dec 02 10:18:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:29.583 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "snap_name": "7af3a8b2-5504-4261-9144-956137288f3e_fb2677f6-3453-4240-a85b-11d96bc9c80e", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7af3a8b2-5504-4261-9144-956137288f3e_fb2677f6-3453-4240-a85b-11d96bc9c80e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < ""
Dec 02 10:18:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp'
Dec 02 10:18:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta'
Dec 02 10:18:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7af3a8b2-5504-4261-9144-956137288f3e_fb2677f6-3453-4240-a85b-11d96bc9c80e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < ""
Dec 02 10:18:30 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "snap_name": "7af3a8b2-5504-4261-9144-956137288f3e", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7af3a8b2-5504-4261-9144-956137288f3e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < ""
Dec 02 10:18:30 np0005541914.localdomain ceph-mon[301710]: pgmap v747: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 85 KiB/s wr, 4 op/s
Dec 02 10:18:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] wrote 155 bytes to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp'
Dec 02 10:18:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.metadata_manager] Renamed b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta.tmp' to config b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1/.meta'
Dec 02 10:18:30 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_snapshot_rm(force:True, format:json, prefix:fs subvolume snapshot rm, snap_name:7af3a8b2-5504-4261-9144-956137288f3e, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < ""
Dec 02 10:18:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:18:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:18:30 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:18:31 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:18:31 np0005541914.localdomain systemd[1]: tmp-crun.H1jPON.mount: Deactivated successfully.
Dec 02 10:18:31 np0005541914.localdomain systemd[1]: tmp-crun.rl4K4f.mount: Deactivated successfully.
Dec 02 10:18:31 np0005541914.localdomain podman[326315]: 2025-12-02 10:18:31.095265524 +0000 UTC m=+0.091064971 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:18:31 np0005541914.localdomain podman[326322]: 2025-12-02 10:18:31.112877003 +0000 UTC m=+0.099971024 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 02 10:18:31 np0005541914.localdomain podman[326314]: 2025-12-02 10:18:31.06771067 +0000 UTC m=+0.070684966 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 02 10:18:31 np0005541914.localdomain podman[326315]: 2025-12-02 10:18:31.132979919 +0000 UTC m=+0.128779356 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:18:31 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:18:31 np0005541914.localdomain podman[326314]: 2025-12-02 10:18:31.151250039 +0000 UTC m=+0.154224295 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Dec 02 10:18:31 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:18:31 np0005541914.localdomain podman[326321]: 2025-12-02 10:18:31.203312494 +0000 UTC m=+0.194552821 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:18:31 np0005541914.localdomain podman[326322]: 2025-12-02 10:18:31.225638538 +0000 UTC m=+0.212732539 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller)
Dec 02 10:18:31 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:18:31 np0005541914.localdomain podman[326321]: 2025-12-02 10:18:31.243025301 +0000 UTC m=+0.234265598 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Dec 02 10:18:31 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:18:31 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "snap_name": "7af3a8b2-5504-4261-9144-956137288f3e_fb2677f6-3453-4240-a85b-11d96bc9c80e", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:31 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "snap_name": "7af3a8b2-5504-4261-9144-956137288f3e", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v748: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 53 KiB/s wr, 2 op/s
Dec 02 10:18:32 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec 02 10:18:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:32.099 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:18:32 np0005541914.localdomain ceph-mon[301710]: pgmap v748: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 341 B/s rd, 53 KiB/s wr, 2 op/s
Dec 02 10:18:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v749: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 71 KiB/s wr, 5 op/s
Dec 02 10:18:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "format": "json"}]: dispatch
Dec 02 10:18:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_clone_status(clone_name:ce71e0bd-fac0-489e-baae-8568840b81a1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_clone_status(clone_name:ce71e0bd-fac0-489e-baae-8568840b81a1, format:json, prefix:fs clone status, vol_name:cephfs) < ""
Dec 02 10:18:33 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:18:33.439+0000 7fd37dd6f640 -1 mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ce71e0bd-fac0-489e-baae-8568840b81a1' of type subvolume
Dec 02 10:18:33 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported operation 'clone-status' is not allowed on subvolume 'ce71e0bd-fac0-489e-baae-8568840b81a1' of type subvolume
Dec 02 10:18:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Starting _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < ""
Dec 02 10:18:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.operations.versions.subvolume_base] subvolume path 'b'/volumes/_nogroup/ce71e0bd-fac0-489e-baae-8568840b81a1'' moved to trashcan
Dec 02 10:18:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec 02 10:18:33 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO volumes.module] Finishing _cmd_fs_subvolume_rm(force:True, format:json, prefix:fs subvolume rm, sub_name:ce71e0bd-fac0-489e-baae-8568840b81a1, vol_name:cephfs) < ""
Dec 02 10:18:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:18:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:18:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:18:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:18:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:18:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19262 "" "Go-http-client/1.1"
Dec 02 10:18:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:34.586 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:34 np0005541914.localdomain ceph-mon[301710]: pgmap v749: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 71 KiB/s wr, 5 op/s
Dec 02 10:18:34 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "format": "json"}]: dispatch
Dec 02 10:18:34 np0005541914.localdomain ceph-mon[301710]: from='client.15678 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ce71e0bd-fac0-489e-baae-8568840b81a1", "force": true, "format": "json"}]: dispatch
Dec 02 10:18:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v750: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 28 KiB/s wr, 3 op/s
Dec 02 10:18:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:18:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:18:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:18:36 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:18:36 np0005541914.localdomain ceph-mon[301710]: pgmap v750: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 28 KiB/s wr, 3 op/s
Dec 02 10:18:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:37.000 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:18:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:18:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fd397e9aac0>)]
Dec 02 10:18:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 02 10:18:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:18:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:18:37 np0005541914.localdomain podman[326394]: 2025-12-02 10:18:37.08458053 +0000 UTC m=+0.080699623 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9)
Dec 02 10:18:37 np0005541914.localdomain podman[326394]: 2025-12-02 10:18:37.118816819 +0000 UTC m=+0.114935922 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-minimal-container)
Dec 02 10:18:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:37.136 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:37 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:18:37 np0005541914.localdomain systemd[1]: tmp-crun.y4PCD8.mount: Deactivated successfully.
Dec 02 10:18:37 np0005541914.localdomain podman[326393]: 2025-12-02 10:18:37.169714898 +0000 UTC m=+0.171122974 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:18:37 np0005541914.localdomain podman[326393]: 2025-12-02 10:18:37.178672503 +0000 UTC m=+0.180080579 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:18:37 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:18:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v751: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 28 KiB/s wr, 3 op/s
Dec 02 10:18:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:18:38 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e285 e285: 6 total, 6 up, 6 in
Dec 02 10:18:38 np0005541914.localdomain sudo[326436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:18:38 np0005541914.localdomain sudo[326436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:18:38 np0005541914.localdomain sudo[326436]: pam_unix(sudo:session): session closed for user root
Dec 02 10:18:38 np0005541914.localdomain sudo[326454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:18:38 np0005541914.localdomain sudo[326454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:18:38 np0005541914.localdomain sudo[326454]: pam_unix(sudo:session): session closed for user root
Dec 02 10:18:39 np0005541914.localdomain ceph-mon[301710]: pgmap v751: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 28 KiB/s wr, 3 op/s
Dec 02 10:18:39 np0005541914.localdomain ceph-mon[301710]: mgrmap e53: np0005541914.lljzmk(active, since 18m), standbys: np0005541912.qwddia, np0005541913.mfesdm
Dec 02 10:18:39 np0005541914.localdomain ceph-mon[301710]: osdmap e285: 6 total, 6 up, 6 in
Dec 02 10:18:39 np0005541914.localdomain sudo[326504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:18:39 np0005541914.localdomain sudo[326504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:18:39 np0005541914.localdomain sudo[326504]: pam_unix(sudo:session): session closed for user root
Dec 02 10:18:39 np0005541914.localdomain sudo[326522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid c7c8e171-a193-56fb-95fa-8879fcfa7074 -- inventory --format=json-pretty --filter-for-batch
Dec 02 10:18:39 np0005541914.localdomain sudo[326522]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:18:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v753: 177 pgs: 3 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 162 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 59 KiB/s wr, 4 op/s
Dec 02 10:18:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:39.587 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:39 np0005541914.localdomain podman[326580]: 
Dec 02 10:18:39 np0005541914.localdomain podman[326580]: 2025-12-02 10:18:39.920283463 +0000 UTC m=+0.062971481 container create bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_torvalds, architecture=x86_64, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, vendor=Red Hat, Inc., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2025-11-26T19:44:28Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, version=7, release=1763362218, description=Red Hat Ceph Storage 7)
Dec 02 10:18:39 np0005541914.localdomain systemd[1]: Started libpod-conmon-bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544.scope.
Dec 02 10:18:39 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:18:39 np0005541914.localdomain podman[326580]: 2025-12-02 10:18:39.987809872 +0000 UTC m=+0.130497920 container init bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_torvalds, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.41.4, ceph=True, release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, vendor=Red Hat, Inc., version=7)
Dec 02 10:18:39 np0005541914.localdomain podman[326580]: 2025-12-02 10:18:39.892035408 +0000 UTC m=+0.034723486 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 10:18:40 np0005541914.localdomain podman[326580]: 2025-12-02 10:18:40.000643726 +0000 UTC m=+0.143331794 container start bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_torvalds, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, RELEASE=main, architecture=x86_64, io.buildah.version=1.41.4, release=1763362218, io.openshift.tags=rhceph ceph, name=rhceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 10:18:40 np0005541914.localdomain podman[326580]: 2025-12-02 10:18:40.000916794 +0000 UTC m=+0.143604862 container attach bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_torvalds, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, name=rhceph, build-date=2025-11-26T19:44:28Z, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, vcs-type=git, io.buildah.version=1.41.4, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, RELEASE=main)
Dec 02 10:18:40 np0005541914.localdomain condescending_torvalds[326595]: 167 167
Dec 02 10:18:40 np0005541914.localdomain systemd[1]: libpod-bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544.scope: Deactivated successfully.
Dec 02 10:18:40 np0005541914.localdomain podman[326580]: 2025-12-02 10:18:40.00864459 +0000 UTC m=+0.151332638 container died bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_torvalds, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, ceph=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1763362218, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2025-11-26T19:44:28Z, io.buildah.version=1.41.4, architecture=x86_64, distribution-scope=public, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 10:18:40 np0005541914.localdomain podman[326600]: 2025-12-02 10:18:40.102127955 +0000 UTC m=+0.080789687 container remove bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=condescending_torvalds, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, architecture=x86_64, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, build-date=2025-11-26T19:44:28Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, distribution-scope=public)
Dec 02 10:18:40 np0005541914.localdomain systemd[1]: libpod-conmon-bcf97500dd6dbaef563bd65c8e5e96b8cbfb05504f85ce4372ef9085d0e4f544.scope: Deactivated successfully.
Dec 02 10:18:40 np0005541914.localdomain podman[326622]: 
Dec 02 10:18:40 np0005541914.localdomain podman[326622]: 2025-12-02 10:18:40.317505964 +0000 UTC m=+0.068271814 container create 05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_leavitt, GIT_BRANCH=main, RELEASE=main, architecture=x86_64, build-date=2025-11-26T19:44:28Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1763362218, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.buildah.version=1.41.4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, ceph=True, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Dec 02 10:18:40 np0005541914.localdomain systemd[1]: Started libpod-conmon-05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515.scope.
Dec 02 10:18:40 np0005541914.localdomain systemd[1]: Started libcrun container.
Dec 02 10:18:40 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39521573b513841daec2656cdb27c44d3524f23bdddb85705a6670147a413798/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 02 10:18:40 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39521573b513841daec2656cdb27c44d3524f23bdddb85705a6670147a413798/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 02 10:18:40 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39521573b513841daec2656cdb27c44d3524f23bdddb85705a6670147a413798/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 02 10:18:40 np0005541914.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39521573b513841daec2656cdb27c44d3524f23bdddb85705a6670147a413798/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 02 10:18:40 np0005541914.localdomain podman[326622]: 2025-12-02 10:18:40.382931788 +0000 UTC m=+0.133697648 container init 05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_leavitt, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_CLEAN=True, version=7, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-11-26T19:44:28Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.buildah.version=1.41.4)
Dec 02 10:18:40 np0005541914.localdomain podman[326622]: 2025-12-02 10:18:40.294017983 +0000 UTC m=+0.044783823 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Dec 02 10:18:40 np0005541914.localdomain podman[326622]: 2025-12-02 10:18:40.393943926 +0000 UTC m=+0.144709776 container start 05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_leavitt, vcs-type=git, com.redhat.component=rhceph-container, build-date=2025-11-26T19:44:28Z, ceph=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_BRANCH=main, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1763362218, io.buildah.version=1.41.4, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, distribution-scope=public)
Dec 02 10:18:40 np0005541914.localdomain podman[326622]: 2025-12-02 10:18:40.394251955 +0000 UTC m=+0.145017855 container attach 05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_leavitt, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, release=1763362218, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, build-date=2025-11-26T19:44:28Z, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, RELEASE=main)
Dec 02 10:18:40 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-705f7264708481aa281e3a4f25153233325637f492ccc70e43236280f73bc22e-merged.mount: Deactivated successfully.
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: pgmap v753: 177 pgs: 3 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 162 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 59 KiB/s wr, 4 op/s
Dec 02 10:18:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v754: 177 pgs: 3 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 162 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 59 KiB/s wr, 4 op/s
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]: [
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:     {
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:         "available": false,
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:         "ceph_device": false,
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:         "lsm_data": {},
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:         "lvs": [],
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:         "path": "/dev/sr0",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:         "rejected_reasons": [
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "Insufficient space (<5GB)",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "Has a FileSystem"
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:         ],
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:         "sys_api": {
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "actuators": null,
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "device_nodes": "sr0",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "human_readable_size": "482.00 KB",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "id_bus": "ata",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "model": "QEMU DVD-ROM",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "nr_requests": "2",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "partitions": {},
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "path": "/dev/sr0",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "removable": "1",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "rev": "2.5+",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "ro": "0",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "rotational": "1",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "sas_address": "",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "sas_device_handle": "",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "scheduler_mode": "mq-deadline",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "sectors": 0,
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "sectorsize": "2048",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "size": 493568.0,
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "support_discard": "0",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "type": "disk",
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:             "vendor": "QEMU"
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:         }
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]:     }
Dec 02 10:18:41 np0005541914.localdomain agitated_leavitt[326637]: ]
Dec 02 10:18:41 np0005541914.localdomain systemd[1]: libpod-05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515.scope: Deactivated successfully.
Dec 02 10:18:41 np0005541914.localdomain systemd[1]: libpod-05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515.scope: Consumed 1.175s CPU time.
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 10:18:41 np0005541914.localdomain podman[328743]: 2025-12-02 10:18:41.583599095 +0000 UTC m=+0.039274984 container died 05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_leavitt, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, vcs-type=git, release=1763362218, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.41.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.expose-services=, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2025-11-26T19:44:28Z, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64)
Dec 02 10:18:41 np0005541914.localdomain systemd[1]: var-lib-containers-storage-overlay-39521573b513841daec2656cdb27c44d3524f23bdddb85705a6670147a413798-merged.mount: Deactivated successfully.
Dec 02 10:18:41 np0005541914.localdomain podman[328743]: 2025-12-02 10:18:41.626048116 +0000 UTC m=+0.081723995 container remove 05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=agitated_leavitt, release=1763362218, ceph=True, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.buildah.version=1.41.4, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2025-11-26T19:44:28Z, vcs-type=git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec 02 10:18:41 np0005541914.localdomain systemd[1]: libpod-conmon-05867c4916a833db831c3ee03bf70eb8cf0521006ddd3dee105751cc0e16d515.scope: Deactivated successfully.
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 10:18:41 np0005541914.localdomain sudo[326522]: pam_unix(sudo:session): session closed for user root
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:18:41 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev d00ff4a1-ce59-44dd-b4e4-7d20c6c4566d (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:18:41 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev d00ff4a1-ce59-44dd-b4e4-7d20c6c4566d (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:18:41 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event d00ff4a1-ce59-44dd-b4e4-7d20c6c4566d (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:18:41 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:18:41 np0005541914.localdomain sudo[328758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:18:41 np0005541914.localdomain sudo[328758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:18:42 np0005541914.localdomain sudo[328758]: pam_unix(sudo:session): session closed for user root
Dec 02 10:18:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:18:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:18:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:18:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:18:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:18:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:18:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:18:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:18:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:18:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:18:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:42.143 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:42 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: pgmap v754: 177 pgs: 3 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 162 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 59 KiB/s wr, 4 op/s
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:18:42 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:18:42 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:18:43 np0005541914.localdomain podman[328776]: 2025-12-02 10:18:43.085879844 +0000 UTC m=+0.085149810 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 02 10:18:43 np0005541914.localdomain podman[328776]: 2025-12-02 10:18:43.127965123 +0000 UTC m=+0.127235129 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:18:43 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:18:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v755: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 40 KiB/s wr, 2 op/s
Dec 02 10:18:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:43.559 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:43 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:18:43.561 159483 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': '0a:ed:9b', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6e:ce:d1:dc:83:80'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 02 10:18:43 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:18:43.561 159483 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 02 10:18:44 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:18:44.219 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:18:44Z, description=, device_id=4ee86722-429a-4a47-a912-a41ad8c5f9ac, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034aecac0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034aec880>], id=739db3f1-5def-4a06-ae92-802b21657418, ip_allocation=immediate, mac_address=fa:16:3e:ea:99:78, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3950, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:18:44Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:18:44 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:18:44 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:18:44 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:18:44 np0005541914.localdomain podman[328811]: 2025-12-02 10:18:44.439037273 +0000 UTC m=+0.060360981 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:18:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:44.589 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:44 np0005541914.localdomain ceph-mon[301710]: pgmap v755: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 40 KiB/s wr, 2 op/s
Dec 02 10:18:44 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:18:44.715 262347 INFO neutron.agent.dhcp.agent [None req-294e6647-79bd-4175-8bef-b2d1ac48c3e1 - - - - - -] DHCP configuration for ports {'739db3f1-5def-4a06-ae92-802b21657418'} is completed
Dec 02 10:18:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:45.193 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v756: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 40 KiB/s wr, 2 op/s
Dec 02 10:18:46 np0005541914.localdomain ceph-mon[301710]: pgmap v756: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 40 KiB/s wr, 2 op/s
Dec 02 10:18:46 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:18:46.913 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:18:46Z, description=, device_id=ff064be5-371a-4a89-8fb9-b1f5eb5224da, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ac8970>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034ac89a0>], id=421cc112-f5c1-4e29-a229-2aafbf7f14ae, ip_allocation=immediate, mac_address=fa:16:3e:ac:f5:c4, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3953, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:18:46Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:18:46 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 e286: 6 total, 6 up, 6 in
Dec 02 10:18:47 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:18:47 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:18:47 np0005541914.localdomain podman[328850]: 2025-12-02 10:18:47.070749976 +0000 UTC m=+0.035863720 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Dec 02 10:18:47 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:18:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:47.144 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v758: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 219 B/s rd, 3.3 KiB/s wr, 1 op/s
Dec 02 10:18:47 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:18:47.563 159483 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=515e0717-8baa-40e6-ac30-5fb148626504, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 02 10:18:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:18:47 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:18:47.605 262347 INFO neutron.agent.dhcp.agent [None req-cddefb12-04f4-4abc-ac82-98c814ae2ab0 - - - - - -] DHCP configuration for ports {'421cc112-f5c1-4e29-a229-2aafbf7f14ae'} is completed
Dec 02 10:18:47 np0005541914.localdomain ceph-mon[301710]: osdmap e286: 6 total, 6 up, 6 in
Dec 02 10:18:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:48.145 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:48 np0005541914.localdomain ceph-mon[301710]: pgmap v758: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 219 B/s rd, 3.3 KiB/s wr, 1 op/s
Dec 02 10:18:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v759: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 02 10:18:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:49.594 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:50.111 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:51 np0005541914.localdomain ceph-mon[301710]: pgmap v759: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 02 10:18:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v760: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 02 10:18:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:52.145 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:52.236 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:18:53 np0005541914.localdomain ceph-mon[301710]: pgmap v760: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.7 KiB/s wr, 0 op/s
Dec 02 10:18:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v761: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:18:54 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:18:54.140 262347 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2025-12-02T10:18:53Z, description=, device_id=146339f0-3e49-419f-a49c-241664c75695, device_owner=network:router_gateway, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c28790>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f4034c28eb0>], id=06e59d5f-dec2-4d76-9835-8a7968e9ba35, ip_allocation=immediate, mac_address=fa:16:3e:fd:06:12, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2025-12-02T08:31:07Z, description=, dns_domain=, id=447a69ac-5cfc-4dee-8482-764b4cafdf04, ipv4_address_scope=None, ipv6_address_scope=None, is_default=False, l2_adjacency=True, mtu=1350, name=public, port_security_enabled=True, project_id=e2d97696ab6749899bb8ba5ce29a3de2, provider:network_type=flat, provider:physical_network=datacentre, provider:segmentation_id=None, qos_policy_id=None, revision_number=2, router:external=True, shared=False, standard_attr_id=29, status=ACTIVE, subnets=['73d42bd3-1113-47f0-b083-570a4d5b4a5b'], tags=[], tenant_id=e2d97696ab6749899bb8ba5ce29a3de2, updated_at=2025-12-02T08:31:14Z, vlan_transparent=None, network_id=447a69ac-5cfc-4dee-8482-764b4cafdf04, port_security_enabled=False, project_id=, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3967, status=DOWN, tags=[], tenant_id=, updated_at=2025-12-02T10:18:53Z on network 447a69ac-5cfc-4dee-8482-764b4cafdf04
Dec 02 10:18:54 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 4 addresses
Dec 02 10:18:54 np0005541914.localdomain podman[328886]: 2025-12-02 10:18:54.350289554 +0000 UTC m=+0.054422179 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:18:54 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:18:54 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:18:54 np0005541914.localdomain neutron_dhcp_agent[262343]: 2025-12-02 10:18:54.574 262347 INFO neutron.agent.dhcp.agent [None req-5485d8ae-b7be-41d2-b1e7-1ebe1f110f0e - - - - - -] DHCP configuration for ports {'06e59d5f-dec2-4d76-9835-8a7968e9ba35'} is completed
Dec 02 10:18:54 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:54.610 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:55 np0005541914.localdomain ceph-mon[301710]: pgmap v761: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:18:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v762: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:18:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:55.889 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:57 np0005541914.localdomain ceph-mon[301710]: pgmap v762: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:18:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:57.147 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v763: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:18:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:18:57 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 3 addresses
Dec 02 10:18:57 np0005541914.localdomain podman[328922]: 2025-12-02 10:18:57.679556819 +0000 UTC m=+0.056683398 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Dec 02 10:18:57 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:18:57 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:18:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:57.800 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:18:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v764: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:18:59 np0005541914.localdomain ceph-mon[301710]: pgmap v763: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:18:59 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:18:59.613 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:00.164 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:00 np0005541914.localdomain ceph-mon[301710]: pgmap v764: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v765: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:01 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:01.966 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:19:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:19:01 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:19:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:19:02 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:19:02 np0005541914.localdomain podman[328944]: 2025-12-02 10:19:02.09850007 +0000 UTC m=+0.095695632 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 02 10:19:02 np0005541914.localdomain podman[328944]: 2025-12-02 10:19:02.131878393 +0000 UTC m=+0.129073915 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:19:02 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:19:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:02.167 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:02 np0005541914.localdomain podman[328945]: 2025-12-02 10:19:02.16933166 +0000 UTC m=+0.160825258 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:19:02 np0005541914.localdomain podman[328945]: 2025-12-02 10:19:02.212003267 +0000 UTC m=+0.203496895 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:19:02 np0005541914.localdomain podman[328952]: 2025-12-02 10:19:02.227578275 +0000 UTC m=+0.209941643 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec 02 10:19:02 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:19:02 np0005541914.localdomain podman[328952]: 2025-12-02 10:19:02.266702974 +0000 UTC m=+0.249066392 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller)
Dec 02 10:19:02 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:19:02 np0005541914.localdomain podman[328946]: 2025-12-02 10:19:02.35306477 +0000 UTC m=+0.339647038 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:19:02 np0005541914.localdomain podman[328946]: 2025-12-02 10:19:02.387302949 +0000 UTC m=+0.373885297 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec 02 10:19:02 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:19:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:19:02 np0005541914.localdomain ceph-mon[301710]: pgmap v765: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:03 np0005541914.localdomain systemd[1]: tmp-crun.cYAAiK.mount: Deactivated successfully.
Dec 02 10:19:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:19:03.188 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:19:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:19:03.189 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:19:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:19:03.189 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:19:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v766: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:19:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:19:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:19:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:19:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:19:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19257 "" "Go-http-client/1.1"
Dec 02 10:19:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:03.918 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:03 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 2 addresses
Dec 02 10:19:03 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:19:03 np0005541914.localdomain podman[329045]: 2025-12-02 10:19:03.969519076 +0000 UTC m=+0.063840897 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3)
Dec 02 10:19:03 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:19:04 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:04.642 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:05 np0005541914.localdomain ceph-mon[301710]: pgmap v766: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2544020483' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:19:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/2544020483' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:19:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v767: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:06 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:06.916 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:19:06
Dec 02 10:19:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:19:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:19:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['vms', 'manila_metadata', 'volumes', '.mgr', 'manila_data', 'images', 'backups']
Dec 02 10:19:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:19:06 np0005541914.localdomain dnsmasq[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/addn_hosts - 1 addresses
Dec 02 10:19:06 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/host
Dec 02 10:19:06 np0005541914.localdomain dnsmasq-dhcp[262677]: read /var/lib/neutron/dhcp/447a69ac-5cfc-4dee-8482-764b4cafdf04/opts
Dec 02 10:19:06 np0005541914.localdomain podman[329083]: 2025-12-02 10:19:06.968625945 +0000 UTC m=+0.061240107 container kill 69e9f3681c291ae784cdfdf66e180ebfe2df616d23152294b3e319f208fe54a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-447a69ac-5cfc-4dee-8482-764b4cafdf04, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 02 10:19:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:19:06 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:19:07 np0005541914.localdomain ceph-mon[301710]: pgmap v767: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:19:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:07.219 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v768: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002499749179927415 of space, bias 4.0, pg target 1.9898003472222223 quantized to 16 (current 16)
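The pg_autoscaler lines above report, per pool, the fraction of raw capacity used, a bias, and a fractional "pg target" that is then quantized. A minimal sketch of that arithmetic, assuming the target is capacity_ratio × bias × raw multiplier, where the multiplier of roughly 200 is an assumption inferred from the logged numbers (e.g. 3.080724804578448e-05 × 200 ≈ 0.00616 for '.mgr'); the autoscaler also keeps the current pg_num (32 for most pools here) when the computed target does not diverge far enough to justify a change:

```python
# Hypothetical reconstruction of the pg target arithmetic seen in the
# pg_autoscaler log lines; raw_multiplier ~= 200 is inferred, not logged.
def pg_target(capacity_ratio: float, bias: float, raw_multiplier: float = 200.0) -> float:
    return capacity_ratio * bias * raw_multiplier


def next_power_of_two(target: float, minimum: int = 1) -> int:
    """Round the fractional target up to the nearest power of two."""
    pg = minimum
    while pg < target:
        pg *= 2
    return pg


for name, ratio, bias in [
    (".mgr", 3.080724804578448e-05, 1.0),   # logged target 0.00616..., quantized 1
    ("vms", 0.0033244564838079286, 1.0),    # logged target 0.66489..., pg_num stays 32
]:
    t = pg_target(ratio, bias)
    print(name, round(t, 6), next_power_of_two(t))
```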
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:19:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:19:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:19:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:19:07 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:19:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3253900841' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:19:08 np0005541914.localdomain podman[329105]: 2025-12-02 10:19:08.080995797 +0000 UTC m=+0.079201617 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:19:08 np0005541914.localdomain podman[329105]: 2025-12-02 10:19:08.092782588 +0000 UTC m=+0.090988388 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:19:08 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:19:08 np0005541914.localdomain podman[329106]: 2025-12-02 10:19:08.135327302 +0000 UTC m=+0.130266892 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, release=1755695350, name=ubi9-minimal, vendor=Red Hat, Inc., architecture=x86_64, version=9.6, distribution-scope=public)
Dec 02 10:19:08 np0005541914.localdomain podman[329106]: 2025-12-02 10:19:08.14605337 +0000 UTC m=+0.140992980 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec 02 10:19:08 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
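The transient systemd units above ("Started /usr/bin/podman healthcheck run ...", then "Deactivated successfully") invoke each container's configured healthcheck and report the result through the exit status. A small sketch of the same invocation for the openstack_network_exporter container named in the log, assuming podman's documented convention of exit 0 for healthy and non-zero for unhealthy:

```python
# Sketch of what a "podman healthcheck run" unit does for one container.
import subprocess

result = subprocess.run(["podman", "healthcheck", "run", "openstack_network_exporter"])
print("healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})")
```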
Dec 02 10:19:09 np0005541914.localdomain ceph-mon[301710]: pgmap v768: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/4109271996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:19:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v769: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:09.545 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:19:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:09.546 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:19:09 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:09.681 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:11 np0005541914.localdomain ceph-mon[301710]: pgmap v769: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v770: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:11.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:19:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:19:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:19:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:19:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:19:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:19:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:19:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:19:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:19:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:19:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:19:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:12.220 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:19:13 np0005541914.localdomain ceph-mon[301710]: pgmap v770: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v771: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:13.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:19:13 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:19:14 np0005541914.localdomain podman[329147]: 2025-12-02 10:19:14.046591777 +0000 UTC m=+0.057733520 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:19:14 np0005541914.localdomain podman[329147]: 2025-12-02 10:19:14.086964193 +0000 UTC m=+0.098105966 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:19:14 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:19:14 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:14.684 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:15 np0005541914.localdomain ceph-mon[301710]: pgmap v771: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v772: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:15.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:19:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:15.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:19:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:15.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:19:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:15.545 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:19:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:15.546 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:19:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:15.565 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:19:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:15.565 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:19:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:15.565 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:19:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:15.566 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:19:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:15.566 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:19:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:19:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3458857762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:19:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:16.026 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
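The resource tracker above shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" while auditing local storage. A minimal sketch of that call and of reading a pool's free space from the result; the JSON field names ('pools', 'stats', 'max_avail') are assumptions about the ceph df output layout, not taken from this log:

```python
# Sketch of the "ceph df" subprocess call logged by oslo_concurrency.processutils.
import json
import subprocess


def ceph_pool_free_gib(pool: str,
                       conf: str = "/etc/ceph/ceph.conf",
                       client: str = "openstack") -> float:
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    df = json.loads(out)
    for p in df.get("pools", []):
        if p.get("name") == pool:
            # max_avail is reported in bytes; convert to GiB.
            return p["stats"]["max_avail"] / 1024 ** 3
    raise ValueError(f"pool {pool!r} not found in ceph df output")


# Example: the pool nova's RBD-backed ephemeral disks would use on this node.
# print(ceph_pool_free_gib("vms"))
```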
Dec 02 10:19:16 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3458857762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:19:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:16.229 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:19:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:16.231 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11372MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:19:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:16.231 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:19:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:16.232 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:19:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:16.291 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:19:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:16.294 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:19:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:16.308 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:19:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:19:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3626538401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:19:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:16.723 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:19:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:16.728 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:19:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:16.742 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
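The inventory dict logged above carries the numbers placement uses for scheduling: for each resource class, the schedulable amount is (total − reserved) × allocation_ratio. A short worked example over exactly the values in that log line:

```python
# Capacity arithmetic behind the placement inventory reported above.
inventory = {
    "VCPU":      {"total": 8,     "reserved": 0,   "allocation_ratio": 16.0},
    "MEMORY_MB": {"total": 15738, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 41,    "reserved": 1,   "allocation_ratio": 1.0},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# VCPU: 128, MEMORY_MB: 15226, DISK_GB: 40
```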
Dec 02 10:19:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:16.744 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:19:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:16.744 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:19:17 np0005541914.localdomain ceph-mon[301710]: pgmap v772: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3626538401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:19:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:17.263 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v773: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:19:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:17.726 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:19:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:17.726 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:19:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:18.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:19:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:18.527 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:19:19 np0005541914.localdomain ceph-mon[301710]: pgmap v773: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v774: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:19.717 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:21 np0005541914.localdomain ceph-mon[301710]: pgmap v774: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v775: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:22 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:22.317 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:19:23 np0005541914.localdomain ceph-mon[301710]: pgmap v775: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v776: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:24 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:24.755 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:25 np0005541914.localdomain ceph-mon[301710]: pgmap v776: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v777: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:26 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2958011048' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:19:26 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2436311479' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:19:27 np0005541914.localdomain ceph-mon[301710]: pgmap v777: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:27.362 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v778: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:19:28 np0005541914.localdomain ceph-mon[301710]: pgmap v778: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v779: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:29 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:29.831 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:30 np0005541914.localdomain ceph-mon[301710]: pgmap v779: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v780: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:32.411 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:19:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:19:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:19:32 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:19:33 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:19:33 np0005541914.localdomain ceph-mon[301710]: pgmap v780: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:33 np0005541914.localdomain podman[329213]: 2025-12-02 10:19:33.089372037 +0000 UTC m=+0.080323461 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:19:33 np0005541914.localdomain podman[329213]: 2025-12-02 10:19:33.100356014 +0000 UTC m=+0.091307438 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:19:33 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:19:33 np0005541914.localdomain systemd[1]: tmp-crun.DPwGcr.mount: Deactivated successfully.
Dec 02 10:19:33 np0005541914.localdomain podman[329214]: 2025-12-02 10:19:33.165001634 +0000 UTC m=+0.150779310 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:19:33 np0005541914.localdomain podman[329211]: 2025-12-02 10:19:33.199607995 +0000 UTC m=+0.196579704 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:19:33 np0005541914.localdomain podman[329211]: 2025-12-02 10:19:33.208757135 +0000 UTC m=+0.205728844 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 02 10:19:33 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:19:33 np0005541914.localdomain podman[329212]: 2025-12-02 10:19:33.250430032 +0000 UTC m=+0.244844163 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:19:33 np0005541914.localdomain podman[329212]: 2025-12-02 10:19:33.259449148 +0000 UTC m=+0.253863279 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec 02 10:19:33 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:19:33 np0005541914.localdomain podman[329214]: 2025-12-02 10:19:33.280338928 +0000 UTC m=+0.266116604 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, org.label-schema.build-date=20251125)
Dec 02 10:19:33 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:19:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v781: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:19:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:19:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:19:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:19:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:19:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19268 "" "Go-http-client/1.1"
Dec 02 10:19:34 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:34.836 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:35 np0005541914.localdomain ceph-mon[301710]: pgmap v781: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v782: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:35 np0005541914.localdomain sshd[329292]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:19:36 np0005541914.localdomain sshd[329292]: Accepted publickey for zuul from 38.102.83.114 port 60994 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 10:19:36 np0005541914.localdomain systemd-logind[760]: New session 74 of user zuul.
Dec 02 10:19:36 np0005541914.localdomain systemd[1]: Started Session 74 of User zuul.
Dec 02 10:19:36 np0005541914.localdomain sshd[329292]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 10:19:36 np0005541914.localdomain sudo[329312]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yafzlhsnycrfifkgoypwmepyifhjnfwy ; /usr/bin/python3
Dec 02 10:19:36 np0005541914.localdomain sudo[329312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 10:19:36 np0005541914.localdomain python3[329314]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=fa163e3b-3c83-cf35-eb71-00000000000c-1-overcloudnovacompute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 02 10:19:36 np0005541914.localdomain sudo[329312]: pam_unix(sudo:session): session closed for user root
Dec 02 10:19:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:19:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fd397e9aeb0>)]
Dec 02 10:19:36 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 02 10:19:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:19:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fd37d53ceb0>), ('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fd3b07be3d0>)]
Dec 02 10:19:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 02 10:19:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:19:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: pgmap v782: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.071573) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777071636, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1474, "num_deletes": 253, "total_data_size": 2571981, "memory_usage": 2646064, "flush_reason": "Manual Compaction"}
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777081892, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 1691608, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35575, "largest_seqno": 37044, "table_properties": {"data_size": 1685803, "index_size": 3083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13812, "raw_average_key_size": 21, "raw_value_size": 1673636, "raw_average_value_size": 2570, "num_data_blocks": 131, "num_entries": 651, "num_filter_entries": 651, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764670681, "oldest_key_time": 1764670681, "file_creation_time": 1764670777, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 10358 microseconds, and 3868 cpu microseconds.
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.081935) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 1691608 bytes OK
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.081957) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.083505) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.083518) EVENT_LOG_v1 {"time_micros": 1764670777083514, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.083540) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 2564890, prev total WAL file size 2564890, number of live WAL files 2.
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.084131) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003133333033' seq:72057594037927935, type:22 .. '7061786F73003133353535' seq:0, type:0; will stop at (end)
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(1651KB)], [57(17MB)]
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777084188, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 20383302, "oldest_snapshot_seqno": -1}
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 14677 keys, 19065285 bytes, temperature: kUnknown
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777181544, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 19065285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18980345, "index_size": 47143, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36741, "raw_key_size": 393590, "raw_average_key_size": 26, "raw_value_size": 18730054, "raw_average_value_size": 1276, "num_data_blocks": 1761, "num_entries": 14677, "num_filter_entries": 14677, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764669502, "oldest_key_time": 0, "file_creation_time": 1764670777, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2a601a42-6d19-4945-9484-73e64f055198", "db_session_id": "O7EMRIXC8F5M1Z077C5B", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.181807) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 19065285 bytes
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.184679) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 209.2 rd, 195.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 17.8 +0.0 blob) out(18.2 +0.0 blob), read-write-amplify(23.3) write-amplify(11.3) OK, records in: 15213, records dropped: 536 output_compression: NoCompression
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.184705) EVENT_LOG_v1 {"time_micros": 1764670777184694, "job": 34, "event": "compaction_finished", "compaction_time_micros": 97426, "compaction_time_cpu_micros": 46086, "output_level": 6, "num_output_files": 1, "total_output_size": 19065285, "num_input_records": 15213, "num_output_records": 14677, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777185024, "job": 34, "event": "table_file_deletion", "file_number": 59}
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005541914/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764670777187487, "job": 34, "event": "table_file_deletion", "file_number": 57}
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.084033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.187545) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.187551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.187554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.187557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: rocksdb: (Original Log Time 2025/12/02-10:19:37.187560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 02 10:19:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v783: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:37.466 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:19:37 np0005541914.localdomain ovn_controller[153778]: 2025-12-02T10:19:37Z|00241|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Dec 02 10:19:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:19:38 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:19:38 np0005541914.localdomain podman[329317]: 2025-12-02 10:19:38.6017252 +0000 UTC m=+0.094230819 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec 02 10:19:38 np0005541914.localdomain podman[329318]: 2025-12-02 10:19:38.650478763 +0000 UTC m=+0.138593937 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Dec 02 10:19:38 np0005541914.localdomain podman[329318]: 2025-12-02 10:19:38.661994547 +0000 UTC m=+0.150109711 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, version=9.6, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Dec 02 10:19:38 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:19:38 np0005541914.localdomain podman[329317]: 2025-12-02 10:19:38.717999443 +0000 UTC m=+0.210505062 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:19:38 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:19:39 np0005541914.localdomain ceph-mon[301710]: pgmap v783: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:39 np0005541914.localdomain ceph-mon[301710]: mgrmap e54: np0005541914.lljzmk(active, since 19m), standbys: np0005541912.qwddia, np0005541913.mfesdm
Dec 02 10:19:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v784: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 02 10:19:39 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:39.872 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:41 np0005541914.localdomain ceph-mon[301710]: pgmap v784: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 02 10:19:41 np0005541914.localdomain sshd[329292]: pam_unix(sshd:session): session closed for user zuul
Dec 02 10:19:41 np0005541914.localdomain systemd[1]: session-74.scope: Deactivated successfully.
Dec 02 10:19:41 np0005541914.localdomain systemd-logind[760]: Session 74 logged out. Waiting for processes to exit.
Dec 02 10:19:41 np0005541914.localdomain systemd-logind[760]: Removed session 74.
Dec 02 10:19:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v785: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 02 10:19:42 np0005541914.localdomain sudo[329362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:19:42 np0005541914.localdomain sudo[329362]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:19:42 np0005541914.localdomain sudo[329362]: pam_unix(sudo:session): session closed for user root
Dec 02 10:19:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:19:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:19:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:19:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:19:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:19:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:19:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:19:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:19:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:19:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:19:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:19:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:19:42 np0005541914.localdomain sudo[329380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:19:42 np0005541914.localdomain sudo[329380]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:19:42 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:42.505 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:19:42 np0005541914.localdomain sudo[329380]: pam_unix(sudo:session): session closed for user root
Dec 02 10:19:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:19:42 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:19:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:19:42 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:19:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:19:42 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev d981bc46-e489-47e7-ab06-cf577cde6f0f (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:19:42 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev d981bc46-e489-47e7-ab06-cf577cde6f0f (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:19:42 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event d981bc46-e489-47e7-ab06-cf577cde6f0f (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:19:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:19:42 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:19:43 np0005541914.localdomain ceph-mon[301710]: pgmap v785: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 02 10:19:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:19:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:19:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:19:43 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:19:43 np0005541914.localdomain sudo[329429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:19:43 np0005541914.localdomain sudo[329429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:19:43 np0005541914.localdomain sudo[329429]: pam_unix(sudo:session): session closed for user root
Dec 02 10:19:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v786: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 02 10:19:44 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:44.912 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:44 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:19:45 np0005541914.localdomain podman[329447]: 2025-12-02 10:19:45.099517445 +0000 UTC m=+0.094246949 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=multipathd)
Dec 02 10:19:45 np0005541914.localdomain podman[329447]: 2025-12-02 10:19:45.113973638 +0000 UTC m=+0.108703182 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 10:19:45 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:19:45 np0005541914.localdomain ceph-mon[301710]: pgmap v786: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 02 10:19:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v787: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 02 10:19:46 np0005541914.localdomain sshd[329466]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:19:47 np0005541914.localdomain ceph-mon[301710]: pgmap v787: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 02 10:19:47 np0005541914.localdomain sshd[329466]: Invalid user sol from 193.32.162.146 port 35734
Dec 02 10:19:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v788: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 02 10:19:47 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:19:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:19:47 np0005541914.localdomain sshd[329466]: Connection closed by invalid user sol 193.32.162.146 port 35734 [preauth]
Dec 02 10:19:47 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:47.508 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:19:48 np0005541914.localdomain ceph-mon[301710]: pgmap v788: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 02 10:19:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:19:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v789: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 02 10:19:49 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:49.957 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:50 np0005541914.localdomain ceph-mon[301710]: pgmap v789: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Dec 02 10:19:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v790: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Dec 02 10:19:52 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:52.545 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:19:53 np0005541914.localdomain ceph-mon[301710]: pgmap v790: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Dec 02 10:19:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v791: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Dec 02 10:19:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:55.010 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:55 np0005541914.localdomain ceph-mon[301710]: pgmap v791: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Dec 02 10:19:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v792: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:57 np0005541914.localdomain ceph-mon[301710]: pgmap v792: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v793: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:19:57 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:19:57.592 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:19:59 np0005541914.localdomain ceph-mon[301710]: pgmap v793: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:19:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v794: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:00.042 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:00 np0005541914.localdomain ceph-mon[301710]: overall HEALTH_OK
Dec 02 10:20:01 np0005541914.localdomain ceph-mon[301710]: pgmap v794: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v795: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:02 np0005541914.localdomain sshd[329469]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:20:02 np0005541914.localdomain sshd[329469]: Accepted publickey for zuul from 38.102.83.114 port 43484 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 10:20:02 np0005541914.localdomain systemd-logind[760]: New session 75 of user zuul.
Dec 02 10:20:02 np0005541914.localdomain systemd[1]: Started Session 75 of User zuul.
Dec 02 10:20:02 np0005541914.localdomain sshd[329469]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 10:20:02 np0005541914.localdomain sudo[329473]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /var/log
Dec 02 10:20:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:20:02 np0005541914.localdomain sudo[329473]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 10:20:02 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:02.649 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:20:03.190 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:20:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:20:03.190 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:20:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:20:03.190 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:20:03 np0005541914.localdomain ceph-mon[301710]: pgmap v795: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:03 np0005541914.localdomain sudo[329473]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:03 np0005541914.localdomain sshd[329472]: Received disconnect from 38.102.83.114 port 43484:11: disconnected by user
Dec 02 10:20:03 np0005541914.localdomain sshd[329472]: Disconnected from user zuul 38.102.83.114 port 43484
Dec 02 10:20:03 np0005541914.localdomain sshd[329469]: pam_unix(sshd:session): session closed for user zuul
Dec 02 10:20:03 np0005541914.localdomain systemd[1]: session-75.scope: Deactivated successfully.
Dec 02 10:20:03 np0005541914.localdomain systemd-logind[760]: Session 75 logged out. Waiting for processes to exit.
Dec 02 10:20:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:20:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:20:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:20:03 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:20:03 np0005541914.localdomain systemd-logind[760]: Removed session 75.
Dec 02 10:20:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v796: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:03 np0005541914.localdomain systemd[1]: tmp-crun.awMLnw.mount: Deactivated successfully.
Dec 02 10:20:03 np0005541914.localdomain podman[329499]: 2025-12-02 10:20:03.499156941 +0000 UTC m=+0.072109931 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 02 10:20:03 np0005541914.localdomain podman[329492]: 2025-12-02 10:20:03.551910966 +0000 UTC m=+0.128736205 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec 02 10:20:03 np0005541914.localdomain podman[329492]: 2025-12-02 10:20:03.560910183 +0000 UTC m=+0.137735412 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:20:03 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:20:03 np0005541914.localdomain podman[329491]: 2025-12-02 10:20:03.61077606 +0000 UTC m=+0.190508138 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 02 10:20:03 np0005541914.localdomain podman[329499]: 2025-12-02 10:20:03.61635008 +0000 UTC m=+0.189303040 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Dec 02 10:20:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:20:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:20:03 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:20:03 np0005541914.localdomain sshd[329567]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:20:03 np0005541914.localdomain podman[329493]: 2025-12-02 10:20:03.72597391 +0000 UTC m=+0.297305741 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Dec 02 10:20:03 np0005541914.localdomain podman[329493]: 2025-12-02 10:20:03.742984131 +0000 UTC m=+0.314315912 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:20:03 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:20:03 np0005541914.localdomain podman[329491]: 2025-12-02 10:20:03.795798638 +0000 UTC m=+0.375530666 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Dec 02 10:20:03 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:20:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:20:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:20:03 np0005541914.localdomain sshd[329567]: Accepted publickey for zuul from 38.102.83.114 port 43492 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 10:20:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:20:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19267 "" "Go-http-client/1.1"
Dec 02 10:20:03 np0005541914.localdomain systemd-logind[760]: New session 76 of user zuul.
Dec 02 10:20:03 np0005541914.localdomain systemd[1]: Started Session 76 of User zuul.
Dec 02 10:20:03 np0005541914.localdomain sshd[329567]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 10:20:03 np0005541914.localdomain sudo[329577]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/containers/networks
Dec 02 10:20:03 np0005541914.localdomain sudo[329577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 10:20:04 np0005541914.localdomain sudo[329577]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:04 np0005541914.localdomain sshd[329576]: Received disconnect from 38.102.83.114 port 43492:11: disconnected by user
Dec 02 10:20:04 np0005541914.localdomain sshd[329576]: Disconnected from user zuul 38.102.83.114 port 43492
Dec 02 10:20:04 np0005541914.localdomain sshd[329567]: pam_unix(sshd:session): session closed for user zuul
Dec 02 10:20:04 np0005541914.localdomain systemd[1]: session-76.scope: Deactivated successfully.
Dec 02 10:20:04 np0005541914.localdomain systemd-logind[760]: Session 76 logged out. Waiting for processes to exit.
Dec 02 10:20:04 np0005541914.localdomain systemd-logind[760]: Removed session 76.
Dec 02 10:20:04 np0005541914.localdomain sshd[329595]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:20:04 np0005541914.localdomain ceph-mon[301710]: pgmap v796: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:04 np0005541914.localdomain sshd[329595]: Accepted publickey for zuul from 38.102.83.114 port 43504 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 10:20:04 np0005541914.localdomain systemd-logind[760]: New session 77 of user zuul.
Dec 02 10:20:04 np0005541914.localdomain systemd[1]: Started Session 77 of User zuul.
Dec 02 10:20:04 np0005541914.localdomain sshd[329595]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 10:20:04 np0005541914.localdomain sudo[329599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/containers/containers.conf
Dec 02 10:20:04 np0005541914.localdomain sudo[329599]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 10:20:04 np0005541914.localdomain sudo[329599]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:04 np0005541914.localdomain sshd[329598]: Received disconnect from 38.102.83.114 port 43504:11: disconnected by user
Dec 02 10:20:04 np0005541914.localdomain sshd[329598]: Disconnected from user zuul 38.102.83.114 port 43504
Dec 02 10:20:04 np0005541914.localdomain sshd[329595]: pam_unix(sshd:session): session closed for user zuul
Dec 02 10:20:04 np0005541914.localdomain systemd[1]: session-77.scope: Deactivated successfully.
Dec 02 10:20:04 np0005541914.localdomain systemd-logind[760]: Session 77 logged out. Waiting for processes to exit.
Dec 02 10:20:04 np0005541914.localdomain systemd-logind[760]: Removed session 77.
Dec 02 10:20:04 np0005541914.localdomain sshd[329617]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:20:05 np0005541914.localdomain sshd[329617]: Accepted publickey for zuul from 38.102.83.114 port 43510 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 10:20:05 np0005541914.localdomain systemd-logind[760]: New session 78 of user zuul.
Dec 02 10:20:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:05.079 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:05 np0005541914.localdomain systemd[1]: Started Session 78 of User zuul.
Dec 02 10:20:05 np0005541914.localdomain sshd[329617]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 10:20:05 np0005541914.localdomain sudo[329621]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/ceph
Dec 02 10:20:05 np0005541914.localdomain sudo[329621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 10:20:05 np0005541914.localdomain sudo[329621]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:05 np0005541914.localdomain sshd[329620]: Received disconnect from 38.102.83.114 port 43510:11: disconnected by user
Dec 02 10:20:05 np0005541914.localdomain sshd[329620]: Disconnected from user zuul 38.102.83.114 port 43510
Dec 02 10:20:05 np0005541914.localdomain sshd[329617]: pam_unix(sshd:session): session closed for user zuul
Dec 02 10:20:05 np0005541914.localdomain systemd[1]: session-78.scope: Deactivated successfully.
Dec 02 10:20:05 np0005541914.localdomain systemd-logind[760]: Session 78 logged out. Waiting for processes to exit.
Dec 02 10:20:05 np0005541914.localdomain systemd-logind[760]: Removed session 78.
Dec 02 10:20:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1234616018' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:20:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/1234616018' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:20:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v797: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:05 np0005541914.localdomain sshd[329639]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:20:05 np0005541914.localdomain sshd[329639]: Accepted publickey for zuul from 38.102.83.114 port 43520 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 10:20:05 np0005541914.localdomain systemd-logind[760]: New session 79 of user zuul.
Dec 02 10:20:05 np0005541914.localdomain systemd[1]: Started Session 79 of User zuul.
Dec 02 10:20:05 np0005541914.localdomain sshd[329639]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 10:20:05 np0005541914.localdomain sudo[329643]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/ci
Dec 02 10:20:05 np0005541914.localdomain sudo[329643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 10:20:05 np0005541914.localdomain sudo[329643]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:05 np0005541914.localdomain sshd[329642]: Received disconnect from 38.102.83.114 port 43520:11: disconnected by user
Dec 02 10:20:05 np0005541914.localdomain sshd[329642]: Disconnected from user zuul 38.102.83.114 port 43520
Dec 02 10:20:05 np0005541914.localdomain sshd[329639]: pam_unix(sshd:session): session closed for user zuul
Dec 02 10:20:05 np0005541914.localdomain systemd[1]: session-79.scope: Deactivated successfully.
Dec 02 10:20:05 np0005541914.localdomain systemd-logind[760]: Session 79 logged out. Waiting for processes to exit.
Dec 02 10:20:05 np0005541914.localdomain systemd-logind[760]: Removed session 79.
Dec 02 10:20:06 np0005541914.localdomain sshd[329661]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:20:06 np0005541914.localdomain sshd[329661]: Accepted publickey for zuul from 38.102.83.114 port 43528 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 10:20:06 np0005541914.localdomain systemd-logind[760]: New session 80 of user zuul.
Dec 02 10:20:06 np0005541914.localdomain systemd[1]: Started Session 80 of User zuul.
Dec 02 10:20:06 np0005541914.localdomain sshd[329661]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 10:20:06 np0005541914.localdomain ceph-mon[301710]: pgmap v797: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:06 np0005541914.localdomain sudo[329665]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/yum.conf
Dec 02 10:20:06 np0005541914.localdomain sudo[329665]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 10:20:06 np0005541914.localdomain sudo[329665]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:06 np0005541914.localdomain sshd[329664]: Received disconnect from 38.102.83.114 port 43528:11: disconnected by user
Dec 02 10:20:06 np0005541914.localdomain sshd[329664]: Disconnected from user zuul 38.102.83.114 port 43528
Dec 02 10:20:06 np0005541914.localdomain sshd[329661]: pam_unix(sshd:session): session closed for user zuul
Dec 02 10:20:06 np0005541914.localdomain systemd[1]: session-80.scope: Deactivated successfully.
Dec 02 10:20:06 np0005541914.localdomain systemd-logind[760]: Session 80 logged out. Waiting for processes to exit.
Dec 02 10:20:06 np0005541914.localdomain systemd-logind[760]: Removed session 80.
Dec 02 10:20:06 np0005541914.localdomain sshd[329683]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:20:06 np0005541914.localdomain sshd[329683]: Accepted publickey for zuul from 38.102.83.114 port 43542 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 10:20:06 np0005541914.localdomain systemd-logind[760]: New session 81 of user zuul.
Dec 02 10:20:06 np0005541914.localdomain systemd[1]: Started Session 81 of User zuul.
Dec 02 10:20:06 np0005541914.localdomain sshd[329683]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 10:20:06 np0005541914.localdomain sudo[329687]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/yum.repos.d
Dec 02 10:20:06 np0005541914.localdomain sudo[329687]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 10:20:06 np0005541914.localdomain sudo[329687]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:06 np0005541914.localdomain sshd[329686]: Received disconnect from 38.102.83.114 port 43542:11: disconnected by user
Dec 02 10:20:06 np0005541914.localdomain sshd[329686]: Disconnected from user zuul 38.102.83.114 port 43542
Dec 02 10:20:06 np0005541914.localdomain sshd[329683]: pam_unix(sshd:session): session closed for user zuul
Dec 02 10:20:06 np0005541914.localdomain systemd[1]: session-81.scope: Deactivated successfully.
Dec 02 10:20:06 np0005541914.localdomain systemd-logind[760]: Session 81 logged out. Waiting for processes to exit.
Dec 02 10:20:06 np0005541914.localdomain systemd-logind[760]: Removed session 81.
Dec 02 10:20:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:20:06
Dec 02 10:20:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:20:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:20:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['manila_metadata', 'volumes', 'vms', 'manila_data', '.mgr', 'backups', 'images']
Dec 02 10:20:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:20:07 np0005541914.localdomain sshd[329705]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:20:07 np0005541914.localdomain sshd[329705]: Accepted publickey for zuul from 38.102.83.114 port 43554 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 10:20:07 np0005541914.localdomain systemd-logind[760]: New session 82 of user zuul.
Dec 02 10:20:07 np0005541914.localdomain systemd[1]: Started Session 82 of User zuul.
Dec 02 10:20:07 np0005541914.localdomain sshd[329705]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v798: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002499749179927415 of space, bias 4.0, pg target 1.9898003472222223 quantized to 16 (current 16)
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:20:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:20:07 np0005541914.localdomain sudo[329709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/os-net-config
Dec 02 10:20:07 np0005541914.localdomain sudo[329709]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 10:20:07 np0005541914.localdomain sudo[329709]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:07 np0005541914.localdomain sshd[329708]: Received disconnect from 38.102.83.114 port 43554:11: disconnected by user
Dec 02 10:20:07 np0005541914.localdomain sshd[329708]: Disconnected from user zuul 38.102.83.114 port 43554
Dec 02 10:20:07 np0005541914.localdomain sshd[329705]: pam_unix(sshd:session): session closed for user zuul
Dec 02 10:20:07 np0005541914.localdomain systemd[1]: session-82.scope: Deactivated successfully.
Dec 02 10:20:07 np0005541914.localdomain systemd-logind[760]: Session 82 logged out. Waiting for processes to exit.
Dec 02 10:20:07 np0005541914.localdomain systemd-logind[760]: Removed session 82.
Dec 02 10:20:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:20:07 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:07.700 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:07 np0005541914.localdomain sshd[329727]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:20:07 np0005541914.localdomain sshd[329727]: Accepted publickey for zuul from 38.102.83.114 port 43560 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 10:20:07 np0005541914.localdomain systemd-logind[760]: New session 83 of user zuul.
Dec 02 10:20:07 np0005541914.localdomain systemd[1]: Started Session 83 of User zuul.
Dec 02 10:20:07 np0005541914.localdomain sshd[329727]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 10:20:08 np0005541914.localdomain sudo[329731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /home/zuul/ansible_hostname
Dec 02 10:20:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1049435799' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:20:08 np0005541914.localdomain sudo[329731]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 10:20:08 np0005541914.localdomain sudo[329731]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:08 np0005541914.localdomain sshd[329730]: Received disconnect from 38.102.83.114 port 43560:11: disconnected by user
Dec 02 10:20:08 np0005541914.localdomain sshd[329730]: Disconnected from user zuul 38.102.83.114 port 43560
Dec 02 10:20:08 np0005541914.localdomain sshd[329727]: pam_unix(sshd:session): session closed for user zuul
Dec 02 10:20:08 np0005541914.localdomain systemd[1]: session-83.scope: Deactivated successfully.
Dec 02 10:20:08 np0005541914.localdomain systemd-logind[760]: Session 83 logged out. Waiting for processes to exit.
Dec 02 10:20:08 np0005541914.localdomain systemd-logind[760]: Removed session 83.
Dec 02 10:20:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:20:08 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:20:09 np0005541914.localdomain systemd[1]: tmp-crun.RqJpEt.mount: Deactivated successfully.
Dec 02 10:20:09 np0005541914.localdomain podman[329749]: 2025-12-02 10:20:09.089002657 +0000 UTC m=+0.090283437 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:20:09 np0005541914.localdomain podman[329749]: 2025-12-02 10:20:09.100125588 +0000 UTC m=+0.101406368 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec 02 10:20:09 np0005541914.localdomain ceph-mon[301710]: pgmap v798: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3296878270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:20:09 np0005541914.localdomain systemd[1]: tmp-crun.NoO0Is.mount: Deactivated successfully.
Dec 02 10:20:09 np0005541914.localdomain podman[329750]: 2025-12-02 10:20:09.138224865 +0000 UTC m=+0.139823245 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, distribution-scope=public)
Dec 02 10:20:09 np0005541914.localdomain podman[329750]: 2025-12-02 10:20:09.152764331 +0000 UTC m=+0.154362701 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., version=9.6)
Dec 02 10:20:09 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:20:09 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:20:09 np0005541914.localdomain ceph-mgr[287188]: [devicehealth INFO root] Check health
Dec 02 10:20:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v799: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:10.122 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:11 np0005541914.localdomain ceph-mon[301710]: pgmap v799: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v800: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:11.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:20:11 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:11.529 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:20:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:20:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:20:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:20:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:20:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:20:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:20:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:20:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:20:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:20:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:20:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:12.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:20:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:20:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:12.735 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:13 np0005541914.localdomain ceph-mon[301710]: pgmap v800: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v801: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:15.168 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:15 np0005541914.localdomain ceph-mon[301710]: pgmap v801: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v802: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.444 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.445 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.446 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.447 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.448 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain ceilometer_agent_compute[237061]: 2025-12-02 10:20:15.449 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Dec 02 10:20:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:15.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:20:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:15.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:20:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:15.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:20:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:15.550 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:20:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:15.551 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:20:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:15.551 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:20:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:15.551 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:20:15 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:20:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:20:15 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/4137885901' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:20:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:16.003 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:20:16 np0005541914.localdomain podman[329812]: 2025-12-02 10:20:16.098942303 +0000 UTC m=+0.099135808 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:20:16 np0005541914.localdomain podman[329812]: 2025-12-02 10:20:16.112058345 +0000 UTC m=+0.112251830 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec 02 10:20:16 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:20:16 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/4137885901' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:20:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:16.228 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:20:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:16.228 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11335MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:20:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:16.229 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:20:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:16.229 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:20:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:16.281 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:20:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:16.282 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:20:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:16.299 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:20:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:20:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2506910296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:20:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:16.731 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:20:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:16.737 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:20:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:16.758 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:20:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:16.761 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:20:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:16.761 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:20:17 np0005541914.localdomain ceph-mon[301710]: pgmap v802: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2506910296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:20:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v803: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:20:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:17.778 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:18.762 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:20:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:18.763 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:20:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:18.763 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:20:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:18.785 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:20:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:18.785 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:20:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:18.786 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:20:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:18.786 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:20:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:18.786 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:20:19 np0005541914.localdomain ceph-mon[301710]: pgmap v803: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v804: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:20 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:20.174 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:21 np0005541914.localdomain ceph-mon[301710]: pgmap v804: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:21 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v805: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:22 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:20:22 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:22.821 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:23 np0005541914.localdomain ceph-mon[301710]: pgmap v805: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:23 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v806: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:24 np0005541914.localdomain ceph-mon[301710]: pgmap v806: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:25 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:25.176 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:25 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v807: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:26 np0005541914.localdomain ceph-mon[301710]: pgmap v807: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:26 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:26.547 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:20:27 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v808: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:27 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:20:27 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:27.858 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:28 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1038183216' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:20:29 np0005541914.localdomain ceph-mon[301710]: pgmap v808: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:29 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2708879317' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:20:29 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v809: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:30 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:30.180 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:31 np0005541914.localdomain ceph-mon[301710]: pgmap v809: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:31 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v810: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:32 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:20:32 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:32.888 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:33 np0005541914.localdomain ceph-mon[301710]: pgmap v810: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:33 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v811: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:33 np0005541914.localdomain podman[239757]: time="2025-12-02T10:20:33Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:20:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:20:33 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:20:33 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:20:33 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19264 "" "Go-http-client/1.1"
Dec 02 10:20:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:20:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:20:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:20:34 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:20:34 np0005541914.localdomain podman[329857]: 2025-12-02 10:20:34.104091402 +0000 UTC m=+0.062043042 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec 02 10:20:34 np0005541914.localdomain podman[329857]: 2025-12-02 10:20:34.123994793 +0000 UTC m=+0.081946483 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm)
Dec 02 10:20:34 np0005541914.localdomain systemd[1]: tmp-crun.NVxSJm.mount: Deactivated successfully.
Dec 02 10:20:34 np0005541914.localdomain podman[329856]: 2025-12-02 10:20:34.133072041 +0000 UTC m=+0.089161543 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:20:34 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:20:34 np0005541914.localdomain podman[329856]: 2025-12-02 10:20:34.146829132 +0000 UTC m=+0.102918644 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:20:34 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:20:34 np0005541914.localdomain podman[329859]: 2025-12-02 10:20:34.192608105 +0000 UTC m=+0.144617633 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_controller)
Dec 02 10:20:34 np0005541914.localdomain podman[329855]: 2025-12-02 10:20:34.254278674 +0000 UTC m=+0.215705300 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 02 10:20:34 np0005541914.localdomain podman[329859]: 2025-12-02 10:20:34.275933888 +0000 UTC m=+0.227943456 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 02 10:20:34 np0005541914.localdomain podman[329855]: 2025-12-02 10:20:34.286661737 +0000 UTC m=+0.248088373 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 02 10:20:34 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:20:34 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:20:35 np0005541914.localdomain ceph-mon[301710]: pgmap v811: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:35 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:35.212 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:35 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v812: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:20:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:20:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:20:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:20:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:20:37 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:20:37 np0005541914.localdomain ceph-mon[301710]: pgmap v812: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:37 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v813: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:37 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:20:37 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:37.950 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:39 np0005541914.localdomain ceph-mon[301710]: pgmap v813: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:39 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v814: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:20:39 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:20:40 np0005541914.localdomain podman[329936]: 2025-12-02 10:20:40.085209597 +0000 UTC m=+0.085660985 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:20:40 np0005541914.localdomain podman[329936]: 2025-12-02 10:20:40.095045149 +0000 UTC m=+0.095496577 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec 02 10:20:40 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:20:40 np0005541914.localdomain podman[329937]: 2025-12-02 10:20:40.20214899 +0000 UTC m=+0.198744750 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, config_id=edpm, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec 02 10:20:40 np0005541914.localdomain podman[329937]: 2025-12-02 10:20:40.21390002 +0000 UTC m=+0.210495780 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec 02 10:20:40 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:40.257 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:40 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:20:41 np0005541914.localdomain ceph-mon[301710]: pgmap v814: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:41 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v815: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:20:42 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:20:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:20:42 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:20:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:20:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:20:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:20:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:20:42 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:20:42 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:20:42 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:20:42 np0005541914.localdomain openstack_network_exporter[241816]: 
Dec 02 10:20:42 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:20:43 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:43.001 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:43 np0005541914.localdomain ceph-mon[301710]: pgmap v815: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:43 np0005541914.localdomain sudo[329979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:20:43 np0005541914.localdomain sudo[329979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:20:43 np0005541914.localdomain sudo[329979]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:43 np0005541914.localdomain sudo[329997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Dec 02 10:20:43 np0005541914.localdomain sudo[329997]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:20:43 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v816: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:44 np0005541914.localdomain systemd[1]: tmp-crun.sWqJjB.mount: Deactivated successfully.
Dec 02 10:20:44 np0005541914.localdomain podman[330090]: 2025-12-02 10:20:44.248385873 +0000 UTC m=+0.113204310 container exec 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1763362218, name=rhceph, build-date=2025-11-26T19:44:28Z, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.buildah.version=1.41.4, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, version=7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec 02 10:20:44 np0005541914.localdomain podman[330090]: 2025-12-02 10:20:44.388116033 +0000 UTC m=+0.252934130 container exec_died 306e3f591111ae55ed409f76249370397a97aa050a74909938a93c200c45d81c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-crash-np0005541914, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, build-date=2025-11-26T19:44:28Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.revision=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.buildah.version=1.41.4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.expose-services=, release=1763362218, io.openshift.tags=rhceph ceph, vcs-ref=09e5383fa24dada2ef392e4f10e9f5d0a9ef83f0)
Dec 02 10:20:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain.devices.0}] v 0)
Dec 02 10:20:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain.devices.0}] v 0)
Dec 02 10:20:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541912.localdomain}] v 0)
Dec 02 10:20:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541913.localdomain}] v 0)
Dec 02 10:20:44 np0005541914.localdomain sudo[329997]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:44 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain.devices.0}] v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005541914.localdomain}] v 0)
Dec 02 10:20:45 np0005541914.localdomain sudo[330209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 02 10:20:45 np0005541914.localdomain sudo[330209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:20:45 np0005541914.localdomain sudo[330209]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:45 np0005541914.localdomain sudo[330227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/c7c8e171-a193-56fb-95fa-8879fcfa7074/cephadm.a31fbded8455ec58c0f5dde8c3d5c4a1f59de5789b672b4f3ea9cd8d29a0d1c3 --timeout 895 gather-facts
Dec 02 10:20:45 np0005541914.localdomain sudo[330227]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: pgmap v816: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:20:45 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:45.306 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v817: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:45 np0005541914.localdomain sudo[330227]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: [cephadm INFO root] Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [INF] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] update: starting ev 5d357a87-69a7-4082-b8ea-75508b5e4d43 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] complete: finished ev 5d357a87-69a7-4082-b8ea-75508b5e4d43 (Updating node-proxy deployment (+3 -> 3))
Dec 02 10:20:45 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Completed event 5d357a87-69a7-4082-b8ea-75508b5e4d43 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 02 10:20:45 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain sudo[330278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 02 10:20:46 np0005541914.localdomain sudo[330278]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Dec 02 10:20:46 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:20:46 np0005541914.localdomain sudo[330278]: pam_unix(sudo:session): session closed for user root
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:20:46 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 172.18.0.108:0/2286681988' entity='mgr.np0005541914.lljzmk' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 02 10:20:46 np0005541914.localdomain podman[330296]: 2025-12-02 10:20:46.263278077 +0000 UTC m=+0.097364025 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd)
Dec 02 10:20:46 np0005541914.localdomain podman[330296]: 2025-12-02 10:20:46.280840734 +0000 UTC m=+0.114926692 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 02 10:20:46 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:20:47 np0005541914.localdomain ceph-mon[301710]: pgmap v817: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:47 np0005541914.localdomain ceph-mon[301710]: Adjusting osd_memory_target on np0005541912.localdomain to 836.6M
Dec 02 10:20:47 np0005541914.localdomain ceph-mon[301710]: Adjusting osd_memory_target on np0005541914.localdomain to 836.6M
Dec 02 10:20:47 np0005541914.localdomain ceph-mon[301710]: Adjusting osd_memory_target on np0005541913.localdomain to 836.6M
Dec 02 10:20:47 np0005541914.localdomain ceph-mon[301710]: Unable to set osd_memory_target on np0005541912.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:20:47 np0005541914.localdomain ceph-mon[301710]: Unable to set osd_memory_target on np0005541914.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:20:47 np0005541914.localdomain ceph-mon[301710]: Unable to set osd_memory_target on np0005541913.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Dec 02 10:20:47 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v818: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:47 np0005541914.localdomain ceph-mgr[287188]: [progress INFO root] Writing back 50 completed events
Dec 02 10:20:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 02 10:20:47 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:20:48 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:48.037 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:48 np0005541914.localdomain ceph-mon[301710]: pgmap v818: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:48 np0005541914.localdomain ceph-mon[301710]: from='mgr.34354 ' entity='mgr.np0005541914.lljzmk' 
Dec 02 10:20:49 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v819: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:50 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:50.343 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:50 np0005541914.localdomain ceph-mon[301710]: pgmap v819: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:51 np0005541914.localdomain sshd[330315]: main: sshd: ssh-rsa algorithm is disabled
Dec 02 10:20:51 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v820: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:51 np0005541914.localdomain sshd[330315]: Accepted publickey for zuul from 192.168.122.10 port 51898 ssh2: RSA SHA256:uKO0Ohw486fj8lQHxind1+ryY96O3+Z9KYMOgF8+dKU
Dec 02 10:20:51 np0005541914.localdomain systemd-logind[760]: New session 84 of user zuul.
Dec 02 10:20:51 np0005541914.localdomain systemd[1]: Started Session 84 of User zuul.
Dec 02 10:20:51 np0005541914.localdomain sshd[330315]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Dec 02 10:20:51 np0005541914.localdomain sudo[330319]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt
Dec 02 10:20:51 np0005541914.localdomain sudo[330319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Dec 02 10:20:52 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:20:53 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:53.040 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:53 np0005541914.localdomain ceph-mon[301710]: pgmap v820: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:53 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v821: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69428 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59014 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49287 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69434 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:54 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59020 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49293 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:55 np0005541914.localdomain ceph-mon[301710]: pgmap v821: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:55 np0005541914.localdomain ceph-mon[301710]: from='client.69428 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:55 np0005541914.localdomain ceph-mon[301710]: from='client.59014 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:55 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "status"} v 0)
Dec 02 10:20:55 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1735864119' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 02 10:20:55 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:55.343 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:55 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v822: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:56 np0005541914.localdomain ceph-mon[301710]: from='client.49287 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:56 np0005541914.localdomain ceph-mon[301710]: from='client.69434 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:56 np0005541914.localdomain ceph-mon[301710]: from='client.59020 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:56 np0005541914.localdomain ceph-mon[301710]: from='client.49293 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:56 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1735864119' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 02 10:20:56 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/618962487' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 02 10:20:56 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1347547795' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 02 10:20:57 np0005541914.localdomain ceph-mon[301710]: pgmap v822: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:57 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v823: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:57 np0005541914.localdomain ovs-vsctl[330571]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 02 10:20:57 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:20:58 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:20:58.042 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:20:58 np0005541914.localdomain virtqemud[228953]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 02 10:20:58 np0005541914.localdomain virtqemud[228953]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 02 10:20:58 np0005541914.localdomain virtqemud[228953]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 02 10:20:58 np0005541914.localdomain systemd[1]: efi.automount: Got automount request for /efi, triggered by 330723 (lsinitrd)
Dec 02 10:20:58 np0005541914.localdomain systemd[1]: Mounting EFI System Partition Automount...
Dec 02 10:20:58 np0005541914.localdomain systemd[1]: Mounted EFI System Partition Automount.
Dec 02 10:20:58 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj asok_command: cache status {prefix=cache status} (starting...)
Dec 02 10:20:58 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Can't run that command on an inactive MDS!
Dec 02 10:20:58 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj asok_command: client ls {prefix=client ls} (starting...)
Dec 02 10:20:58 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Can't run that command on an inactive MDS!
Dec 02 10:20:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69449 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:59 np0005541914.localdomain lvm[330817]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 02 10:20:59 np0005541914.localdomain lvm[330817]: VG ceph_vg1 finished
Dec 02 10:20:59 np0005541914.localdomain lvm[330821]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 02 10:20:59 np0005541914.localdomain lvm[330821]: VG ceph_vg0 finished
Dec 02 10:20:59 np0005541914.localdomain ceph-mon[301710]: pgmap v823: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59032 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v824: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:20:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69455 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:59 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj asok_command: damage ls {prefix=damage ls} (starting...)
Dec 02 10:20:59 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Can't run that command on an inactive MDS!
Dec 02 10:20:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49305 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:20:59 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj asok_command: dump loads {prefix=dump loads} (starting...)
Dec 02 10:20:59 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Can't run that command on an inactive MDS!
Dec 02 10:20:59 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 02 10:20:59 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Can't run that command on an inactive MDS!
Dec 02 10:20:59 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59038 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:00 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 02 10:21:00 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Can't run that command on an inactive MDS!
Dec 02 10:21:00 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "report"} v 0)
Dec 02 10:21:00 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2106594270' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 02 10:21:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49311 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:00 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 02 10:21:00 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Can't run that command on an inactive MDS!
Dec 02 10:21:00 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 02 10:21:00 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Can't run that command on an inactive MDS!
Dec 02 10:21:00 np0005541914.localdomain ceph-mon[301710]: from='client.69449 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:00 np0005541914.localdomain ceph-mon[301710]: from='client.? ' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 02 10:21:00 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2106594270' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 02 10:21:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69476 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:00 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 02 10:21:00 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:21:00.302+0000 7fd3aff61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 02 10:21:00 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:00.385 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:21:00 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 02 10:21:00 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3132060943' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:21:00 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 02 10:21:00 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Can't run that command on an inactive MDS!
Dec 02 10:21:00 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 02 10:21:00 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Can't run that command on an inactive MDS!
Dec 02 10:21:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59077 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:00 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:21:00.715+0000 7fd3aff61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 02 10:21:00 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 02 10:21:00 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 02 10:21:00 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1963989503' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 02 10:21:00 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config log"} v 0)
Dec 02 10:21:00 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/912277046' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 02 10:21:00 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj asok_command: ops {prefix=ops} (starting...)
Dec 02 10:21:00 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Can't run that command on an inactive MDS!
Dec 02 10:21:00 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49338 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:00 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:21:00.959+0000 7fd3aff61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 02 10:21:00 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1115069005' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2110934913' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.59032 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: pgmap v824: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.69455 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.49305 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.59038 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.49311 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/60804839' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.? ' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.69476 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3132060943' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3324992184' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/527587334' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.59077 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1963989503' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/912277046' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.49338 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/951073284' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1158635755' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/701367491' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1115069005' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v825: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:01 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj asok_command: session ls {prefix=session ls} (starting...)
Dec 02 10:21:01 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj Can't run that command on an inactive MDS!
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 02 10:21:01 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1820455341' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mds[285895]: mds.mds.np0005541914.sqgqkj asok_command: status {prefix=status} (starting...)
Dec 02 10:21:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69521 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:01 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49368 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69536 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1397926291' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59122 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2110934913' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3173019494' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1077184288' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/259684202' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: pgmap v825: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3223086368' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1820455341' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.69521 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.49368 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1882961530' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2070081632' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.69536 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/200524969' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1397926291' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.59122 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2258191271' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49386 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "features"} v 0)
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3350057924' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/513619678' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49401 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 02 10:21:02 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3178003001' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3115311484' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:03.082 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:21:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:21:03.191 159483 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:21:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:21:03.191 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:21:03 np0005541914.localdomain ovn_metadata_agent[159477]: 2025-12-02 10:21:03.191 159483 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:21:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69584 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:21:03.295+0000 7fd3aff61640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 02 10:21:03 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "features"} v 0)
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/1263238642' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.49386 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? ' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3350057924' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1839279825' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/513619678' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? ' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/162806466' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2821027336' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.49401 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3178003001' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3729005944' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3115311484' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/509267441' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3833727113' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? ' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1263238642' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2934133144' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v826: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59176 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:21:03.505+0000 7fd3aff61640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 02 10:21:03 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 02 10:21:03 np0005541914.localdomain podman[239757]: time="2025-12-02T10:21:03Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec 02 10:21:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:21:03 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 156746 "" "Go-http-client/1.1"
Dec 02 10:21:03 np0005541914.localdomain podman[239757]: @ - - [02/Dec/2025:10:21:03 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19268 "" "Go-http-client/1.1"
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 02 10:21:03 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/788720208' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 02 10:21:03 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69611 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49458 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:21:04.215+0000 7fd3aff61640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 02 10:21:04 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 02 10:21:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69626 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1120667361' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59206 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.69584 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2037890107' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2934133144' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: pgmap v826: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.59176 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1262229764' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/788720208' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1985353952' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.69611 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3132199302' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3308714736' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2320340967' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.49458 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.69626 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1120667361' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49467 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69635 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59224 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 02 10:21:04 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2446013239' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 02 10:21:04 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49491 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.
Dec 02 10:21:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.
Dec 02 10:21:04 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.
Dec 02 10:21:05 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.
Dec 02 10:21:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69656 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:47.607549+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94674944 unmapped: 1638400 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:48.607719+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94683136 unmapped: 1630208 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:49.607924+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94683136 unmapped: 1630208 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:50.608081+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94683136 unmapped: 1630208 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:51.608224+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 870661 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94683136 unmapped: 1630208 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:52.608379+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94683136 unmapped: 1630208 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:53.609114+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94683136 unmapped: 1630208 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:54.609280+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94683136 unmapped: 1630208 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:55.609526+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94683136 unmapped: 1630208 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:56.609721+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 870661 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94691328 unmapped: 1622016 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:57.609888+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94691328 unmapped: 1622016 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:58.615381+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94691328 unmapped: 1622016 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:59.615521+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94691328 unmapped: 1622016 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:00.615689+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94691328 unmapped: 1622016 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:01.615927+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 870661 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94691328 unmapped: 1622016 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:02.616063+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94691328 unmapped: 1622016 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:03.616238+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94691328 unmapped: 1622016 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:04.616411+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94699520 unmapped: 1613824 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:05.616590+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94699520 unmapped: 1613824 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:06.616749+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 870661 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94699520 unmapped: 1613824 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:07.616906+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94699520 unmapped: 1613824 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:08.617033+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94699520 unmapped: 1613824 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:09.617207+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94699520 unmapped: 1613824 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:10.617343+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94699520 unmapped: 1613824 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:11.617533+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 870661 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94699520 unmapped: 1613824 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:12.617721+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94707712 unmapped: 1605632 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:13.617913+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94707712 unmapped: 1605632 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:14.618130+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94707712 unmapped: 1605632 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:15.618306+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94707712 unmapped: 1605632 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:16.618537+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 870661 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94707712 unmapped: 1605632 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:17.618673+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94707712 unmapped: 1605632 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:18.619322+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94707712 unmapped: 1605632 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:19.619525+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94707712 unmapped: 1605632 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:20.619720+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:21.619882+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 870661 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:22.620043+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:23.620279+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:24.620581+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:25.620750+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:26.620893+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 870661 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:27.621073+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:28.621209+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:29.621343+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:30.621539+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:31.621695+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 870661 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:32.621828+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:33.622003+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b98bf000/0x0/0x1bfc00000, data 0x214a043/0x21cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:34.622182+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94724096 unmapped: 1589248 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:35.622331+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94732288 unmapped: 1581056 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:36.622516+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 38
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now 
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc reconnect Terminating session with v2:172.18.0.106:6810/2383186409
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc reconnect No active mgr available yet
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 ms_handle_reset con 0x562054ea7c00 session 0x562052bb0b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052629400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 86.816940308s of 86.822204590s, submitted: 1
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94511104 unmapped: 1802240 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:37.622631+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 39
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc reconnect Starting new session with [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: get_auth_request con 0x5620541c5400 auth_method 0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_configure stats_period=5
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94511104 unmapped: 1802240 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:38.622744+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94511104 unmapped: 1802240 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:39.622867+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94511104 unmapped: 1802240 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:40.623056+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94511104 unmapped: 1802240 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 41
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:41.623514+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94511104 unmapped: 1802240 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:42.623652+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 42
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:43.623806+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:44.623941+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:45.624551+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:46.625292+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:47.625777+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:48.626265+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:49.626633+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:50.626937+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:51.627218+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:52.627516+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:53.628121+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:54.628354+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:55.628537+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:56.628884+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:57.629183+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:58.629340+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:59.629528+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:00.629806+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:01.629985+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:02.630217+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:03.630401+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:04.630558+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:05.630768+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:06.630924+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:07.631112+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94363648 unmapped: 1949696 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:08.631279+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94371840 unmapped: 1941504 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:09.631423+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94371840 unmapped: 1941504 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:10.631626+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94371840 unmapped: 1941504 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:11.631835+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94371840 unmapped: 1941504 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:12.632042+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94371840 unmapped: 1941504 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:13.632278+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94371840 unmapped: 1941504 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:14.632873+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94371840 unmapped: 1941504 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:15.633058+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94371840 unmapped: 1941504 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:16.633525+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94380032 unmapped: 1933312 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:17.633661+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94380032 unmapped: 1933312 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:18.634118+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94380032 unmapped: 1933312 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:19.634348+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94380032 unmapped: 1933312 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:20.634835+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94380032 unmapped: 1933312 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:21.635244+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94380032 unmapped: 1933312 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:22.635474+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94380032 unmapped: 1933312 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:23.635714+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94380032 unmapped: 1933312 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:24.636020+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94388224 unmapped: 1925120 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:25.636232+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94388224 unmapped: 1925120 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:26.636387+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94388224 unmapped: 1925120 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:27.636590+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94388224 unmapped: 1925120 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:28.636813+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94388224 unmapped: 1925120 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:29.636981+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94388224 unmapped: 1925120 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:30.637284+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94388224 unmapped: 1925120 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:31.637526+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94388224 unmapped: 1925120 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:32.637708+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94396416 unmapped: 1916928 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:33.637946+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94396416 unmapped: 1916928 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:34.638162+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94396416 unmapped: 1916928 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:35.638301+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94396416 unmapped: 1916928 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:36.638530+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94396416 unmapped: 1916928 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:37.638763+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94396416 unmapped: 1916928 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:38.638943+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94396416 unmapped: 1916928 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:39.639174+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94396416 unmapped: 1916928 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:40.639361+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:41.639700+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:42.639965+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:43.640204+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:44.640369+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:45.640580+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:46.640723+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:47.640892+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:48.641053+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:49.641250+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:50.641400+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:51.641513+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:52.641634+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:53.641814+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:54.641966+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:55.642045+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:56.642151+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:57.642282+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:58.642417+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:59.642591+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:00.642701+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:01.642841+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:02.642977+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:03.643180+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:04.643284+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:05.643415+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:06.643557+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:07.643698+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:08.643870+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94404608 unmapped: 1908736 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 43
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:09.644030+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:10.644183+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:11.644341+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:12.644516+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:13.644674+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:14.644816+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:15.644997+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:16.645172+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:17.645328+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:18.645507+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:19.645660+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b98bd000/0x0/0x1bfc00000, data 0x214c259/0x21d0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:20.645837+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:21.645981+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:22.646137+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 872961 data_alloc: 184549376 data_used: 10383360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 94552064 unmapped: 1761280 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:23.646348+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d1e000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 106.838829041s of 106.848365784s, submitted: 2
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 92274688 unmapped: 4038656 heap: 96313344 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:24.646502+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93388800 unmapped: 19709952 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:25.646696+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 92 ms_handle_reset con 0x562053d1e000 session 0x5620519634a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93388800 unmapped: 19709952 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b8c4d000/0x0/0x1bfc00000, data 0x2dbc269/0x2e41000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:26.646878+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 98099200 unmapped: 14999552 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:27.647028+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 92 heartbeat osd_stat(store_statfs(0x1b7fd7000/0x0/0x1bfc00000, data 0x3a2e4ac/0x3ab6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1053526 data_alloc: 184549376 data_used: 7708672
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 ms_handle_reset con 0x562053d36800 session 0x562053e0be00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93462528 unmapped: 19636224 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:28.647178+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93478912 unmapped: 19619840 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:29.647314+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93478912 unmapped: 19619840 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:30.647481+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93478912 unmapped: 19619840 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:31.647717+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93478912 unmapped: 19619840 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:32.647886+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1058930 data_alloc: 184549376 data_used: 7708672
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93478912 unmapped: 19619840 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:33.648059+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93478912 unmapped: 19619840 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:34.648227+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93478912 unmapped: 19619840 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:35.648421+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93487104 unmapped: 19611648 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:36.648563+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93487104 unmapped: 19611648 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:37.648743+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1058930 data_alloc: 184549376 data_used: 7708672
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93487104 unmapped: 19611648 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:38.648872+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93487104 unmapped: 19611648 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:39.649082+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93487104 unmapped: 19611648 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:40.649260+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93487104 unmapped: 19611648 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:41.649420+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93487104 unmapped: 19611648 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:42.649580+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1058930 data_alloc: 184549376 data_used: 7708672
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93487104 unmapped: 19611648 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:43.649771+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93487104 unmapped: 19611648 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:44.649984+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93487104 unmapped: 19611648 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:45.650132+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93487104 unmapped: 19611648 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:46.650267+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93495296 unmapped: 19603456 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets getting new tickets!
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:47.650537+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _finish_auth 0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:47.652217+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1058930 data_alloc: 184549376 data_used: 7708672
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93495296 unmapped: 19603456 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:48.650795+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93495296 unmapped: 19603456 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:49.650944+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93495296 unmapped: 19603456 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:50.651093+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93495296 unmapped: 19603456 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:51.652705+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93495296 unmapped: 19603456 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:52.653858+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1058930 data_alloc: 184549376 data_used: 7708672
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93495296 unmapped: 19603456 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:53.654667+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93495296 unmapped: 19603456 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:54.654894+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93503488 unmapped: 19595264 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:55.656145+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93503488 unmapped: 19595264 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:56.657052+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93503488 unmapped: 19595264 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:57.657656+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1058930 data_alloc: 184549376 data_used: 7708672
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93503488 unmapped: 19595264 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain podman[331598]: 2025-12-02 10:21:05.112582088 +0000 UTC m=+0.106606417 container health_status 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:58.658091+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93503488 unmapped: 19595264 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:59.658287+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93503488 unmapped: 19595264 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:00.658616+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93503488 unmapped: 19595264 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:01.658794+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93503488 unmapped: 19595264 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:02.658983+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1058930 data_alloc: 184549376 data_used: 7708672
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93503488 unmapped: 19595264 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:03.659144+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93511680 unmapped: 19587072 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:04.659376+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93511680 unmapped: 19587072 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:05.659903+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93511680 unmapped: 19587072 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:06.660343+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93511680 unmapped: 19587072 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:07.660904+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1058930 data_alloc: 184549376 data_used: 7708672
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93511680 unmapped: 19587072 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:08.661174+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93511680 unmapped: 19587072 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:09.661360+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93511680 unmapped: 19587072 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:10.661576+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93511680 unmapped: 19587072 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:11.661966+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93511680 unmapped: 19587072 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:12.662199+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1058930 data_alloc: 184549376 data_used: 7708672
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93511680 unmapped: 19587072 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:13.662499+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93519872 unmapped: 19578880 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:14.663073+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93519872 unmapped: 19578880 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:15.663283+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93519872 unmapped: 19578880 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:16.663519+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93519872 unmapped: 19578880 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:17.663738+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1058930 data_alloc: 184549376 data_used: 7708672
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93519872 unmapped: 19578880 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:18.664027+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054660800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 ms_handle_reset con 0x562054660800 session 0x5620539903c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 ms_handle_reset con 0x562054032000 session 0x5620539905a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054297400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 ms_handle_reset con 0x562054297400 session 0x562053990780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93519872 unmapped: 19578880 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:19.664278+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d1e000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 93519872 unmapped: 19578880 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:20.664540+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 ms_handle_reset con 0x562053d1e000 session 0x562053990960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7fd2000/0x0/0x1bfc00000, data 0x3a306df/0x3aba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 102236160 unmapped: 10862592 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:21.664687+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 ms_handle_reset con 0x562053d36800 session 0x562053990d20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 ms_handle_reset con 0x562054032000 session 0x5620539910e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 102236160 unmapped: 10862592 heap: 113098752 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054660800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 58.616416931s of 58.821014404s, submitted: 25
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:22.664825+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1192916 data_alloc: 184549376 data_used: 18132992
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 ms_handle_reset con 0x562054660800 session 0x5620539912c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 105037824 unmapped: 9682944 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:23.665017+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 105037824 unmapped: 9682944 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:24.665152+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 105037824 unmapped: 9682944 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:25.665319+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b7410000/0x0/0x1bfc00000, data 0x45f3741/0x467e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052820000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 104906752 unmapped: 9814016 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 ms_handle_reset con 0x562052820000 session 0x5620539914a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:26.665490+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d1e000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 94 ms_handle_reset con 0x562053d1e000 session 0x562053991680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 104947712 unmapped: 9773056 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:27.665660+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1204988 data_alloc: 184549376 data_used: 18145280
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 104947712 unmapped: 9773056 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 94 heartbeat osd_stat(store_statfs(0x1b73e5000/0x0/0x1bfc00000, data 0x461a207/0x46a8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:28.665865+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 104947712 unmapped: 9773056 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:29.666014+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 95 ms_handle_reset con 0x562054032000 session 0x562053991a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 104996864 unmapped: 9723904 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:30.666146+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 104996864 unmapped: 9723904 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:31.666274+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 104996864 unmapped: 9723904 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:32.666400+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1207205 data_alloc: 184549376 data_used: 18153472
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 95 heartbeat osd_stat(store_statfs(0x1b73e3000/0x0/0x1bfc00000, data 0x461bbb5/0x46a9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 104996864 unmapped: 9723904 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:33.666523+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 104996864 unmapped: 9723904 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:34.666634+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 104996864 unmapped: 9723904 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:35.666808+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 12.337144852s of 13.127574921s, submitted: 120
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 105021440 unmapped: 9699328 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:36.666980+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 105021440 unmapped: 9699328 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:37.667116+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1209876 data_alloc: 184549376 data_used: 18173952
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 110141440 unmapped: 4579328 heap: 114720768 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:38.667220+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054660800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 heartbeat osd_stat(store_statfs(0x1b6751000/0x0/0x1bfc00000, data 0x52aecab/0x533d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 113475584 unmapped: 3792896 heap: 117268480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:39.667355+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 111247360 unmapped: 6021120 heap: 117268480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:40.667497+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 111624192 unmapped: 5644288 heap: 117268480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:41.667616+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 111624192 unmapped: 5644288 heap: 117268480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:42.667753+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 heartbeat osd_stat(store_statfs(0x1b6017000/0x0/0x1bfc00000, data 0x59e0cab/0x5a6f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1370538 data_alloc: 184549376 data_used: 18259968
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 111689728 unmapped: 5578752 heap: 117268480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:43.667897+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054236800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 ms_handle_reset con 0x562054236800 session 0x562052bb1a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 ms_handle_reset con 0x562053d36000 session 0x562055c53680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 112156672 unmapped: 5111808 heap: 117268480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:44.668039+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 ms_handle_reset con 0x562053b6ac00 session 0x562055c53860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 111919104 unmapped: 21643264 heap: 133562368 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:45.668146+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.788699150s of 10.022469521s, submitted: 257
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 ms_handle_reset con 0x562053b6ac00 session 0x562053c88f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d1e000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 ms_handle_reset con 0x562053d1e000 session 0x562053ba4d20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 112050176 unmapped: 21512192 heap: 133562368 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:46.668275+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 heartbeat osd_stat(store_statfs(0x1b5140000/0x0/0x1bfc00000, data 0x68bed0d/0x694e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 112132096 unmapped: 21430272 heap: 133562368 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:47.668382+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 ms_handle_reset con 0x562053d36000 session 0x562055c52780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1483851 data_alloc: 184549376 data_used: 18264064
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:48.668495+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 111943680 unmapped: 21618688 heap: 133562368 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 heartbeat osd_stat(store_statfs(0x1b511f000/0x0/0x1bfc00000, data 0x68dfd0d/0x696f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 ms_handle_reset con 0x562054032000 session 0x562055c52960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054236800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 ms_handle_reset con 0x562054236800 session 0x562055c52b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:49.668631+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 111091712 unmapped: 22470656 heap: 133562368 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d1e000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 heartbeat osd_stat(store_statfs(0x1b511d000/0x0/0x1bfc00000, data 0x68dfd40/0x6971000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,11])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:50.668749+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 112140288 unmapped: 21422080 heap: 133562368 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:51.668898+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 112140288 unmapped: 21422080 heap: 133562368 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 ms_handle_reset con 0x562054660800 session 0x562051d152c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 ms_handle_reset con 0x562053d36800 session 0x56205554f0e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:52.669060+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 112140288 unmapped: 21422080 heap: 133562368 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1496375 data_alloc: 184549376 data_used: 18796544
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:53.669249+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 113451008 unmapped: 20111360 heap: 133562368 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 97 ms_handle_reset con 0x562053d36000 session 0x562053b8c960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:54.669360+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123781120 unmapped: 13639680 heap: 137420800 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 97 ms_handle_reset con 0x562054032000 session 0x562053c43860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052050000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 97 heartbeat osd_stat(store_statfs(0x1b4552000/0x0/0x1bfc00000, data 0x74a7024/0x753c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:55.669618+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120037376 unmapped: 25436160 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.466485023s of 10.426920891s, submitted: 131
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 98 ms_handle_reset con 0x562052050000 session 0x5620519621e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:56.670029+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120061952 unmapped: 25411584 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 98 heartbeat osd_stat(store_statfs(0x1b30ca000/0x0/0x1bfc00000, data 0x892d226/0x89c3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 99 ms_handle_reset con 0x562053d36000 session 0x562052baf4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:57.670566+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120176640 unmapped: 25296896 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1800407 data_alloc: 201326592 data_used: 24989696
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:58.671276+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120176640 unmapped: 25296896 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 99 heartbeat osd_stat(store_statfs(0x1b30c6000/0x0/0x1bfc00000, data 0x892f452/0x89c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:59.671635+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120242176 unmapped: 25231360 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 99 ms_handle_reset con 0x562053d36800 session 0x5620541c7c20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 99 ms_handle_reset con 0x562054032000 session 0x56205858a000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054660800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 99 ms_handle_reset con 0x562054660800 session 0x56205858a1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:00.671897+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120283136 unmapped: 25190400 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:01.672765+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120299520 unmapped: 25174016 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:02.672909+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120299520 unmapped: 25174016 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d1f000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1566693 data_alloc: 201326592 data_used: 24563712
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562053d1f000 session 0x56205858a5a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:03.673336+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 119988224 unmapped: 25485312 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 heartbeat osd_stat(store_statfs(0x1b509a000/0x0/0x1bfc00000, data 0x69574e6/0x69ee000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:04.673510+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120922112 unmapped: 24551424 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 heartbeat osd_stat(store_statfs(0x1b4e39000/0x0/0x1bfc00000, data 0x6bb84e6/0x6c4f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [2,2,1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:05.673692+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 125657088 unmapped: 19816448 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.342316628s of 10.134612083s, submitted: 223
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 heartbeat osd_stat(store_statfs(0x1b41a0000/0x0/0x1bfc00000, data 0x784f4e6/0x78e6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:06.673828+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 126058496 unmapped: 19415040 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 heartbeat osd_stat(store_statfs(0x1b4197000/0x0/0x1bfc00000, data 0x78574e6/0x78ee000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:07.674001+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123305984 unmapped: 22167552 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1691327 data_alloc: 201326592 data_used: 25395200
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 heartbeat osd_stat(store_statfs(0x1b4192000/0x0/0x1bfc00000, data 0x78654e6/0x78fc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:08.674226+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123330560 unmapped: 22142976 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562053d36000 session 0x56205858b680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:09.674366+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562053d36800 session 0x56205858b860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 heartbeat osd_stat(store_statfs(0x1b4192000/0x0/0x1bfc00000, data 0x78654e6/0x78fc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562054032000 session 0x56205858ba40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123363328 unmapped: 22110208 heap: 145473536 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054660800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562054660800 session 0x56205858bc20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054237000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cff800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562053cff800 session 0x562053bcb0e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562054237000 session 0x562054eeba40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cff800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562053cff800 session 0x562054eeb860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562053d36000 session 0x562054eeb680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562053d36800 session 0x562054eeb4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562054032000 session 0x562053bfed20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cff800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:10.674529+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 144023552 unmapped: 12484608 heap: 156508160 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562053cff800 session 0x562053bfe000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:11.674688+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 135208960 unmapped: 28139520 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:12.674828+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 135208960 unmapped: 28139520 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0740741
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 905969664 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1923366 data_alloc: 201326592 data_used: 35606528
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:13.674988+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 135266304 unmapped: 28082176 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562053b6ac00 session 0x562055c52f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 ms_handle_reset con 0x562053d1e000 session 0x562053bca780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054237000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:14.675148+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 135774208 unmapped: 27574272 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 101 ms_handle_reset con 0x562054032000 session 0x562058591a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 101 heartbeat osd_stat(store_statfs(0x1b27f9000/0x0/0x1bfc00000, data 0x91fd509/0x9295000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:15.675268+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 125067264 unmapped: 38281216 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:16.675387+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127606784 unmapped: 35741696 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:17.675516+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127606784 unmapped: 35741696 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 101 heartbeat osd_stat(store_statfs(0x1b45ad000/0x0/0x1bfc00000, data 0x71c56fb/0x725d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1698237 data_alloc: 201326592 data_used: 27648000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:18.675652+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127664128 unmapped: 35684352 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:19.675787+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127664128 unmapped: 35684352 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 101 ms_handle_reset con 0x562053d36000 session 0x56205858a000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 13.041388512s of 13.851861954s, submitted: 127
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 101 ms_handle_reset con 0x562053d36800 session 0x562053b35a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:20.675897+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123043840 unmapped: 40304640 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 102 ms_handle_reset con 0x562053b6ac00 session 0x562053c89a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:21.676015+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123101184 unmapped: 40247296 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:22.676145+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123101184 unmapped: 40247296 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1421319 data_alloc: 201326592 data_used: 20369408
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:23.676310+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 102 heartbeat osd_stat(store_statfs(0x1b661c000/0x0/0x1bfc00000, data 0x53db75c/0x5471000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123101184 unmapped: 40247296 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:24.676508+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123101184 unmapped: 40247296 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:25.676614+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123256832 unmapped: 40091648 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 102 heartbeat osd_stat(store_statfs(0x1b661d000/0x0/0x1bfc00000, data 0x53db75c/0x5471000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:26.676787+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123559936 unmapped: 39788544 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:27.676914+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123609088 unmapped: 39739392 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1434871 data_alloc: 201326592 data_used: 21360640
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:28.677239+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123617280 unmapped: 39731200 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 102 ms_handle_reset con 0x562054237000 session 0x562058590960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 102 heartbeat osd_stat(store_statfs(0x1b661d000/0x0/0x1bfc00000, data 0x53db75c/0x5471000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:29.677392+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123666432 unmapped: 39682048 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cff800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.772720337s of 10.028295517s, submitted: 108
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 103 ms_handle_reset con 0x562053cff800 session 0x5620576885a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d1e000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:30.677513+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 103 ms_handle_reset con 0x562053d1e000 session 0x562053e65e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 103 ms_handle_reset con 0x562054032000 session 0x562052995e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123641856 unmapped: 39706624 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 103 ms_handle_reset con 0x562053b6ac00 session 0x562052ba90e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cff800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:31.677660+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127623168 unmapped: 35725312 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 103 heartbeat osd_stat(store_statfs(0x1b5203000/0x0/0x1bfc00000, data 0x67f1a40/0x688b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 103 ms_handle_reset con 0x562053cff800 session 0x562053c88b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:32.677784+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127655936 unmapped: 35692544 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 105 ms_handle_reset con 0x562053d36800 session 0x562057688d20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1626474 data_alloc: 201326592 data_used: 22441984
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 105 heartbeat osd_stat(store_statfs(0x1b51ff000/0x0/0x1bfc00000, data 0x67f3c42/0x688e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:33.677933+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054237000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127664128 unmapped: 35684352 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 105 ms_handle_reset con 0x562054237000 session 0x562057688f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 105 ms_handle_reset con 0x562053b6ac00 session 0x5620576892c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 105 heartbeat osd_stat(store_statfs(0x1b51fb000/0x0/0x1bfc00000, data 0x67f5e6e/0x6891000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cff800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 105 ms_handle_reset con 0x562053cff800 session 0x5620576894a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:34.678205+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127664128 unmapped: 35684352 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 105 ms_handle_reset con 0x562053d36800 session 0x562057689860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:35.678394+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 105 heartbeat osd_stat(store_statfs(0x1b51fb000/0x0/0x1bfc00000, data 0x67f5e6e/0x6891000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 119152640 unmapped: 44195840 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 heartbeat osd_stat(store_statfs(0x1b6b95000/0x0/0x1bfc00000, data 0x4e5de4b/0x4ef8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:36.678516+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 heartbeat osd_stat(store_statfs(0x1b6b95000/0x0/0x1bfc00000, data 0x4e5de4b/0x4ef8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 119152640 unmapped: 44195840 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:37.678658+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 119152640 unmapped: 44195840 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1373354 data_alloc: 184549376 data_used: 14032896
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:38.678855+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 119152640 unmapped: 44195840 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 ms_handle_reset con 0x562054032000 session 0x562053e65e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054660800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 ms_handle_reset con 0x562054660800 session 0x5620576883c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 ms_handle_reset con 0x562053b6ac00 session 0x5620576885a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cff800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 ms_handle_reset con 0x562053cff800 session 0x562053c89a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 ms_handle_reset con 0x562054032000 session 0x562052ba8960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 ms_handle_reset con 0x562053d36800 session 0x562053b352c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:39.679069+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 ms_handle_reset con 0x562054033800 session 0x562053b35a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 ms_handle_reset con 0x562053b6ac00 session 0x56205858ab40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cff800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 ms_handle_reset con 0x562053cff800 session 0x56205858a960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 119136256 unmapped: 44212224 heap: 163348480 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 ms_handle_reset con 0x562053d36800 session 0x56205858a000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.252107620s of 10.029066086s, submitted: 127
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:40.679201+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 ms_handle_reset con 0x562054032000 session 0x562058591a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 heartbeat osd_stat(store_statfs(0x1b4a78000/0x0/0x1bfc00000, data 0x6f7af41/0x7016000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132382720 unmapped: 38379520 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:41.679388+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132382720 unmapped: 38379520 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:42.679561+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 124968960 unmapped: 45793280 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1653682 data_alloc: 184549376 data_used: 16547840
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:43.679727+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 124993536 unmapped: 45768704 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 107 ms_handle_reset con 0x562052821400 session 0x56205858a5a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:44.679845+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 119808000 unmapped: 50954240 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 107 heartbeat osd_stat(store_statfs(0x1b5e60000/0x0/0x1bfc00000, data 0x5b8d156/0x5c29000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:45.680007+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 121487360 unmapped: 49274880 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:46.680198+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 121487360 unmapped: 49274880 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:47.680367+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 121487360 unmapped: 49274880 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 107 ms_handle_reset con 0x562051d30000 session 0x562055c52780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1534051 data_alloc: 201326592 data_used: 20226048
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:48.680515+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 119324672 unmapped: 51437568 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 107 ms_handle_reset con 0x562053b6ac00 session 0x562053b350e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:49.680636+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 119382016 unmapped: 51380224 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cff800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.294309616s of 10.008589745s, submitted: 164
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 107 heartbeat osd_stat(store_statfs(0x1b7fa4000/0x0/0x1bfc00000, data 0x3a4e133/0x3ae9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,2])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 107 ms_handle_reset con 0x562053cff800 session 0x562053b352c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:50.680773+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 121126912 unmapped: 49635328 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:51.680955+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 121126912 unmapped: 49635328 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:52.681112+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 121126912 unmapped: 49635328 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1365185 data_alloc: 184549376 data_used: 14053376
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:53.681318+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 121126912 unmapped: 49635328 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:54.681545+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 121126912 unmapped: 49635328 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:55.681762+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 121126912 unmapped: 49635328 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 heartbeat osd_stat(store_statfs(0x1b6cee000/0x0/0x1bfc00000, data 0x490228b/0x499f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:56.681924+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 121126912 unmapped: 49635328 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 ms_handle_reset con 0x562053d36800 session 0x562053bfed20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:57.682090+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120922112 unmapped: 49840128 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1367462 data_alloc: 184549376 data_used: 14053376
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:58.682226+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120922112 unmapped: 49840128 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:59.682359+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120922112 unmapped: 49840128 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:00.682517+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120922112 unmapped: 49840128 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:01.682677+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120922112 unmapped: 49840128 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 heartbeat osd_stat(store_statfs(0x1b6ccb000/0x0/0x1bfc00000, data 0x492628b/0x49c3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:02.682803+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120922112 unmapped: 49840128 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1367462 data_alloc: 184549376 data_used: 14053376
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:03.682971+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120922112 unmapped: 49840128 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054236400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:04.683124+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 120758272 unmapped: 50003968 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:05.683308+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 121290752 unmapped: 49471488 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:06.683523+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 heartbeat osd_stat(store_statfs(0x1b6ccb000/0x0/0x1bfc00000, data 0x492628b/0x49c3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123936768 unmapped: 46825472 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:07.683752+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123936768 unmapped: 46825472 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 heartbeat osd_stat(store_statfs(0x1b6ccb000/0x0/0x1bfc00000, data 0x492628b/0x49c3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:08.683913+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1423142 data_alloc: 201326592 data_used: 21798912
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123936768 unmapped: 46825472 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 heartbeat osd_stat(store_statfs(0x1b6ccb000/0x0/0x1bfc00000, data 0x492628b/0x49c3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:09.684083+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123936768 unmapped: 46825472 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 heartbeat osd_stat(store_statfs(0x1b6ccb000/0x0/0x1bfc00000, data 0x492628b/0x49c3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:10.684279+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123936768 unmapped: 46825472 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:11.684492+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123936768 unmapped: 46825472 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:12.684651+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123936768 unmapped: 46825472 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:13.684867+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1423142 data_alloc: 201326592 data_used: 21798912
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123936768 unmapped: 46825472 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:14.685039+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 heartbeat osd_stat(store_statfs(0x1b6ccb000/0x0/0x1bfc00000, data 0x492628b/0x49c3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123936768 unmapped: 46825472 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 24.706624985s of 24.943386078s, submitted: 49
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:15.685185+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 heartbeat osd_stat(store_statfs(0x1b688d000/0x0/0x1bfc00000, data 0x4d5c28b/0x4df9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054237c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127754240 unmapped: 43008000 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:16.685343+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 ms_handle_reset con 0x562054236400 session 0x56205858b2c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 ms_handle_reset con 0x562054032000 session 0x5620585a3a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127762432 unmapped: 42999808 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:17.685514+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128172032 unmapped: 42590208 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:18.685790+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1493614 data_alloc: 201326592 data_used: 22032384
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128172032 unmapped: 42590208 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 heartbeat osd_stat(store_statfs(0x1b654b000/0x0/0x1bfc00000, data 0x50a628b/0x5143000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:19.685999+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128172032 unmapped: 42590208 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:20.686227+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128188416 unmapped: 42573824 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 heartbeat osd_stat(store_statfs(0x1b654b000/0x0/0x1bfc00000, data 0x50a628b/0x5143000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:21.686409+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128188416 unmapped: 42573824 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:22.686658+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127942656 unmapped: 42819584 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:23.686899+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1491526 data_alloc: 201326592 data_used: 22036480
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128090112 unmapped: 42672128 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 heartbeat osd_stat(store_statfs(0x1b652c000/0x0/0x1bfc00000, data 0x50c528b/0x5162000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:24.687036+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128090112 unmapped: 42672128 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.012789726s of 10.444272995s, submitted: 102
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 ms_handle_reset con 0x562051d30000 session 0x562054ee9c20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 ms_handle_reset con 0x562054237c00 session 0x56205554f0e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:25.687189+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128098304 unmapped: 42663936 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 ms_handle_reset con 0x562053b6ac00 session 0x562053990d20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:26.687320+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123666432 unmapped: 47095808 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:27.687524+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123666432 unmapped: 47095808 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:28.687674+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1265626 data_alloc: 184549376 data_used: 14053376
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123666432 unmapped: 47095808 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:29.687858+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123666432 unmapped: 47095808 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cff800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 ms_handle_reset con 0x562053cff800 session 0x562053b2c000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 ms_handle_reset con 0x562051d30000 session 0x562053b2da40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 ms_handle_reset con 0x562053b6ac00 session 0x562053b2cf00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 ms_handle_reset con 0x562054032000 session 0x562052ba41e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054237c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 heartbeat osd_stat(store_statfs(0x1b7b79000/0x0/0x1bfc00000, data 0x3a50229/0x3aec000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:30.688038+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 ms_handle_reset con 0x562054237c00 session 0x5620585a32c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123052032 unmapped: 47710208 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:31.688194+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 109 ms_handle_reset con 0x562053d36800 session 0x5620585a25a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123027456 unmapped: 47734784 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:32.688510+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 109 heartbeat osd_stat(store_statfs(0x1b6c51000/0x0/0x1bfc00000, data 0x499d46c/0x4a3c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123035648 unmapped: 47726592 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:33.688697+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 109 ms_handle_reset con 0x562051d30000 session 0x5620585a2b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1395177 data_alloc: 184549376 data_used: 14061568
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123035648 unmapped: 47726592 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:34.688855+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 109 ms_handle_reset con 0x562053b6ac00 session 0x5620585a3c20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 109 ms_handle_reset con 0x562053d36800 session 0x56205291dc20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 123035648 unmapped: 47726592 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 109 ms_handle_reset con 0x562054032000 session 0x5620519621e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:35.689020+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054237c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 122724352 unmapped: 48037888 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ae0400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.289743423s of 10.727819443s, submitted: 105
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:36.689186+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 109 heartbeat osd_stat(store_statfs(0x1b6c27000/0x0/0x1bfc00000, data 0x49c747c/0x4a67000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 122732544 unmapped: 48029696 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce6400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 110 ms_handle_reset con 0x562053ae0400 session 0x562055c52d20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:37.689361+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 122781696 unmapped: 47980544 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:38.689573+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1425030 data_alloc: 184549376 data_used: 17223680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 122781696 unmapped: 47980544 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:39.689740+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 122781696 unmapped: 47980544 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 110 heartbeat osd_stat(store_statfs(0x1b6c22000/0x0/0x1bfc00000, data 0x49c96e0/0x4a6b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:40.689910+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 122871808 unmapped: 47890432 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: get_auth_request con 0x562051d30c00 auth_method 0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:41.690033+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 122871808 unmapped: 47890432 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:42.690217+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 122871808 unmapped: 47890432 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:43.690503+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1426900 data_alloc: 184549376 data_used: 17227776
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 122871808 unmapped: 47890432 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:44.690714+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b6c20000/0x0/0x1bfc00000, data 0x49cb7d6/0x4a6e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 122871808 unmapped: 47890432 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:45.690863+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 122871808 unmapped: 47890432 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:46.691052+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 122871808 unmapped: 47890432 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:47.691222+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.398875237s of 11.772677422s, submitted: 71
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 124608512 unmapped: 46153728 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:48.691385+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1475444 data_alloc: 184549376 data_used: 17227776
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127778816 unmapped: 42983424 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:49.691563+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b65a0000/0x0/0x1bfc00000, data 0x50447d6/0x50e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 129228800 unmapped: 41533440 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:50.691710+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 130056192 unmapped: 40706048 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b6579000/0x0/0x1bfc00000, data 0x50637d6/0x5106000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:51.691847+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 130056192 unmapped: 40706048 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:52.692000+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 130056192 unmapped: 40706048 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:53.692265+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1489756 data_alloc: 184549376 data_used: 17227776
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 130056192 unmapped: 40706048 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:54.692515+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 130056192 unmapped: 40706048 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:55.692667+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 130056192 unmapped: 40706048 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:56.692874+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 130072576 unmapped: 40689664 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b6579000/0x0/0x1bfc00000, data 0x50637d6/0x5106000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:57.693086+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 130072576 unmapped: 40689664 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:58.693302+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1489756 data_alloc: 184549376 data_used: 17227776
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 130072576 unmapped: 40689664 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:59.693678+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 130072576 unmapped: 40689664 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 ms_handle_reset con 0x562053ce6400 session 0x562052c130e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 ms_handle_reset con 0x562054237c00 session 0x5620519630e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:00.693810+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 12.263017654s of 12.546191216s, submitted: 94
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 ms_handle_reset con 0x562051d30000 session 0x562058590b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127041536 unmapped: 43720704 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:01.693985+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b7b97000/0x0/0x1bfc00000, data 0x3a56793/0x3af6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127041536 unmapped: 43720704 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:02.694156+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127041536 unmapped: 43720704 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:03.694338+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1293079 data_alloc: 184549376 data_used: 14077952
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127041536 unmapped: 43720704 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:04.694602+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127041536 unmapped: 43720704 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b7b97000/0x0/0x1bfc00000, data 0x3a56793/0x3af6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:05.694880+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127041536 unmapped: 43720704 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:06.695179+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127041536 unmapped: 43720704 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b7b97000/0x0/0x1bfc00000, data 0x3a56793/0x3af6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:07.695389+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127041536 unmapped: 43720704 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:08.695661+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1293079 data_alloc: 184549376 data_used: 14077952
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127041536 unmapped: 43720704 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:09.695836+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127041536 unmapped: 43720704 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:10.696120+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127049728 unmapped: 43712512 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:11.696351+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127049728 unmapped: 43712512 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:12.696553+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b7b97000/0x0/0x1bfc00000, data 0x3a56793/0x3af6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127057920 unmapped: 43704320 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:13.696729+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1293079 data_alloc: 184549376 data_used: 14077952
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127057920 unmapped: 43704320 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:14.696913+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b7b97000/0x0/0x1bfc00000, data 0x3a56793/0x3af6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127057920 unmapped: 43704320 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:15.697112+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127057920 unmapped: 43704320 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b7b97000/0x0/0x1bfc00000, data 0x3a56793/0x3af6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:16.697402+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127057920 unmapped: 43704320 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 16.769365311s of 16.900638580s, submitted: 34
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:17.697648+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127049728 unmapped: 43712512 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 112 ms_handle_reset con 0x562053b6ac00 session 0x5620541c7c20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:18.697817+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1300378 data_alloc: 184549376 data_used: 14086144
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127049728 unmapped: 43712512 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:19.697983+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b7b93000/0x0/0x1bfc00000, data 0x3a591a3/0x3afa000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127074304 unmapped: 43687936 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 113 ms_handle_reset con 0x562053d36800 session 0x5620541c61e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:20.698204+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 113 heartbeat osd_stat(store_statfs(0x1b7b90000/0x0/0x1bfc00000, data 0x3a5b407/0x3afe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127107072 unmapped: 43655168 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:21.698443+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127107072 unmapped: 43655168 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:22.698638+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 113 heartbeat osd_stat(store_statfs(0x1b7b91000/0x0/0x1bfc00000, data 0x3a5ac07/0x3afd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127107072 unmapped: 43655168 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:23.698795+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1301184 data_alloc: 184549376 data_used: 14090240
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127107072 unmapped: 43655168 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:24.699120+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 113 heartbeat osd_stat(store_statfs(0x1b7b91000/0x0/0x1bfc00000, data 0x3a5ac07/0x3afd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127115264 unmapped: 43646976 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:25.699328+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127123456 unmapped: 43638784 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:26.699517+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127123456 unmapped: 43638784 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:27.699724+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127123456 unmapped: 43638784 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:28.699867+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1304478 data_alloc: 184549376 data_used: 14102528
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127131648 unmapped: 43630592 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:29.700101+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127131648 unmapped: 43630592 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b7b8e000/0x0/0x1bfc00000, data 0x3a5ccfd/0x3b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:30.700252+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127131648 unmapped: 43630592 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:31.700423+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127131648 unmapped: 43630592 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:32.700544+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b7b8e000/0x0/0x1bfc00000, data 0x3a5ccfd/0x3b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127139840 unmapped: 43622400 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:33.700718+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1304478 data_alloc: 184549376 data_used: 14102528
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127139840 unmapped: 43622400 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:34.700903+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127139840 unmapped: 43622400 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:35.701149+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b7b8e000/0x0/0x1bfc00000, data 0x3a5ccfd/0x3b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127139840 unmapped: 43622400 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:36.701321+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127156224 unmapped: 43606016 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:37.701521+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127156224 unmapped: 43606016 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b7b8e000/0x0/0x1bfc00000, data 0x3a5ccfd/0x3b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:38.701660+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1304958 data_alloc: 184549376 data_used: 14114816
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127156224 unmapped: 43606016 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:39.701888+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b7b8e000/0x0/0x1bfc00000, data 0x3a5ccfd/0x3b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127156224 unmapped: 43606016 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:40.702102+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127156224 unmapped: 43606016 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:41.702307+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127156224 unmapped: 43606016 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:42.702516+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127156224 unmapped: 43606016 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:43.702836+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b7b8e000/0x0/0x1bfc00000, data 0x3a5ccfd/0x3b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1304958 data_alloc: 184549376 data_used: 14114816
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127164416 unmapped: 43597824 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:44.703055+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b7b8e000/0x0/0x1bfc00000, data 0x3a5ccfd/0x3b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127172608 unmapped: 43589632 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:45.703269+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b7b8e000/0x0/0x1bfc00000, data 0x3a5ccfd/0x3b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127172608 unmapped: 43589632 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:46.703522+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127172608 unmapped: 43589632 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:47.703723+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127172608 unmapped: 43589632 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:48.703910+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1304958 data_alloc: 184549376 data_used: 14114816
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127172608 unmapped: 43589632 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:49.704015+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b7b8e000/0x0/0x1bfc00000, data 0x3a5ccfd/0x3b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127172608 unmapped: 43589632 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:50.704173+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127172608 unmapped: 43589632 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:51.704411+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127180800 unmapped: 43581440 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:52.704613+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127188992 unmapped: 43573248 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:53.704840+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1304958 data_alloc: 184549376 data_used: 14114816
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127188992 unmapped: 43573248 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:54.705107+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b7b8e000/0x0/0x1bfc00000, data 0x3a5ccfd/0x3b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127188992 unmapped: 43573248 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:55.705307+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127188992 unmapped: 43573248 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:56.705532+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127188992 unmapped: 43573248 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:57.705714+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127180800 unmapped: 43581440 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:58.705872+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1304958 data_alloc: 184549376 data_used: 14114816
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127180800 unmapped: 43581440 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:59.706013+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127188992 unmapped: 43573248 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b7b8e000/0x0/0x1bfc00000, data 0x3a5ccfd/0x3b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:00.706152+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127197184 unmapped: 43565056 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:01.706285+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b7b8e000/0x0/0x1bfc00000, data 0x3a5ccfd/0x3b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127197184 unmapped: 43565056 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:02.706758+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127197184 unmapped: 43565056 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:03.706969+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1304958 data_alloc: 184549376 data_used: 14114816
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127197184 unmapped: 43565056 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:04.707201+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b7b8e000/0x0/0x1bfc00000, data 0x3a5ccfd/0x3b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127197184 unmapped: 43565056 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:05.707421+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127197184 unmapped: 43565056 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 48.826694489s of 49.083557129s, submitted: 89
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:06.707560+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 114 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 115 ms_handle_reset con 0x562051d30000 session 0x56205858a000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127328256 unmapped: 43433984 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:07.707821+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127336448 unmapped: 43425792 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:08.707938+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b7b87000/0x0/0x1bfc00000, data 0x3a5f33f/0x3b06000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1316563 data_alloc: 184549376 data_used: 14123008
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 116 ms_handle_reset con 0x562053b6ac00 session 0x56205554ed20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127393792 unmapped: 43368448 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:09.708102+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce6400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 116 ms_handle_reset con 0x562053d36800 session 0x562053c8a1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054237c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 116 ms_handle_reset con 0x562054237c00 session 0x562053c8b4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127401984 unmapped: 43360256 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:10.708269+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127426560 unmapped: 43335680 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:11.708511+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 117 ms_handle_reset con 0x562054032000 session 0x562053c8b0e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 136241152 unmapped: 34521088 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:12.708659+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 117 ms_handle_reset con 0x562051d30000 session 0x5620541c7680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 117 ms_handle_reset con 0x562053b6ac00 session 0x56205401f680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 117 heartbeat osd_stat(store_statfs(0x1b637e000/0x0/0x1bfc00000, data 0x52637b5/0x530f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 136282112 unmapped: 34480128 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:13.708906+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1720098 data_alloc: 184549376 data_used: 14143488
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 118 ms_handle_reset con 0x562053d36800 session 0x56205401eb40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127926272 unmapped: 42835968 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:14.709042+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054237c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 119 ms_handle_reset con 0x562054032000 session 0x562053b2da40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 119 ms_handle_reset con 0x562053ce6400 session 0x56205858a3c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127950848 unmapped: 42811392 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:15.709229+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 119 ms_handle_reset con 0x562054237c00 session 0x562053ba43c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 127967232 unmapped: 42795008 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:16.709384+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.537930489s of 10.529221535s, submitted: 205
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 120 ms_handle_reset con 0x562053b6ac00 session 0x562055c46960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128040960 unmapped: 42721280 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 120 ms_handle_reset con 0x562051d30000 session 0x562052bae1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:17.709533+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128098304 unmapped: 42663936 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:18.709696+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 121 ms_handle_reset con 0x562053d36800 session 0x56205858b680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1351056 data_alloc: 184549376 data_used: 14168064
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 122 heartbeat osd_stat(store_statfs(0x1b7b70000/0x0/0x1bfc00000, data 0x3a6c1e4/0x3b1c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 122 ms_handle_reset con 0x562054032000 session 0x562053b8c000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128155648 unmapped: 42606592 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:19.709878+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 122 ms_handle_reset con 0x562051d30000 session 0x562051d15e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 122 ms_handle_reset con 0x562053b6ac00 session 0x562053c89a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128229376 unmapped: 42532864 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:20.710081+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128237568 unmapped: 42524672 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:21.710213+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 124 heartbeat osd_stat(store_statfs(0x1b7b65000/0x0/0x1bfc00000, data 0x3a72862/0x3b28000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 124 ms_handle_reset con 0x562053d36800 session 0x562054eeaf00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128237568 unmapped: 42524672 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:22.710352+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054237c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128311296 unmapped: 42450944 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 125 ms_handle_reset con 0x562054237c00 session 0x562054eea5a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b48000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:23.710512+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1363544 data_alloc: 184549376 data_used: 14168064
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 125 ms_handle_reset con 0x562051b48000 session 0x56205703c1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128335872 unmapped: 42426368 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:24.710662+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128385024 unmapped: 42377216 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:25.710817+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b48000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 126 ms_handle_reset con 0x562051b48000 session 0x56205703c3c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128442368 unmapped: 42319872 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:26.711001+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 127 ms_handle_reset con 0x562051d30000 session 0x56205703c780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.717695236s of 10.230424881s, submitted: 249
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 128 ms_handle_reset con 0x562053b6ac00 session 0x56205703c960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 128 heartbeat osd_stat(store_statfs(0x1b7b5d000/0x0/0x1bfc00000, data 0x3a76d3a/0x3b30000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128557056 unmapped: 42205184 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:27.711125+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128557056 unmapped: 42205184 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:28.711264+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1372550 data_alloc: 184549376 data_used: 14168064
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 129 ms_handle_reset con 0x562053d36800 session 0x56205703cd20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128573440 unmapped: 42188800 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:29.711405+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 129 heartbeat osd_stat(store_statfs(0x1b7b4f000/0x0/0x1bfc00000, data 0x3a7d4e8/0x3b3c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128573440 unmapped: 42188800 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:30.711544+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128573440 unmapped: 42188800 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:31.711721+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054237c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 130 ms_handle_reset con 0x562054237c00 session 0x56205703cf00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b48000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128598016 unmapped: 42164224 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:32.711885+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain podman[331600]: 2025-12-02 10:21:05.158534886 +0000 UTC m=+0.144969133 container health_status a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd)
Dec 02 10:21:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59245 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 131 ms_handle_reset con 0x562051b48000 session 0x56205703d2c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128606208 unmapped: 42156032 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:33.712071+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1381754 data_alloc: 184549376 data_used: 14168064
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 131 heartbeat osd_stat(store_statfs(0x1b7b4b000/0x0/0x1bfc00000, data 0x3a81800/0x3b42000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128606208 unmapped: 42156032 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:34.712229+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 131 ms_handle_reset con 0x562051d30000 session 0x56205703d4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 132 ms_handle_reset con 0x562053b6ac00 session 0x56205703d860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128679936 unmapped: 42082304 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:35.712324+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 132 ms_handle_reset con 0x562053d36800 session 0x56205703da40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128679936 unmapped: 42082304 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:36.712523+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054ea7800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 132 ms_handle_reset con 0x562054ea7800 session 0x56205703dc20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128679936 unmapped: 42082304 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:37.712662+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128679936 unmapped: 42082304 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:38.712799+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1385008 data_alloc: 184549376 data_used: 14196736
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128679936 unmapped: 42082304 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:39.712938+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 132 heartbeat osd_stat(store_statfs(0x1b7b4a000/0x0/0x1bfc00000, data 0x3a83a02/0x3b44000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 132 ms_handle_reset con 0x562053e4f800 session 0x562053b8cb40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b48000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128688128 unmapped: 42074112 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:40.713101+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128688128 unmapped: 42074112 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:41.713260+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 14.050313950s of 14.301703453s, submitted: 87
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128704512 unmapped: 42057728 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:42.713431+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b46000/0x0/0x1bfc00000, data 0x3a85b18/0x3b47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128704512 unmapped: 42057728 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:43.713678+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1389182 data_alloc: 184549376 data_used: 14209024
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b46000/0x0/0x1bfc00000, data 0x3a85b18/0x3b47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128704512 unmapped: 42057728 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:44.713842+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128704512 unmapped: 42057728 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:45.713979+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.2 total, 600.0 interval
                                                          Cumulative writes: 9909 writes, 40K keys, 9909 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                                          Cumulative WAL: 9909 writes, 2429 syncs, 4.08 writes per sync, written: 0.03 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 4031 writes, 14K keys, 4031 commit groups, 1.0 writes per commit group, ingest: 14.71 MB, 0.02 MB/s
                                                          Interval WAL: 4031 writes, 1640 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b46000/0x0/0x1bfc00000, data 0x3a85b18/0x3b47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128704512 unmapped: 42057728 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:46.714101+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128704512 unmapped: 42057728 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:47.714201+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b46000/0x0/0x1bfc00000, data 0x3a85b18/0x3b47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128704512 unmapped: 42057728 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:48.714318+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1390656 data_alloc: 184549376 data_used: 14209024
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128704512 unmapped: 42057728 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:49.714436+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128704512 unmapped: 42057728 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:50.714606+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 ms_handle_reset con 0x562051d30000 session 0x56205716a3c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 ms_handle_reset con 0x562053b6ac00 session 0x56205716a960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128745472 unmapped: 42016768 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:51.714753+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b45000/0x0/0x1bfc00000, data 0x3a85b8b/0x3b49000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128745472 unmapped: 42016768 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:52.714841+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.751968384s of 11.878515244s, submitted: 46
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128745472 unmapped: 42016768 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:53.714995+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 ms_handle_reset con 0x562053d36800 session 0x56205716ab40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1391955 data_alloc: 184549376 data_used: 14209024
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128761856 unmapped: 42000384 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:54.715153+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b47000/0x0/0x1bfc00000, data 0x3a85b18/0x3b47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128761856 unmapped: 42000384 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:55.715297+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128770048 unmapped: 41992192 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:56.715399+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052820400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 ms_handle_reset con 0x562052820400 session 0x562055c472c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:57.715515+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128770048 unmapped: 41992192 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b47000/0x0/0x1bfc00000, data 0x3a85b18/0x3b47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 ms_handle_reset con 0x562052821c00 session 0x56205716b4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 ms_handle_reset con 0x562051d30000 session 0x56205716b860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:58.715637+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128745472 unmapped: 42016768 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1392860 data_alloc: 184549376 data_used: 14209024
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:59.715732+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128745472 unmapped: 42016768 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b47000/0x0/0x1bfc00000, data 0x3a85b18/0x3b47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:00.715826+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128745472 unmapped: 42016768 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b47000/0x0/0x1bfc00000, data 0x3a85b18/0x3b47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b47000/0x0/0x1bfc00000, data 0x3a85b18/0x3b47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:01.715970+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128745472 unmapped: 42016768 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:02.716141+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128745472 unmapped: 42016768 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b47000/0x0/0x1bfc00000, data 0x3a85b18/0x3b47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:03.716325+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128745472 unmapped: 42016768 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052820400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.046869278s of 10.269983292s, submitted: 51
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 ms_handle_reset con 0x562052820400 session 0x562054eeaf00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1394688 data_alloc: 184549376 data_used: 14209024
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:04.716492+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128753664 unmapped: 42008576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 ms_handle_reset con 0x562053b6ac00 session 0x562054eea5a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 ms_handle_reset con 0x562053d36800 session 0x562052bae1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:05.716625+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128753664 unmapped: 42008576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:06.716785+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128753664 unmapped: 42008576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b46000/0x0/0x1bfc00000, data 0x3a85b18/0x3b47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:07.716932+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128753664 unmapped: 42008576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:08.717083+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128753664 unmapped: 42008576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1393920 data_alloc: 184549376 data_used: 14209024
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:09.717220+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128753664 unmapped: 42008576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b46000/0x0/0x1bfc00000, data 0x3a85b18/0x3b47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:10.717368+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128753664 unmapped: 42008576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:11.717532+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128753664 unmapped: 42008576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b46000/0x0/0x1bfc00000, data 0x3a85b18/0x3b47000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:12.717713+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128753664 unmapped: 42008576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:13.717981+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128753664 unmapped: 42008576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1393920 data_alloc: 184549376 data_used: 14209024
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:14.718830+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128753664 unmapped: 42008576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054ea6c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.955700874s of 10.983824730s, submitted: 7
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 44
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:15.719548+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:16.720542+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:17.720674+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b46000/0x0/0x1bfc00000, data 0x3a85bb3/0x3b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:18.721418+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b46000/0x0/0x1bfc00000, data 0x3a85bb3/0x3b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1394628 data_alloc: 184549376 data_used: 14209024
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:19.721610+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:20.721756+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 45
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:21.722090+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:22.722359+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b46000/0x0/0x1bfc00000, data 0x3a85be2/0x3b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:23.722679+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1395706 data_alloc: 184549376 data_used: 14209024
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:24.722958+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:25.723242+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:26.723419+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:27.723596+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:28.723849+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b46000/0x0/0x1bfc00000, data 0x3a85be2/0x3b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1395706 data_alloc: 184549376 data_used: 14209024
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b46000/0x0/0x1bfc00000, data 0x3a85be2/0x3b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:29.724034+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:30.724240+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128712704 unmapped: 42049536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x5620553b9400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 16.212928772s of 16.253484726s, submitted: 9
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b7b46000/0x0/0x1bfc00000, data 0x3a85be2/0x3b48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:31.724426+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 128753664 unmapped: 42008576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 134 ms_handle_reset con 0x5620553b9400 session 0x56205716a780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:32.724616+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 129810432 unmapped: 40951808 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:33.724810+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 135 ms_handle_reset con 0x562051d30000 session 0x562053990960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 129900544 unmapped: 40861696 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052820400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 135 handle_osd_map epochs [134,135], i have 135, src has [1,135]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1422016 data_alloc: 184549376 data_used: 14225408
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:34.724978+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 136 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 136 heartbeat osd_stat(store_statfs(0x1b7b38000/0x0/0x1bfc00000, data 0x3a8bb7a/0x3b55000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 136 ms_handle_reset con 0x562052820400 session 0x562052ba8960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 129949696 unmapped: 40812544 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 136 heartbeat osd_stat(store_statfs(0x1b7b32000/0x0/0x1bfc00000, data 0x3a8ddbe/0x3b5a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [0,0,0,1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 136 ms_handle_reset con 0x562053b6ac00 session 0x562053b27c20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:35.725231+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 136 heartbeat osd_stat(store_statfs(0x1b7b32000/0x0/0x1bfc00000, data 0x3a8ddbe/0x3b5a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 129990656 unmapped: 40771584 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:36.725374+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 130023424 unmapped: 40738816 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ae1000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 137 ms_handle_reset con 0x562053d36800 session 0x562053c710e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 137 ms_handle_reset con 0x562053ae1000 session 0x562053bfed20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052820400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 137 ms_handle_reset con 0x562051d30000 session 0x562054692f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:37.725571+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131121152 unmapped: 39641088 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 137 ms_handle_reset con 0x562053b6ac00 session 0x562054693680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 138 ms_handle_reset con 0x562052820400 session 0x562053b350e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:38.725732+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131170304 unmapped: 39591936 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 139 ms_handle_reset con 0x562053d36800 session 0x562054693a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 139 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1450646 data_alloc: 184549376 data_used: 14233600
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 139 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:39.725899+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 139 ms_handle_reset con 0x562051b49400 session 0x562054693e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 139 heartbeat osd_stat(store_statfs(0x1b7b24000/0x0/0x1bfc00000, data 0x3a92e0e/0x3b68000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131194880 unmapped: 39567360 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:40.726017+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131194880 unmapped: 39567360 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.603715897s of 10.311456680s, submitted: 174
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 140 heartbeat osd_stat(store_statfs(0x1b7b23000/0x0/0x1bfc00000, data 0x3a9502a/0x3b6a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:41.726193+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 140 ms_handle_reset con 0x562051b49400 session 0x5620545061e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131260416 unmapped: 39501824 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:42.726306+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131276800 unmapped: 39485440 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 141 handle_osd_map epochs [141,141], i have 141, src has [1,141]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:43.726511+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 141 handle_osd_map epochs [141,141], i have 141, src has [1,141]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 141 ms_handle_reset con 0x562051d30000 session 0x562054506780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 141 heartbeat osd_stat(store_statfs(0x1b7b21000/0x0/0x1bfc00000, data 0x3a9729a/0x3b6d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131301376 unmapped: 39460864 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052820400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 141 heartbeat osd_stat(store_statfs(0x1b771c000/0x0/0x1bfc00000, data 0x3a9956b/0x3b71000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1458558 data_alloc: 184549376 data_used: 14245888
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 142 ms_handle_reset con 0x562053b6ac00 session 0x5620569bd0e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:44.726612+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 142 ms_handle_reset con 0x562052820400 session 0x562055b15680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131342336 unmapped: 39419904 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d36800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 143 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 143 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:45.726761+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 143 ms_handle_reset con 0x562053d36800 session 0x562053c734a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131391488 unmapped: 39370752 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 144 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 144 handle_osd_map epochs [143,144], i have 144, src has [1,144]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 144 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 144 ms_handle_reset con 0x562051b49400 session 0x562053b341e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:46.726892+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131424256 unmapped: 39337984 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 145 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:47.727008+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 145 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 145 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 145 ms_handle_reset con 0x562051d30000 session 0x562052c130e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 145 heartbeat osd_stat(store_statfs(0x1b7717000/0x0/0x1bfc00000, data 0x3a9e42c/0x3b73000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131489792 unmapped: 39272448 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:48.727151+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131489792 unmapped: 39272448 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1462472 data_alloc: 184549376 data_used: 14258176
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:49.727315+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131489792 unmapped: 39272448 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:50.727474+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131489792 unmapped: 39272448 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 146 heartbeat osd_stat(store_statfs(0x1b7713000/0x0/0x1bfc00000, data 0x3a9fb9b/0x3b76000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052820400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.066790581s of 10.054980278s, submitted: 352
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:51.727672+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131489792 unmapped: 39272448 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 147 ms_handle_reset con 0x562052820400 session 0x5620570edc20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:52.727794+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131506176 unmapped: 39256064 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:53.727984+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 148 ms_handle_reset con 0x562053b6ac00 session 0x562052bafc20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131514368 unmapped: 39247872 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 148 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1470996 data_alloc: 184549376 data_used: 14274560
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054236c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 149 handle_osd_map epochs [148,149], i have 149, src has [1,149]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:54.728202+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 149 ms_handle_reset con 0x562054236c00 session 0x5620570ede00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131538944 unmapped: 39223296 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 149 heartbeat osd_stat(store_statfs(0x1b7708000/0x0/0x1bfc00000, data 0x3aa8651/0x3b85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 150 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 150 ms_handle_reset con 0x562051b49400 session 0x562058590b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:55.728398+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131538944 unmapped: 39223296 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 150 ms_handle_reset con 0x562051d30000 session 0x562055c52960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:56.728518+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052820400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 150 ms_handle_reset con 0x562052820400 session 0x5620519630e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131571712 unmapped: 39190528 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 150 ms_handle_reset con 0x562053b6ac00 session 0x5620519621e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:57.728668+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131579904 unmapped: 39182336 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 150 heartbeat osd_stat(store_statfs(0x1b7707000/0x0/0x1bfc00000, data 0x3aaa929/0x3b87000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:58.728805+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131579904 unmapped: 39182336 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1476462 data_alloc: 184549376 data_used: 14290944
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:59.728924+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131579904 unmapped: 39182336 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 150 heartbeat osd_stat(store_statfs(0x1b7705000/0x0/0x1bfc00000, data 0x3aaa99b/0x3b89000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:00.729060+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131579904 unmapped: 39182336 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 151 handle_osd_map epochs [151,151], i have 151, src has [1,151]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:01.729186+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 151 heartbeat osd_stat(store_statfs(0x1b7706000/0x0/0x1bfc00000, data 0x3aaa9ca/0x3b88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131604480 unmapped: 39157760 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.032369614s of 10.548787117s, submitted: 193
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 151 ms_handle_reset con 0x562053ce7000 session 0x56205401e1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:02.729351+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131645440 unmapped: 39116800 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:03.729517+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 152 handle_osd_map epochs [152,152], i have 152, src has [1,152]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131661824 unmapped: 39100416 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b76fc000/0x0/0x1bfc00000, data 0x3aaefbf/0x3b91000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1491106 data_alloc: 184549376 data_used: 14303232
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:04.729726+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131661824 unmapped: 39100416 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 152 ms_handle_reset con 0x562051d30000 session 0x5620569bd2c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 46
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:05.729883+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131866624 unmapped: 38895616 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:06.730049+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131883008 unmapped: 38879232 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:07.730256+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131883008 unmapped: 38879232 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b76fe000/0x0/0x1bfc00000, data 0x3aaf135/0x3b90000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:08.730519+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131883008 unmapped: 38879232 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1489902 data_alloc: 184549376 data_used: 14303232
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:09.730655+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131891200 unmapped: 38871040 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:10.730802+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131891200 unmapped: 38871040 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:11.730982+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 153 handle_osd_map epochs [153,153], i have 153, src has [1,153]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.700977325s of 10.004477501s, submitted: 110
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 153 heartbeat osd_stat(store_statfs(0x1b76fb000/0x0/0x1bfc00000, data 0x3ab1245/0x3b92000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:12.731136+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:13.731387+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1492538 data_alloc: 184549376 data_used: 14315520
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:14.731561+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:15.731747+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:16.731959+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 153 heartbeat osd_stat(store_statfs(0x1b76fd000/0x0/0x1bfc00000, data 0x3ab1262/0x3b91000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:17.732160+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:18.732361+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 153 heartbeat osd_stat(store_statfs(0x1b76fd000/0x0/0x1bfc00000, data 0x3ab1262/0x3b91000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1492538 data_alloc: 184549376 data_used: 14315520
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:19.732579+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:20.733925+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:21.734146+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.959606171s of 10.001149178s, submitted: 9
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:22.734291+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:23.734521+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 153 heartbeat osd_stat(store_statfs(0x1b76fd000/0x0/0x1bfc00000, data 0x3ab1262/0x3b91000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1492138 data_alloc: 184549376 data_used: 14315520
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:24.734714+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:25.734934+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:26.735067+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 153 heartbeat osd_stat(store_statfs(0x1b76fd000/0x0/0x1bfc00000, data 0x3ab1262/0x3b91000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:27.735216+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:28.735378+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1492490 data_alloc: 184549376 data_used: 14315520
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:29.735551+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:30.735895+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 153 heartbeat osd_stat(store_statfs(0x1b76fd000/0x0/0x1bfc00000, data 0x3ab12c7/0x3b91000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:31.736092+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.986072540s of 10.000637054s, submitted: 3
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131915776 unmapped: 38846464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:32.736290+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131923968 unmapped: 38838272 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:33.736498+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131923968 unmapped: 38838272 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 153 heartbeat osd_stat(store_statfs(0x1b76fd000/0x0/0x1bfc00000, data 0x3ab1391/0x3b91000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:34.736648+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1492506 data_alloc: 184549376 data_used: 14315520
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131923968 unmapped: 38838272 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:35.736795+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131923968 unmapped: 38838272 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 153 heartbeat osd_stat(store_statfs(0x1b76fd000/0x0/0x1bfc00000, data 0x3ab1391/0x3b91000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:36.736937+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131932160 unmapped: 38830080 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:37.737104+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 153 heartbeat osd_stat(store_statfs(0x1b76fd000/0x0/0x1bfc00000, data 0x3ab145b/0x3b91000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 153 handle_osd_map epochs [154,154], i have 154, src has [1,154]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131940352 unmapped: 38821888 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 154 heartbeat osd_stat(store_statfs(0x1b76f9000/0x0/0x1bfc00000, data 0x3ab3639/0x3b94000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:38.737274+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131940352 unmapped: 38821888 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:39.737509+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1495552 data_alloc: 184549376 data_used: 14323712
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131940352 unmapped: 38821888 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 154 heartbeat osd_stat(store_statfs(0x1b76f9000/0x0/0x1bfc00000, data 0x3ab3639/0x3b94000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052820400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:40.737631+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 154 ms_handle_reset con 0x562052820400 session 0x5620569bd680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131940352 unmapped: 38821888 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:41.737811+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131940352 unmapped: 38821888 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:42.737979+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131948544 unmapped: 38813696 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.458297729s of 11.619625092s, submitted: 45
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 154 ms_handle_reset con 0x562053b6ac00 session 0x5620569bda40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:43.738153+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 154 heartbeat osd_stat(store_statfs(0x1b76f7000/0x0/0x1bfc00000, data 0x3ab36bc/0x3b97000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131948544 unmapped: 38813696 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:44.738303+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain podman[331605]: 2025-12-02 10:21:05.19586923 +0000 UTC m=+0.181814292 container health_status c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1502690 data_alloc: 184549376 data_used: 14323712
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131948544 unmapped: 38813696 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:45.738523+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131948544 unmapped: 38813696 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x562053ce7000 session 0x5620569bdc20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:46.738646+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d1e000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d1fc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x562053d1e000 session 0x5620569bde00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x562053d1fc00 session 0x562053cba780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131981312 unmapped: 38780928 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:47.738815+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131981312 unmapped: 38780928 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f4000/0x0/0x1bfc00000, data 0x3ab57a1/0x3b99000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:48.738985+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131981312 unmapped: 38780928 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:49.739148+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1506521 data_alloc: 184549376 data_used: 14336000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131981312 unmapped: 38780928 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d30000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x562051d30000 session 0x5620575663c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:50.739246+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131989504 unmapped: 38772736 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052820400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x562052820400 session 0x562057566960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6ac00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:51.739365+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f6000/0x0/0x1bfc00000, data 0x3ab573f/0x3b98000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x562053b6ac00 session 0x562057566f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132030464 unmapped: 38731776 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f7000/0x0/0x1bfc00000, data 0x3ab572f/0x3b97000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:52.739545+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f7000/0x0/0x1bfc00000, data 0x3ab572f/0x3b97000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132030464 unmapped: 38731776 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:53.739760+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132030464 unmapped: 38731776 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:54.739940+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1503086 data_alloc: 184549376 data_used: 14336000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132030464 unmapped: 38731776 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:55.740071+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132030464 unmapped: 38731776 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:56.740204+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 13.003887177s of 13.375108719s, submitted: 95
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132030464 unmapped: 38731776 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:57.740358+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f7000/0x0/0x1bfc00000, data 0x3ab572f/0x3b97000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132030464 unmapped: 38731776 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:58.740535+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132030464 unmapped: 38731776 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:59.740677+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1503086 data_alloc: 184549376 data_used: 14336000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132030464 unmapped: 38731776 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f7000/0x0/0x1bfc00000, data 0x3ab572f/0x3b97000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:00.740838+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132030464 unmapped: 38731776 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:01.740976+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132030464 unmapped: 38731776 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f5000/0x0/0x1bfc00000, data 0x3ab57f8/0x3b98000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:02.741124+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132030464 unmapped: 38731776 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x562053ce7000 session 0x562051899680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:03.741297+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x562053ce7000 session 0x562053c59e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132227072 unmapped: 38535168 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:04.741527+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1498934 data_alloc: 184549376 data_used: 15450112
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132227072 unmapped: 38535168 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:05.741684+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132227072 unmapped: 38535168 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:06.741845+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f6000/0x0/0x1bfc00000, data 0x3ab57f6/0x3b98000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.960068703s of 10.004857063s, submitted: 8
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132227072 unmapped: 38535168 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f6000/0x0/0x1bfc00000, data 0x3ab57f6/0x3b98000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:07.741984+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132235264 unmapped: 38526976 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:08.742151+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f4000/0x0/0x1bfc00000, data 0x3ab58be/0x3b99000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 132235264 unmapped: 38526976 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 47
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:09.742308+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1500012 data_alloc: 184549376 data_used: 15450112
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x5620541c4c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x5620541c4c00 session 0x562052bae960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 131940352 unmapped: 38821888 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:10.742421+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x562053d37000 session 0x562051899680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134496256 unmapped: 36265984 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:11.742548+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134496256 unmapped: 36265984 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x562054033c00 session 0x562057566960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:12.742702+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f6000/0x0/0x1bfc00000, data 0x3ab585a/0x3b98000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134389760 unmapped: 36372480 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:13.742904+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134389760 unmapped: 36372480 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:14.743116+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1508906 data_alloc: 184549376 data_used: 15450112
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134389760 unmapped: 36372480 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:15.743282+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134389760 unmapped: 36372480 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f5000/0x0/0x1bfc00000, data 0x3ab58c2/0x3b98000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:16.743551+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.705072403s of 10.001235962s, submitted: 57
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134389760 unmapped: 36372480 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:17.743704+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134389760 unmapped: 36372480 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:18.743897+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134389760 unmapped: 36372480 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:19.744080+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1508426 data_alloc: 184549376 data_used: 15450112
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134389760 unmapped: 36372480 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:20.744253+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134389760 unmapped: 36372480 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:21.744940+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 135438336 unmapped: 35323904 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f4000/0x0/0x1bfc00000, data 0x3ab5988/0x3b99000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:22.745521+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 135438336 unmapped: 35323904 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:23.745926+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134397952 unmapped: 36364288 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:24.746111+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1509328 data_alloc: 184549376 data_used: 15450112
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134397952 unmapped: 36364288 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:25.746266+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134397952 unmapped: 36364288 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:26.746494+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.899518967s of 10.021275520s, submitted: 12
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134397952 unmapped: 36364288 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:27.746643+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f6000/0x0/0x1bfc00000, data 0x3ab5924/0x3b98000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134397952 unmapped: 36364288 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:28.746800+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134397952 unmapped: 36364288 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:29.747000+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1508830 data_alloc: 184549376 data_used: 15450112
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134397952 unmapped: 36364288 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:30.747315+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134397952 unmapped: 36364288 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:31.747614+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134397952 unmapped: 36364288 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:32.747896+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134397952 unmapped: 36364288 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b76f7000/0x0/0x1bfc00000, data 0x3ab598d/0x3b97000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:33.748149+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054045400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x562054045400 session 0x562053cba780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134397952 unmapped: 36364288 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:34.748316+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1515794 data_alloc: 184549376 data_used: 15450112
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134430720 unmapped: 36331520 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:35.748535+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054045400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 134455296 unmapped: 36306944 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain podman[331598]: 2025-12-02 10:21:05.198711227 +0000 UTC m=+0.192735556 container exec_died 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x562054045400 session 0x5620569bd2c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:36.748764+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 ms_handle_reset con 0x562051b49400 session 0x562053e64d20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.869821548s of 10.006925583s, submitted: 236
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 135159808 unmapped: 35602432 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:37.748900+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 135102464 unmapped: 35659776 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:38.749046+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b6ef2000/0x0/0x1bfc00000, data 0x42b5c29/0x439c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 135143424 unmapped: 35618816 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 48
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:39.749189+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1739466 data_alloc: 184549376 data_used: 15450112
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143564800 unmapped: 27197440 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:40.749388+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1b2552000/0x0/0x1bfc00000, data 0x7ab5c2b/0x7b9c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143622144 unmapped: 27140096 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:41.749545+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143745024 unmapped: 27017216 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:42.749660+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 135561216 unmapped: 35201024 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:43.749849+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 135634944 unmapped: 35127296 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:44.750071+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1aed52000/0x0/0x1bfc00000, data 0xb2b5c97/0xb39c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2399288 data_alloc: 184549376 data_used: 15450112
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145088512 unmapped: 25673728 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:45.750255+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 136634368 unmapped: 34127872 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:46.750406+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.998002052s of 10.033533096s, submitted: 79
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145227776 unmapped: 25534464 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:47.750532+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 137011200 unmapped: 33751040 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:48.750695+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1aad52000/0x0/0x1bfc00000, data 0xf2b5c9d/0xf39c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 137076736 unmapped: 33685504 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:49.750862+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2837126 data_alloc: 184549376 data_used: 15450112
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 137125888 unmapped: 33636352 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:50.751044+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1a8d53000/0x0/0x1bfc00000, data 0x112b5c3b/0x1139b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145727488 unmapped: 25034752 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:51.751218+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 137453568 unmapped: 33308672 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:52.751362+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 137592832 unmapped: 33169408 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:53.751518+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 137699328 unmapped: 33062912 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1a5553000/0x0/0x1bfc00000, data 0x14ab5c3b/0x14b9b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:54.751642+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 3515486 data_alloc: 184549376 data_used: 15450112
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 137863168 unmapped: 32899072 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:55.751772+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1a3d53000/0x0/0x1bfc00000, data 0x162b5ca0/0x1639b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 146432000 unmapped: 24330240 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:56.751928+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.021053314s of 10.307092667s, submitted: 62
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 138133504 unmapped: 32628736 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:57.752051+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 146530304 unmapped: 24231936 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:58.752211+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1a1553000/0x0/0x1bfc00000, data 0x18ab5cb0/0x18b9b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 146702336 unmapped: 24059904 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:59.752437+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 3935734 data_alloc: 184549376 data_used: 15450112
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 138485760 unmapped: 32276480 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:00.759558+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 heartbeat osd_stat(store_statfs(0x1a0553000/0x0/0x1bfc00000, data 0x19ab5cb0/0x19b9b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 138608640 unmapped: 32153600 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:01.759796+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 138715136 unmapped: 32047104 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:02.759958+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 156 handle_osd_map epochs [156,156], i have 156, src has [1,156]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 156 handle_osd_map epochs [156,156], i have 156, src has [1,156]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 154648576 unmapped: 16113664 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:03.760152+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147415040 unmapped: 23347200 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:04.760296+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 4488346 data_alloc: 184549376 data_used: 15458304
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 154902528 unmapped: 15859712 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:05.760576+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 148750336 unmapped: 22011904 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:06.760745+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 156 heartbeat osd_stat(store_statfs(0x19a54e000/0x0/0x1bfc00000, data 0x1fab80fb/0x1fba0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.772010803s of 10.004743576s, submitted: 88
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 151052288 unmapped: 19709952 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:07.760889+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 156 heartbeat osd_stat(store_statfs(0x197d4e000/0x0/0x1bfc00000, data 0x222b80fb/0x223a0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 151298048 unmapped: 19464192 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:08.761060+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143056896 unmapped: 27705344 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:09.761238+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 5090910 data_alloc: 184549376 data_used: 15458304
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143056896 unmapped: 27705344 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:10.761496+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143204352 unmapped: 27557888 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:11.761687+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 157 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143368192 unmapped: 27394048 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 158 heartbeat osd_stat(store_statfs(0x19454b000/0x0/0x1bfc00000, data 0x25aba285/0x25ba2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 158 handle_osd_map epochs [158,158], i have 158, src has [1,158]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:12.761807+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143523840 unmapped: 27238400 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:13.762003+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 151928832 unmapped: 18833408 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:14.762123+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 5536230 data_alloc: 184549376 data_used: 15470592
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 152985600 unmapped: 17776640 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:15.762284+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 158 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 158 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143826944 unmapped: 26935296 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:16.762398+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.879975319s of 10.210795403s, submitted: 122
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 159 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143917056 unmapped: 26845184 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:17.762647+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 160 heartbeat osd_stat(store_statfs(0x190542000/0x0/0x1bfc00000, data 0x29abe6bc/0x29baa000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143917056 unmapped: 26845184 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 161 handle_osd_map epochs [161,161], i have 161, src has [1,161]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:18.762807+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 161 ms_handle_reset con 0x562053ce7000 session 0x562055c52960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143941632 unmapped: 26820608 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:19.762986+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 5712814 data_alloc: 184549376 data_used: 15491072
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143949824 unmapped: 26812416 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 162 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 162 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:20.763128+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 162 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 162 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 162 ms_handle_reset con 0x562053d37000 session 0x562052bae1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 142032896 unmapped: 28729344 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:21.763301+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 163 ms_handle_reset con 0x562054033c00 session 0x562054506960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 163 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 163 ms_handle_reset con 0x562051b49400 session 0x56205750b680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 163 heartbeat osd_stat(store_statfs(0x1b6136000/0x0/0x1bfc00000, data 0x3ac6fbd/0x3bb7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143106048 unmapped: 27656192 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:22.763469+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 163 ms_handle_reset con 0x562053ce7000 session 0x562054692000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143106048 unmapped: 27656192 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:23.763647+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 163 ms_handle_reset con 0x562053d37000 session 0x562054692b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054045400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 163 ms_handle_reset con 0x562054045400 session 0x56205858b4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143122432 unmapped: 27639808 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:24.763818+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1646729 data_alloc: 184549376 data_used: 15491072
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x5620541c4c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 163 ms_handle_reset con 0x5620541c4c00 session 0x5620546934a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143106048 unmapped: 27656192 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:25.763964+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 164 handle_osd_map epochs [164,164], i have 164, src has [1,164]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143114240 unmapped: 27648000 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:26.764443+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 164 ms_handle_reset con 0x562051b49400 session 0x562051963680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 164 handle_osd_map epochs [164,164], i have 164, src has [1,164]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 164 ms_handle_reset con 0x562053ce7000 session 0x562052bae5a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143024128 unmapped: 27738112 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:27.764684+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.162310600s of 10.151850700s, submitted: 328
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 165 heartbeat osd_stat(store_statfs(0x1b6133000/0x0/0x1bfc00000, data 0x3ac9166/0x3bba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 165 ms_handle_reset con 0x562053d37000 session 0x56205401ef00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 165 heartbeat osd_stat(store_statfs(0x1b6131000/0x0/0x1bfc00000, data 0x3acb3ce/0x3bbd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143024128 unmapped: 27738112 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:28.764901+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054045400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 165 ms_handle_reset con 0x562054045400 session 0x562054b3ab40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143024128 unmapped: 27738112 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054ea6800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:29.765072+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 165 ms_handle_reset con 0x562054ea6800 session 0x56205750a3c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 165 heartbeat osd_stat(store_statfs(0x1b6130000/0x0/0x1bfc00000, data 0x3acb3ce/0x3bbd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1651485 data_alloc: 184549376 data_used: 15503360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143056896 unmapped: 27705344 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:30.765314+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143065088 unmapped: 27697152 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:31.765500+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 165 ms_handle_reset con 0x562051b49400 session 0x562055b14d20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143081472 unmapped: 27680768 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:32.765643+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 165 heartbeat osd_stat(store_statfs(0x1b6130000/0x0/0x1bfc00000, data 0x3acb4fb/0x3bbe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 165 heartbeat osd_stat(store_statfs(0x1b6130000/0x0/0x1bfc00000, data 0x3acb4fb/0x3bbe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143081472 unmapped: 27680768 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:33.765873+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 165 heartbeat osd_stat(store_statfs(0x1b6130000/0x0/0x1bfc00000, data 0x3acb4f9/0x3bbe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143089664 unmapped: 27672576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:34.766065+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1656080 data_alloc: 184549376 data_used: 15503360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143089664 unmapped: 27672576 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:35.766216+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 166 handle_osd_map epochs [166,166], i have 166, src has [1,166]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 166 ms_handle_reset con 0x562053ce7000 session 0x562052c134a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143106048 unmapped: 27656192 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:36.766342+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 166 heartbeat osd_stat(store_statfs(0x1b612c000/0x0/0x1bfc00000, data 0x3acd62e/0x3bc1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 166 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:37.766607+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143114240 unmapped: 27648000 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.736468315s of 10.248871803s, submitted: 136
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:38.766831+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143147008 unmapped: 27615232 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 167 heartbeat osd_stat(store_statfs(0x1b6126000/0x0/0x1bfc00000, data 0x3acf970/0x3bc6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 49
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:39.766991+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143040512 unmapped: 27721728 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1672450 data_alloc: 184549376 data_used: 15515648
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054045400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 169 handle_osd_map epochs [168,169], i have 169, src has [1,169]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:40.767133+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143048704 unmapped: 27713536 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 169 ms_handle_reset con 0x562053d37000 session 0x562054692d20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 169 ms_handle_reset con 0x562054045400 session 0x562053c734a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:41.767305+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143065088 unmapped: 27697152 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 170 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x5620541c5800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 170 ms_handle_reset con 0x5620541c5800 session 0x56205703d2c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 170 ms_handle_reset con 0x562051b49400 session 0x56205401ed20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:42.767480+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143081472 unmapped: 27680768 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 171 ms_handle_reset con 0x562053ce7000 session 0x56205858b0e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:43.767688+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143106048 unmapped: 27656192 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:44.767817+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143114240 unmapped: 27648000 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054045400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 172 ms_handle_reset con 0x562054045400 session 0x562054507860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 172 ms_handle_reset con 0x562053d37000 session 0x562054692b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 172 heartbeat osd_stat(store_statfs(0x1b70f1000/0x0/0x1bfc00000, data 0x3adacd3/0x3bda000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x56205134d400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 172 ms_handle_reset con 0x56205134d400 session 0x562055b14000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1688762 data_alloc: 184549376 data_used: 15519744
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 172 heartbeat osd_stat(store_statfs(0x1b70f1000/0x0/0x1bfc00000, data 0x3adacd3/0x3bda000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:45.768014+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 142819328 unmapped: 27942912 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 50
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 172 handle_osd_map epochs [172,173], i have 172, src has [1,173]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:46.768412+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 142983168 unmapped: 27779072 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 173 handle_osd_map epochs [173,173], i have 173, src has [1,173]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:47.768557+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 142983168 unmapped: 27779072 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.526350021s of 10.049694061s, submitted: 198
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 174 handle_osd_map epochs [173,174], i have 174, src has [1,174]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 174 ms_handle_reset con 0x562051b49400 session 0x562055b152c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 174 heartbeat osd_stat(store_statfs(0x1b70f1000/0x0/0x1bfc00000, data 0x3adc967/0x3bdc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:48.768743+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 142991360 unmapped: 27770880 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:49.768968+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 142991360 unmapped: 27770880 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1690892 data_alloc: 184549376 data_used: 15519744
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:50.769143+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 142991360 unmapped: 27770880 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 174 heartbeat osd_stat(store_statfs(0x1b70ed000/0x0/0x1bfc00000, data 0x3adebd7/0x3bdf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:51.769320+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143007744 unmapped: 27754496 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:52.769529+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143007744 unmapped: 27754496 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 175 heartbeat osd_stat(store_statfs(0x1b70ea000/0x0/0x1bfc00000, data 0x3ae0ce9/0x3be3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:53.769771+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143007744 unmapped: 27754496 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 175 heartbeat osd_stat(store_statfs(0x1b70ea000/0x0/0x1bfc00000, data 0x3ae0ce9/0x3be3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:54.770021+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 175 heartbeat osd_stat(store_statfs(0x1b70ea000/0x0/0x1bfc00000, data 0x3ae0ce9/0x3be3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143007744 unmapped: 27754496 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1694398 data_alloc: 184549376 data_used: 15523840
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:55.770206+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143007744 unmapped: 27754496 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 175 handle_osd_map epochs [175,176], i have 175, src has [1,176]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:56.770432+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143024128 unmapped: 27738112 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 176 heartbeat osd_stat(store_statfs(0x1b70e5000/0x0/0x1bfc00000, data 0x3ae2f0d/0x3be7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:57.770670+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143024128 unmapped: 27738112 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:58.770876+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143024128 unmapped: 27738112 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:59.770974+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143024128 unmapped: 27738112 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 176 heartbeat osd_stat(store_statfs(0x1b70e5000/0x0/0x1bfc00000, data 0x3ae2f0d/0x3be7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1698572 data_alloc: 184549376 data_used: 15536128
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:00.771726+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143024128 unmapped: 27738112 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 13.025183678s of 13.167127609s, submitted: 71
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:01.771821+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143024128 unmapped: 27738112 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:02.772000+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143024128 unmapped: 27738112 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:03.772214+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 176 ms_handle_reset con 0x562053ce7000 session 0x562052bb03c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143024128 unmapped: 27738112 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 176 ms_handle_reset con 0x562053d37000 session 0x56205703e3c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 176 heartbeat osd_stat(store_statfs(0x1b70e7000/0x0/0x1bfc00000, data 0x3ae2f0b/0x3be7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:04.772378+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143589376 unmapped: 27172864 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054045400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 176 ms_handle_reset con 0x562054045400 session 0x5620569bc780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1709470 data_alloc: 184549376 data_used: 15536128
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:05.772547+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143589376 unmapped: 27172864 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:06.772740+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 143589376 unmapped: 27172864 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6b800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 176 ms_handle_reset con 0x562053b6b800 session 0x56205554f0e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:07.772859+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 176 ms_handle_reset con 0x562051b49400 session 0x562055c47860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6b800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 144695296 unmapped: 26066944 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 176 ms_handle_reset con 0x562053b6b800 session 0x562055c52000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 176 ms_handle_reset con 0x562053ce7000 session 0x562055c52b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:08.773029+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 144605184 unmapped: 26157056 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:09.773184+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 144605184 unmapped: 26157056 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 176 heartbeat osd_stat(store_statfs(0x1b63a0000/0x0/0x1bfc00000, data 0x4827fe0/0x492e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain podman[331600]: 2025-12-02 10:21:05.21872275 +0000 UTC m=+0.205157037 container exec_died a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1816265 data_alloc: 184549376 data_used: 15470592
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:10.773321+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 144556032 unmapped: 26206208 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 176 handle_osd_map epochs [176,177], i have 176, src has [1,177]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.999949455s of 10.430896759s, submitted: 90
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 177 handle_osd_map epochs [177,177], i have 177, src has [1,177]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:11.773506+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 177 ms_handle_reset con 0x562053d37000 session 0x562054eea5a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 144556032 unmapped: 26206208 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 177 handle_osd_map epochs [177,177], i have 177, src has [1,177]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:12.773625+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d31000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 177 ms_handle_reset con 0x562051d31000 session 0x5620570ecb40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 144564224 unmapped: 26198016 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 177 ms_handle_reset con 0x562051b49400 session 0x5620585dfc20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 177 heartbeat osd_stat(store_statfs(0x1b6397000/0x0/0x1bfc00000, data 0x482a33c/0x4935000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain systemd[1]: 225906fd8e8b4f13c74154e38177daa35de73aa5fcbf675276fa0f492093bda1.service: Deactivated successfully.
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:13.773805+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 144596992 unmapped: 26165248 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:14.773955+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 144596992 unmapped: 26165248 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6b800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 177 ms_handle_reset con 0x562053b6b800 session 0x562053c725a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 177 ms_handle_reset con 0x562053ce7000 session 0x562053ba52c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 177 ms_handle_reset con 0x562053d37000 session 0x562054b3a1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1832524 data_alloc: 184549376 data_used: 15478784
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:15.774126+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 178 handle_osd_map epochs [178,178], i have 178, src has [1,178]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6a400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 178 ms_handle_reset con 0x562053b6a400 session 0x5620546923c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145661952 unmapped: 25100288 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:16.774347+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 179 ms_handle_reset con 0x562051b49400 session 0x56205716bc20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145678336 unmapped: 25083904 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:17.774531+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145686528 unmapped: 25075712 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:18.774714+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145686528 unmapped: 25075712 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 179 heartbeat osd_stat(store_statfs(0x1b638d000/0x0/0x1bfc00000, data 0x482eded/0x493f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 179 handle_osd_map epochs [180,180], i have 180, src has [1,180]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6b800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:19.774854+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 180 ms_handle_reset con 0x562053b6b800 session 0x562053cbb2c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145727488 unmapped: 25034752 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1851335 data_alloc: 184549376 data_used: 15491072
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:20.775004+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145735680 unmapped: 25026560 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 180 ms_handle_reset con 0x562053ce7000 session 0x56205703dc20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 180 ms_handle_reset con 0x562053d37000 session 0x562054693c20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:21.775139+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 146825216 unmapped: 23937024 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.849616051s of 10.363825798s, submitted: 122
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:22.775332+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145776640 unmapped: 24985600 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x56205134d800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 180 ms_handle_reset con 0x56205134d800 session 0x562054506000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 180 ms_handle_reset con 0x562051b49400 session 0x562054b3a3c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:23.775474+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145793024 unmapped: 24969216 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 180 handle_osd_map epochs [181,181], i have 181, src has [1,181]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 181 heartbeat osd_stat(store_statfs(0x1b638b000/0x0/0x1bfc00000, data 0x483100b/0x4943000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:24.775595+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6b800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 181 ms_handle_reset con 0x562053b6b800 session 0x5620585a21e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145817600 unmapped: 24944640 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 181 ms_handle_reset con 0x562052821800 session 0x56205703e000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 181 heartbeat osd_stat(store_statfs(0x1b6386000/0x0/0x1bfc00000, data 0x483327d/0x4947000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1856050 data_alloc: 184549376 data_used: 15503360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:25.775741+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145817600 unmapped: 24944640 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:26.775908+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145825792 unmapped: 24936448 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:27.776024+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 181 ms_handle_reset con 0x562053ce7000 session 0x562053b34f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145858560 unmapped: 24903680 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 181 heartbeat osd_stat(store_statfs(0x1b6387000/0x0/0x1bfc00000, data 0x483327d/0x4947000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 181 ms_handle_reset con 0x562053d37000 session 0x562052c174a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 182 handle_osd_map epochs [182,182], i have 182, src has [1,182]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:28.776147+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145899520 unmapped: 24862720 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 182 ms_handle_reset con 0x562052821800 session 0x56205554f860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 182 handle_osd_map epochs [182,182], i have 182, src has [1,182]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:29.776542+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 183 handle_osd_map epochs [183,183], i have 183, src has [1,183]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145956864 unmapped: 24805376 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 183 ms_handle_reset con 0x562051b49400 session 0x56205858be00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6b800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 183 ms_handle_reset con 0x562053b6b800 session 0x56205554e780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1863233 data_alloc: 184549376 data_used: 15507456
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:30.776711+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145956864 unmapped: 24805376 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 183 ms_handle_reset con 0x562053ce7000 session 0x56205858ab40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:31.776841+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.387662888s of 10.000132561s, submitted: 166
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 146006016 unmapped: 24756224 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 184 handle_osd_map epochs [184,184], i have 184, src has [1,184]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:32.776976+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 184 ms_handle_reset con 0x562053d37000 session 0x5620541c7e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 146087936 unmapped: 24674304 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 184 ms_handle_reset con 0x562051b49400 session 0x5620585a3680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 184 heartbeat osd_stat(store_statfs(0x1b6382000/0x0/0x1bfc00000, data 0x48392e1/0x494b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6b800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 184 ms_handle_reset con 0x562052821800 session 0x562053b8da40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 184 ms_handle_reset con 0x562053b6b800 session 0x56205401e5a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:33.777154+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145784832 unmapped: 24977408 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 184 ms_handle_reset con 0x562053ce7000 session 0x56205401fe00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:34.777340+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145809408 unmapped: 24952832 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d1e800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 184 ms_handle_reset con 0x562053d1e800 session 0x562053cba780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1872488 data_alloc: 184549376 data_used: 15585280
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:35.777517+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145833984 unmapped: 24928256 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 186 ms_handle_reset con 0x562052821800 session 0x56205750bc20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:36.777687+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6b800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 186 ms_handle_reset con 0x562053b6b800 session 0x56205401f0e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 186 handle_osd_map epochs [185,186], i have 186, src has [1,186]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145874944 unmapped: 24887296 heap: 170762240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 186 ms_handle_reset con 0x562053ce7000 session 0x562054ee83c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6b000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 186 handle_osd_map epochs [186,187], i have 186, src has [1,187]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 187 handle_osd_map epochs [187,187], i have 187, src has [1,187]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 187 handle_osd_map epochs [187,187], i have 187, src has [1,187]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 187 handle_osd_map epochs [187,187], i have 187, src has [1,187]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:37.777815+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 187 ms_handle_reset con 0x562051b49400 session 0x562053cba5a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 187 ms_handle_reset con 0x562053b6b000 session 0x562054eeb860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 187 ms_handle_reset con 0x562053d37000 session 0x56205401f4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 187 ms_handle_reset con 0x562051b49400 session 0x5620569bc1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145924096 unmapped: 28516352 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 187 heartbeat osd_stat(store_statfs(0x1b557d000/0x0/0x1bfc00000, data 0x5633fad/0x5750000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:38.777987+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145924096 unmapped: 28516352 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6b800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 187 ms_handle_reset con 0x562053b6b800 session 0x562053c59e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:39.778248+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 188 heartbeat osd_stat(store_statfs(0x1b5579000/0x0/0x1bfc00000, data 0x563619f/0x5752000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [0,1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 188 ms_handle_reset con 0x562052821800 session 0x562053c423c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 188 ms_handle_reset con 0x562053ce7000 session 0x562053c58b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145973248 unmapped: 28467200 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 188 ms_handle_reset con 0x562053ce7000 session 0x562054ee9e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1994089 data_alloc: 184549376 data_used: 15609856
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:40.778381+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 188 ms_handle_reset con 0x562051b49400 session 0x562058591e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145997824 unmapped: 28442624 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 189 ms_handle_reset con 0x562052821800 session 0x5620585a3860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:41.778527+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6b800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 189 ms_handle_reset con 0x562053d37000 session 0x562053b8d2c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 189 handle_osd_map epochs [189,189], i have 189, src has [1,189]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.913159370s of 10.003453255s, submitted: 312
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 189 ms_handle_reset con 0x562053b6b800 session 0x562053e652c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145817600 unmapped: 28622848 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:42.778716+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145817600 unmapped: 28622848 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 189 handle_osd_map epochs [190,190], i have 190, src has [1,190]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:43.778987+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 190 ms_handle_reset con 0x562051b49400 session 0x5620585914a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145850368 unmapped: 28590080 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 190 heartbeat osd_stat(store_statfs(0x1b636a000/0x0/0x1bfc00000, data 0x48460ef/0x4963000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:44.779701+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 190 ms_handle_reset con 0x562052821800 session 0x562053b35e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145850368 unmapped: 28590080 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1897993 data_alloc: 184549376 data_used: 15622144
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:45.779850+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145850368 unmapped: 28590080 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 190 ms_handle_reset con 0x562053ce7000 session 0x562055c53c20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 191 handle_osd_map epochs [191,191], i have 191, src has [1,191]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 191 ms_handle_reset con 0x562053d37000 session 0x562057689c20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:46.779987+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 191 heartbeat osd_stat(store_statfs(0x1b6367000/0x0/0x1bfc00000, data 0x48483a1/0x4966000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145891328 unmapped: 28549120 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 191 ms_handle_reset con 0x562051b49000 session 0x562057688b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:47.780137+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 191 ms_handle_reset con 0x562051b49000 session 0x562052ba4960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145866752 unmapped: 28573696 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:48.780276+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145866752 unmapped: 28573696 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:49.780431+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145866752 unmapped: 28573696 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1899113 data_alloc: 184549376 data_used: 15634432
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:50.780636+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 191 heartbeat osd_stat(store_statfs(0x1b5f69000/0x0/0x1bfc00000, data 0x484841a/0x4965000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145866752 unmapped: 28573696 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:51.780744+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 192 handle_osd_map epochs [192,192], i have 192, src has [1,192]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.623670578s of 10.018299103s, submitted: 152
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145891328 unmapped: 28549120 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:52.780895+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 192 handle_osd_map epochs [192,192], i have 192, src has [1,192]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 145940480 unmapped: 28499968 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 192 heartbeat osd_stat(store_statfs(0x1b5f65000/0x0/0x1bfc00000, data 0x484a510/0x4968000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:53.781092+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 146997248 unmapped: 27443200 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:54.781270+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 193 ms_handle_reset con 0x562051b49400 session 0x562053ba5a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147030016 unmapped: 27410432 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 193 heartbeat osd_stat(store_statfs(0x1b5f60000/0x0/0x1bfc00000, data 0x484a6b9/0x496d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1916380 data_alloc: 184549376 data_used: 15654912
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:55.781428+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 194 heartbeat osd_stat(store_statfs(0x1b5f5e000/0x0/0x1bfc00000, data 0x484c993/0x4970000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147079168 unmapped: 27361280 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain systemd[1]: a419d0a64b4b99d861b818444fb736962d5186e5cbae68d258c0a1a9860e395b.service: Deactivated successfully.
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:56.781676+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 195 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147087360 unmapped: 27353088 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 195 ms_handle_reset con 0x562052821800 session 0x5620575663c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:57.781812+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 196 handle_osd_map epochs [196,196], i have 196, src has [1,196]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 196 ms_handle_reset con 0x562053ce7000 session 0x562052bae960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147120128 unmapped: 27320320 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:58.781996+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 197 ms_handle_reset con 0x562054032800 session 0x562057566960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147177472 unmapped: 27262976 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 197 handle_osd_map epochs [197,197], i have 197, src has [1,197]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 197 heartbeat osd_stat(store_statfs(0x1b5f55000/0x0/0x1bfc00000, data 0x48530cf/0x4979000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 197 ms_handle_reset con 0x562053d37000 session 0x5620585a2d20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:59.782253+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147259392 unmapped: 27181056 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 198 ms_handle_reset con 0x562054032800 session 0x562053c88f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1934796 data_alloc: 184549376 data_used: 15675392
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:00.782410+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147316736 unmapped: 27123712 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 198 ms_handle_reset con 0x562051b49400 session 0x5620575670e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 198 ms_handle_reset con 0x562051b49000 session 0x5620585a32c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 198 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 198 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:01.782498+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147349504 unmapped: 27090944 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.859435081s of 10.638716698s, submitted: 265
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:02.782600+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 200 ms_handle_reset con 0x562052821800 session 0x562053cbbc20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147357696 unmapped: 27082752 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:03.782752+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 200 ms_handle_reset con 0x562052821800 session 0x562053b2d4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 200 ms_handle_reset con 0x562051b49000 session 0x562051d145a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 200 heartbeat osd_stat(store_statfs(0x1b5f43000/0x0/0x1bfc00000, data 0x485b9c1/0x4988000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147382272 unmapped: 27058176 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:04.782892+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147382272 unmapped: 27058176 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1938822 data_alloc: 184549376 data_used: 15687680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:05.783080+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147382272 unmapped: 27058176 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:06.783237+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147390464 unmapped: 27049984 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 201 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 201 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:07.783391+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147406848 unmapped: 27033600 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 202 heartbeat osd_stat(store_statfs(0x1b5f41000/0x0/0x1bfc00000, data 0x485db08/0x498d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:08.783539+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 202 ms_handle_reset con 0x562051b49400 session 0x562057688f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 147398656 unmapped: 27041792 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:09.783716+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155885568 unmapped: 18554880 heap: 174440448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2210924 data_alloc: 184549376 data_used: 15712256
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:10.783821+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176087040 unmapped: 15155200 heap: 191242240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:11.783972+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 154304512 unmapped: 36937728 heap: 191242240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.490189552s of 10.004878998s, submitted: 215
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:12.784123+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155828224 unmapped: 35414016 heap: 191242240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:13.784312+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 202 heartbeat osd_stat(store_statfs(0x1a9d9c000/0x0/0x1bfc00000, data 0xf85febe/0xf992000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [0,0,0,0,1,0,0,1,2])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155926528 unmapped: 35315712 heap: 191242240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:14.784491+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 164413440 unmapped: 26828800 heap: 191242240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 4044602 data_alloc: 184549376 data_used: 15712256
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:15.784630+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 151896064 unmapped: 39346176 heap: 191242240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:16.784774+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 160366592 unmapped: 30875648 heap: 191242240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:17.784909+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 202 ms_handle_reset con 0x562053ce7000 session 0x562053c8a1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 202 heartbeat osd_stat(store_statfs(0x19c19c000/0x0/0x1bfc00000, data 0x1d45ff88/0x1d592000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [0,0,0,0,2,0,0,1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 157261824 unmapped: 33980416 heap: 191242240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:18.785067+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 153174016 unmapped: 38068224 heap: 191242240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x5620553b9800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 202 ms_handle_reset con 0x5620553b9800 session 0x5620569bd4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:19.785240+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 202 heartbeat osd_stat(store_statfs(0x196d9b000/0x0/0x1bfc00000, data 0x2286002a/0x22993000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,1,1,1,2])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 165896192 unmapped: 25346048 heap: 191242240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 5753752 data_alloc: 184549376 data_used: 15716352
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:20.785420+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 202 ms_handle_reset con 0x562051b49000 session 0x5620585df0e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 153477120 unmapped: 37765120 heap: 191242240 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:21.785618+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 162201600 unmapped: 33701888 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 6.025051117s of 10.025732040s, submitted: 381
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:22.785757+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 203 ms_handle_reset con 0x562051b49400 session 0x562051336f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 203 ms_handle_reset con 0x562053d37000 session 0x562055c52f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 203 ms_handle_reset con 0x562054032800 session 0x562053b8c000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 203 ms_handle_reset con 0x562053ce7000 session 0x56205401f0e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 154050560 unmapped: 41852928 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 203 ms_handle_reset con 0x562053ce7000 session 0x56205401fe00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:23.785965+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 204 handle_osd_map epochs [204,204], i have 204, src has [1,204]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 204 handle_osd_map epochs [204,204], i have 204, src has [1,204]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 204 handle_osd_map epochs [204,204], i have 204, src has [1,204]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 204 ms_handle_reset con 0x562052821800 session 0x562052ba90e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 154230784 unmapped: 41672704 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 204 handle_osd_map epochs [204,205], i have 204, src has [1,205]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 204 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:24.786098+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 205 ms_handle_reset con 0x562051b49000 session 0x562053b35c20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 153862144 unmapped: 42041344 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 205 ms_handle_reset con 0x562051b49400 session 0x562053e641e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 205 handle_osd_map epochs [205,206], i have 205, src has [1,206]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:25.786204+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2144930 data_alloc: 184549376 data_used: 15732736
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 206 heartbeat osd_stat(store_statfs(0x1b3593000/0x0/0x1bfc00000, data 0x48666bd/0x4999000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 153886720 unmapped: 42016768 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 206 handle_osd_map epochs [206,207], i have 206, src has [1,207]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:26.786328+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 153886720 unmapped: 42016768 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 207 heartbeat osd_stat(store_statfs(0x1b3589000/0x0/0x1bfc00000, data 0x486ac87/0x49a1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 207 ms_handle_reset con 0x562053d37000 session 0x562053e0a000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:27.786475+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 153911296 unmapped: 41992192 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:28.786615+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 208 handle_osd_map epochs [208,209], i have 208, src has [1,209]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 209 handle_osd_map epochs [209,209], i have 209, src has [1,209]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 153862144 unmapped: 42041344 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:29.786774+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 209 handle_osd_map epochs [208,209], i have 209, src has [1,209]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 209 handle_osd_map epochs [209,209], i have 209, src has [1,209]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 154910720 unmapped: 40992768 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:30.786968+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2160690 data_alloc: 184549376 data_used: 15732736
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 210 ms_handle_reset con 0x562051b49000 session 0x562052c12000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 210 heartbeat osd_stat(store_statfs(0x1b4d80000/0x0/0x1bfc00000, data 0x486f4a5/0x49ad000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 154935296 unmapped: 40968192 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:31.787148+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 211 ms_handle_reset con 0x562051b49400 session 0x5620518994a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 211 heartbeat osd_stat(store_statfs(0x1b4d7a000/0x0/0x1bfc00000, data 0x487177b/0x49b2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 154951680 unmapped: 40951808 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 211 ms_handle_reset con 0x562053ce7000 session 0x562055c470e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:32.787317+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.667336464s of 10.081218719s, submitted: 521
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 212 ms_handle_reset con 0x562052821800 session 0x56205554f680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 154984448 unmapped: 40919040 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 212 ms_handle_reset con 0x562053d37000 session 0x56205261b860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:33.788428+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 212 ms_handle_reset con 0x562051b49000 session 0x56205554eb40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155066368 unmapped: 40837120 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 212 ms_handle_reset con 0x562051b49400 session 0x562052ba8b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:34.788610+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 212 ms_handle_reset con 0x562052821800 session 0x56205554e960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155099136 unmapped: 40804352 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:35.788781+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2171504 data_alloc: 184549376 data_used: 15732736
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce7000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 213 ms_handle_reset con 0x562053ce7000 session 0x562054eeab40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155082752 unmapped: 40820736 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:36.788950+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 214 handle_osd_map epochs [214,214], i have 214, src has [1,214]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 214 handle_osd_map epochs [214,214], i have 214, src has [1,214]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155082752 unmapped: 40820736 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 214 handle_osd_map epochs [214,214], i have 214, src has [1,214]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 214 ms_handle_reset con 0x562053d37000 session 0x562052baad20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:37.789136+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 214 heartbeat osd_stat(store_statfs(0x1b496c000/0x0/0x1bfc00000, data 0x487a04f/0x49c0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155082752 unmapped: 40820736 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:38.789300+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 215 handle_osd_map epochs [215,215], i have 215, src has [1,215]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155099136 unmapped: 40804352 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:39.789532+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 215 handle_osd_map epochs [215,215], i have 215, src has [1,215]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 215 ms_handle_reset con 0x562051b49000 session 0x562052766780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155107328 unmapped: 40796160 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:40.789708+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2189601 data_alloc: 184549376 data_used: 15773696
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 215 heartbeat osd_stat(store_statfs(0x1b4967000/0x0/0x1bfc00000, data 0x487c34f/0x49c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 215 ms_handle_reset con 0x562051b49400 session 0x562053b27860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 215 ms_handle_reset con 0x562052821800 session 0x562053c42b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155164672 unmapped: 40738816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:41.789876+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 216 ms_handle_reset con 0x562054032800 session 0x562057688000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155164672 unmapped: 40738816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:42.790031+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x5620553b9800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.315974236s of 10.033008575s, submitted: 220
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 216 ms_handle_reset con 0x5620553b9800 session 0x5620541c6780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155172864 unmapped: 40730624 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:43.790236+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 217 ms_handle_reset con 0x562051b49000 session 0x562052c134a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 217 ms_handle_reset con 0x562051b49400 session 0x56205716b4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155205632 unmapped: 40697856 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 217 ms_handle_reset con 0x562052821800 session 0x562054507a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:44.790422+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155213824 unmapped: 40689664 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 217 handle_osd_map epochs [217,217], i have 217, src has [1,217]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 217 ms_handle_reset con 0x562054032800 session 0x5620541c6000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:45.790653+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x5620553b9800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2197301 data_alloc: 184549376 data_used: 15790080
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 217 ms_handle_reset con 0x5620553b9800 session 0x562058590d20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155238400 unmapped: 40665088 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:46.790924+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 217 heartbeat osd_stat(store_statfs(0x1b4962000/0x0/0x1bfc00000, data 0x4880866/0x49cc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155254784 unmapped: 40648704 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 217 ms_handle_reset con 0x562051b49000 session 0x5620585912c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:47.791075+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 154869760 unmapped: 41033728 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:48.791347+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 154877952 unmapped: 41025536 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:49.791513+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 154877952 unmapped: 41025536 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:50.791686+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2205293 data_alloc: 184549376 data_used: 15736832
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 219 ms_handle_reset con 0x562051b49400 session 0x56205401eb40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 154877952 unmapped: 41025536 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:51.791855+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 220 heartbeat osd_stat(store_statfs(0x1b495d000/0x0/0x1bfc00000, data 0x4884ddc/0x49d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155926528 unmapped: 39976960 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:52.792097+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155918336 unmapped: 39985152 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:53.792295+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155918336 unmapped: 39985152 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:54.792512+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155918336 unmapped: 39985152 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:55.792818+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2214691 data_alloc: 184549376 data_used: 15749120
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 12.920860291s of 13.579236031s, submitted: 213
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155918336 unmapped: 39985152 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:56.792946+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 ms_handle_reset con 0x562052821800 session 0x5620569bd680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 handle_osd_map epochs [221,221], i have 221, src has [1,221]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 heartbeat osd_stat(store_statfs(0x1b4958000/0x0/0x1bfc00000, data 0x4886efe/0x49d6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 ms_handle_reset con 0x562054032800 session 0x562054693860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053e4f800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 ms_handle_reset con 0x562053e4f800 session 0x56205401f860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:57.793094+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:58.793276+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:59.793502+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:00.793682+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2216111 data_alloc: 184549376 data_used: 15761408
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:01.793859+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:02.794022+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 heartbeat osd_stat(store_statfs(0x1b4956000/0x0/0x1bfc00000, data 0x4888fd4/0x49d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:03.794257+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 heartbeat osd_stat(store_statfs(0x1b4956000/0x0/0x1bfc00000, data 0x4888fd4/0x49d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:04.794509+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 heartbeat osd_stat(store_statfs(0x1b4956000/0x0/0x1bfc00000, data 0x4888fd4/0x49d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:05.794711+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2216111 data_alloc: 184549376 data_used: 15761408
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.049374580s of 10.272638321s, submitted: 60
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 heartbeat osd_stat(store_statfs(0x1b4956000/0x0/0x1bfc00000, data 0x4888fd4/0x49d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [0,0,1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:06.794875+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:07.795189+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 heartbeat osd_stat(store_statfs(0x1b4956000/0x0/0x1bfc00000, data 0x4888fe4/0x49d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 ms_handle_reset con 0x562051b49000 session 0x562055c521e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 heartbeat osd_stat(store_statfs(0x1b4956000/0x0/0x1bfc00000, data 0x4888fe4/0x49d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:08.795530+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:09.795812+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:10.795982+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2218996 data_alloc: 184549376 data_used: 15765504
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155967488 unmapped: 39936000 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:11.796166+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155983872 unmapped: 39919616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:12.797406+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155975680 unmapped: 39927808 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:13.797990+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 heartbeat osd_stat(store_statfs(0x1b4957000/0x0/0x1bfc00000, data 0x48890be/0x49d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155975680 unmapped: 39927808 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:14.798531+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 heartbeat osd_stat(store_statfs(0x1b4957000/0x0/0x1bfc00000, data 0x48890be/0x49d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 ms_handle_reset con 0x562051b49400 session 0x562057688780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155975680 unmapped: 39927808 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:15.799018+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2225078 data_alloc: 184549376 data_used: 15765504
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155975680 unmapped: 39927808 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.927985191s of 10.094664574s, submitted: 38
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:16.799183+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 ms_handle_reset con 0x562052821800 session 0x5620585914a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155992064 unmapped: 39911424 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 ms_handle_reset con 0x562054032800 session 0x562053c43860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:17.799394+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156008448 unmapped: 39895040 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:18.799557+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce4c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 222 ms_handle_reset con 0x562053ce4c00 session 0x562054692b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156024832 unmapped: 39878656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:19.799859+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 223 handle_osd_map epochs [223,223], i have 223, src has [1,223]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 223 heartbeat osd_stat(store_statfs(0x1b494a000/0x0/0x1bfc00000, data 0x488d8c3/0x49e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 223 ms_handle_reset con 0x562051b49000 session 0x5620541c6f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156041216 unmapped: 39862272 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:20.800207+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2232788 data_alloc: 184549376 data_used: 15773696
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156098560 unmapped: 39804928 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 224 ms_handle_reset con 0x562051b49400 session 0x5620569bc000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:21.800517+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 224 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 224 ms_handle_reset con 0x562052821800 session 0x562054692780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156106752 unmapped: 39796736 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 224 ms_handle_reset con 0x562054032800 session 0x56205401e1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:22.800715+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053b6bc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 224 ms_handle_reset con 0x562053b6bc00 session 0x5620569bcf00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155164672 unmapped: 40738816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:23.800915+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 224 heartbeat osd_stat(store_statfs(0x1b494a000/0x0/0x1bfc00000, data 0x488fb04/0x49e4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155164672 unmapped: 40738816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:24.801064+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 224 heartbeat osd_stat(store_statfs(0x1b4949000/0x0/0x1bfc00000, data 0x488fb9f/0x49e5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155172864 unmapped: 40730624 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:25.801288+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2242775 data_alloc: 184549376 data_used: 15785984
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 224 ms_handle_reset con 0x562051b49000 session 0x562053e0ab40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155172864 unmapped: 40730624 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:26.801513+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.504270554s of 10.175120354s, submitted: 222
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 225 handle_osd_map epochs [225,225], i have 225, src has [1,225]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 225 handle_osd_map epochs [225,225], i have 225, src has [1,225]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155181056 unmapped: 40722432 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 225 handle_osd_map epochs [225,225], i have 225, src has [1,225]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 225 ms_handle_reset con 0x562051b49400 session 0x562053cba3c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 225 handle_osd_map epochs [225,226], i have 225, src has [1,226]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:27.801666+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 226 heartbeat osd_stat(store_statfs(0x1b493a000/0x0/0x1bfc00000, data 0x489417e/0x49f2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155213824 unmapped: 40689664 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 226 handle_osd_map epochs [226,227], i have 226, src has [1,227]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:28.801796+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155246592 unmapped: 40656896 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:29.801977+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 227 ms_handle_reset con 0x562052821800 session 0x562055b143c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155246592 unmapped: 40656896 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:30.802113+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 227 ms_handle_reset con 0x562054032800 session 0x5620570ec3c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054044800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 227 ms_handle_reset con 0x562054044800 session 0x562053cba1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2258622 data_alloc: 184549376 data_used: 15798272
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 227 ms_handle_reset con 0x562051b49000 session 0x5620541c6d20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155279360 unmapped: 40624128 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:31.802247+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 227 ms_handle_reset con 0x562051b49400 session 0x562055c534a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 227 heartbeat osd_stat(store_statfs(0x1b493d000/0x0/0x1bfc00000, data 0x4896339/0x49f1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155320320 unmapped: 40583168 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:32.802414+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 227 ms_handle_reset con 0x562052821800 session 0x56205703fc20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155353088 unmapped: 40550400 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:33.802601+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 228 handle_osd_map epochs [228,228], i have 228, src has [1,228]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 228 ms_handle_reset con 0x562054032800 session 0x562054506b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155344896 unmapped: 40558592 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:34.802744+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155344896 unmapped: 40558592 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:35.802950+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d1f400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 228 handle_osd_map epochs [228,228], i have 228, src has [1,228]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 228 ms_handle_reset con 0x562053d1f400 session 0x5620576894a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2269832 data_alloc: 184549376 data_used: 15810560
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155353088 unmapped: 40550400 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:36.803238+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.277744293s of 10.044932365s, submitted: 212
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 229 ms_handle_reset con 0x562051b49000 session 0x56205703f680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 229 ms_handle_reset con 0x562051b49400 session 0x56205750b4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 229 heartbeat osd_stat(store_statfs(0x1b4934000/0x0/0x1bfc00000, data 0x489a74f/0x49f9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155402240 unmapped: 40501248 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:37.803918+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 230 ms_handle_reset con 0x562052821800 session 0x56205703f2c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 230 ms_handle_reset con 0x562054032800 session 0x562053b8c960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 230 ms_handle_reset con 0x562053d37400 session 0x56205703f4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155435008 unmapped: 40468480 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:38.804174+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 230 ms_handle_reset con 0x562051b49000 session 0x5620576890e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155435008 unmapped: 40468480 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:39.804362+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 230 handle_osd_map epochs [230,231], i have 230, src has [1,231]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 ms_handle_reset con 0x562051b49400 session 0x56205703ef00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155467776 unmapped: 40435712 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:40.804522+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2273353 data_alloc: 184549376 data_used: 15818752
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 ms_handle_reset con 0x562052821800 session 0x562053bcb4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 ms_handle_reset con 0x562054032800 session 0x5620585a2f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cc2c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155672576 unmapped: 40230912 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 ms_handle_reset con 0x562053cc2c00 session 0x562058591680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:41.804674+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 heartbeat osd_stat(store_statfs(0x1b4932000/0x0/0x1bfc00000, data 0x489ea8d/0x49fc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155672576 unmapped: 40230912 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 heartbeat osd_stat(store_statfs(0x1b4932000/0x0/0x1bfc00000, data 0x489ea8d/0x49fc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 ms_handle_reset con 0x562051b49000 session 0x562052baed20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:42.804794+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155713536 unmapped: 40189952 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:43.804963+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 heartbeat osd_stat(store_statfs(0x1b492f000/0x0/0x1bfc00000, data 0x489ebd3/0x49ff000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155721728 unmapped: 40181760 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:44.805223+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 ms_handle_reset con 0x562051b49400 session 0x56205401f4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 ms_handle_reset con 0x562052821800 session 0x562053cbab40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce5400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 ms_handle_reset con 0x562053ce5400 session 0x562053c8b0e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155844608 unmapped: 40058880 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:45.805424+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 ms_handle_reset con 0x562054032800 session 0x562053cbad20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 ms_handle_reset con 0x562051b49000 session 0x562054692000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2359265 data_alloc: 184549376 data_used: 15884288
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155860992 unmapped: 40042496 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 232 handle_osd_map epochs [232,232], i have 232, src has [1,232]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:46.805602+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 232 ms_handle_reset con 0x562051b49400 session 0x562053e64000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.316975594s of 10.031318665s, submitted: 178
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 155664384 unmapped: 40239104 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:47.805755+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 232 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 233 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 233 ms_handle_reset con 0x562052821800 session 0x5620585a23c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156729344 unmapped: 39174144 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:48.805898+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce5400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 233 ms_handle_reset con 0x562054032800 session 0x5620546921e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 233 ms_handle_reset con 0x562053ce5400 session 0x562053bff4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 234 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 234 ms_handle_reset con 0x562053cdfc00 session 0x562053bfed20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 234 ms_handle_reset con 0x562051b49000 session 0x562053bcb680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156770304 unmapped: 39133184 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 234 ms_handle_reset con 0x562052821800 session 0x562054ee83c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:49.806030+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce5400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 234 ms_handle_reset con 0x562053ce5400 session 0x562053b34f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 234 heartbeat osd_stat(store_statfs(0x1b4151000/0x0/0x1bfc00000, data 0x5079410/0x51dd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 235 handle_osd_map epochs [235,235], i have 235, src has [1,235]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 235 ms_handle_reset con 0x562051b49400 session 0x562052ba4780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 235 ms_handle_reset con 0x562051b49000 session 0x562053bca1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156860416 unmapped: 39043072 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:50.806186+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 235 ms_handle_reset con 0x562052821800 session 0x562053bcaf00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2307787 data_alloc: 184549376 data_used: 15908864
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156884992 unmapped: 39018496 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:51.806369+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156893184 unmapped: 39010304 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:52.806548+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156893184 unmapped: 39010304 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:53.806768+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 236 handle_osd_map epochs [236,236], i have 236, src has [1,236]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 236 ms_handle_reset con 0x562053cdfc00 session 0x5620513374a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156893184 unmapped: 39010304 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:54.807177+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce5400
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054032800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 236 ms_handle_reset con 0x562054032800 session 0x562052995e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 236 ms_handle_reset con 0x562053ce5400 session 0x56205703de00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156893184 unmapped: 39010304 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:55.807416+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 237 ms_handle_reset con 0x562051b49000 session 0x5620513372c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2325357 data_alloc: 184549376 data_used: 15925248
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 237 ms_handle_reset con 0x562052821800 session 0x562053b270e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 237 heartbeat osd_stat(store_statfs(0x1b4516000/0x0/0x1bfc00000, data 0x48abcdf/0x4a17000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 237 ms_handle_reset con 0x562053cdfc00 session 0x562053b26960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156909568 unmapped: 38993920 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:56.807658+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.219988823s of 10.000176430s, submitted: 225
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156950528 unmapped: 38952960 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:57.807907+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 238 handle_osd_map epochs [238,239], i have 238, src has [1,239]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 239 handle_osd_map epochs [239,239], i have 239, src has [1,239]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 239 handle_osd_map epochs [239,239], i have 239, src has [1,239]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156950528 unmapped: 38952960 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:58.808039+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156950528 unmapped: 38952960 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3504013880' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:59.808204+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 240 ms_handle_reset con 0x562054033000 session 0x562051962000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156975104 unmapped: 38928384 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:00.808545+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x5620553b8c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 240 ms_handle_reset con 0x5620553b8c00 session 0x56205750ad20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2338689 data_alloc: 184549376 data_used: 15933440
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 240 ms_handle_reset con 0x562052821800 session 0x562057689e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 156983296 unmapped: 38920192 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:01.808683+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 241 handle_osd_map epochs [241,241], i have 241, src has [1,241]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 241 ms_handle_reset con 0x562053cdfc00 session 0x5620518994a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 241 ms_handle_reset con 0x562051b49000 session 0x562054eea960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 241 heartbeat osd_stat(store_statfs(0x1b450f000/0x0/0x1bfc00000, data 0x48b2353/0x4a1f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 241 ms_handle_reset con 0x562054033000 session 0x562054eea3c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 157048832 unmapped: 38854656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:02.808820+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdf000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 241 ms_handle_reset con 0x562053cdf000 session 0x562053ba5680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 157106176 unmapped: 38797312 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:03.809040+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 241 ms_handle_reset con 0x562051b49000 session 0x562053c8a960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 157163520 unmapped: 38739968 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:04.809271+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 242 ms_handle_reset con 0x562052821800 session 0x562052ba2f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 157163520 unmapped: 38739968 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:05.809411+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2341121 data_alloc: 184549376 data_used: 15949824
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 157171712 unmapped: 38731776 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:06.809620+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 244 ms_handle_reset con 0x562053cdfc00 session 0x562053e65c20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.194605827s of 10.000037193s, submitted: 223
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158253056 unmapped: 37650432 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:07.809768+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 244 heartbeat osd_stat(store_statfs(0x1b4502000/0x0/0x1bfc00000, data 0x48bab75/0x4a2a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158261248 unmapped: 37642240 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:08.809935+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158261248 unmapped: 37642240 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:09.810180+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158261248 unmapped: 37642240 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:10.810343+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 244 heartbeat osd_stat(store_statfs(0x1b4502000/0x0/0x1bfc00000, data 0x48bab75/0x4a2a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2345571 data_alloc: 184549376 data_used: 15958016
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158179328 unmapped: 37724160 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:11.810527+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 245 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158203904 unmapped: 37699584 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:12.810671+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158203904 unmapped: 37699584 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:13.810854+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158203904 unmapped: 37699584 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:14.811041+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 245 heartbeat osd_stat(store_statfs(0x1b4500000/0x0/0x1bfc00000, data 0x48bcc6b/0x4a2d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158203904 unmapped: 37699584 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:15.811261+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2347697 data_alloc: 184549376 data_used: 15958016
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158203904 unmapped: 37699584 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:16.811472+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.962034225s of 10.001440048s, submitted: 24
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 245 ms_handle_reset con 0x562054033000 session 0x562053e65860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cffc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 245 ms_handle_reset con 0x562053cffc00 session 0x562053c430e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158203904 unmapped: 37699584 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:17.811608+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 245 ms_handle_reset con 0x562051b49000 session 0x562054eea000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158220288 unmapped: 37683200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:18.811748+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:19.811951+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158220288 unmapped: 37683200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 245 ms_handle_reset con 0x562052821800 session 0x562053e643c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:20.812101+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158244864 unmapped: 37658624 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 245 heartbeat osd_stat(store_statfs(0x1b44fe000/0x0/0x1bfc00000, data 0x48bce16/0x4a30000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 245 ms_handle_reset con 0x562053cdfc00 session 0x562054eea1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2355211 data_alloc: 184549376 data_used: 15958016
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 245 ms_handle_reset con 0x562054033000 session 0x562053c8ad20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:21.812263+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158916608 unmapped: 36986880 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 245 ms_handle_reset con 0x562052821c00 session 0x5620585dfa40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:22.812421+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158916608 unmapped: 36986880 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 245 heartbeat osd_stat(store_statfs(0x1b39a6000/0x0/0x1bfc00000, data 0x5412f13/0x5588000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051b49000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 245 ms_handle_reset con 0x562052821800 session 0x56205291d2c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:23.812625+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 158859264 unmapped: 37044224 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 246 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 246 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 246 ms_handle_reset con 0x562052821c00 session 0x5620585905a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:24.812839+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 162119680 unmapped: 33783808 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 247 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 247 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 247 ms_handle_reset con 0x562053cdfc00 session 0x562054eeb0e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 247 ms_handle_reset con 0x562054033000 session 0x562053b34960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 247 ms_handle_reset con 0x562051b49000 session 0x562054ee8b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:25.813047+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 162127872 unmapped: 33775616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2556048 data_alloc: 184549376 data_used: 15966208
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 247 ms_handle_reset con 0x562052821800 session 0x562052bafc20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:26.813208+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 161996800 unmapped: 33906688 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.192395210s of 10.015973091s, submitted: 165
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:27.813509+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 161996800 unmapped: 33906688 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 247 heartbeat osd_stat(store_statfs(0x1b2e75000/0x0/0x1bfc00000, data 0x5f3dbb0/0x60b9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 247 ms_handle_reset con 0x562053cdfc00 session 0x562057566780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 247 heartbeat osd_stat(store_statfs(0x1b2e75000/0x0/0x1bfc00000, data 0x5f3dbb0/0x60b9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 247 ms_handle_reset con 0x562053d37c00 session 0x5620570eda40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:28.813696+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 162545664 unmapped: 33357824 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 248 ms_handle_reset con 0x562054033000 session 0x562053c58b40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054292800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 248 ms_handle_reset con 0x562054292800 session 0x562053bca000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 248 ms_handle_reset con 0x562052821c00 session 0x5620575672c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:29.813863+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 162594816 unmapped: 33308672 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054292800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:30.814044+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 162603008 unmapped: 33300480 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 249 ms_handle_reset con 0x562054292800 session 0x56205703fa40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2653724 data_alloc: 184549376 data_used: 15978496
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 249 ms_handle_reset con 0x562052821800 session 0x562053bfe1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:31.814211+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 162660352 unmapped: 33243136 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:32.814424+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 162668544 unmapped: 33234944 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 249 heartbeat osd_stat(store_statfs(0x1b2e71000/0x0/0x1bfc00000, data 0x5f42178/0x60bd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 249 handle_osd_map epochs [249,250], i have 249, src has [1,250]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 250 handle_osd_map epochs [250,250], i have 250, src has [1,250]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 250 handle_osd_map epochs [250,250], i have 250, src has [1,250]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:33.814657+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 162766848 unmapped: 33136640 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 250 ms_handle_reset con 0x562053cdfc00 session 0x562052c16000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053d37c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 250 ms_handle_reset con 0x562053d37c00 session 0x562053b27680
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 250 heartbeat osd_stat(store_statfs(0x1b3992000/0x0/0x1bfc00000, data 0x541e45d/0x559b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:34.814817+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 162775040 unmapped: 33128448 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 251 handle_osd_map epochs [251,251], i have 251, src has [1,251]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 251 ms_handle_reset con 0x562052821800 session 0x56205291d4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 251 ms_handle_reset con 0x562052821c00 session 0x562053bff860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:35.815049+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 163913728 unmapped: 31989760 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2397703 data_alloc: 184549376 data_used: 15998976
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:36.815287+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 163921920 unmapped: 31981568 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 252 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.887763023s of 10.000339508s, submitted: 299
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 252 heartbeat osd_stat(store_statfs(0x1b44e8000/0x0/0x1bfc00000, data 0x48c9fea/0x4a43000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:37.815524+0000)
Dec 02 10:21:05 np0005541914.localdomain podman[331599]: 2025-12-02 10:21:05.27974032 +0000 UTC m=+0.277449402 container health_status 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 163962880 unmapped: 31940608 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:38.815714+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 163962880 unmapped: 31940608 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 252 heartbeat osd_stat(store_statfs(0x1b44e2000/0x0/0x1bfc00000, data 0x48cc17a/0x4a48000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:39.815927+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 163962880 unmapped: 31940608 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:40.816175+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 163962880 unmapped: 31940608 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2402873 data_alloc: 184549376 data_used: 16003072
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:41.816316+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 163971072 unmapped: 31932416 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 handle_osd_map epochs [253,253], i have 253, src has [1,253]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 handle_osd_map epochs [253,253], i have 253, src has [1,253]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 handle_osd_map epochs [253,253], i have 253, src has [1,253]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:42.816437+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 164069376 unmapped: 31834112 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:43.816699+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 169689088 unmapped: 26214400 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 ms_handle_reset con 0x562053cdfc00 session 0x562053c43c20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 heartbeat osd_stat(store_statfs(0x1b3f7c000/0x0/0x1bfc00000, data 0x4e30f1f/0x4fb1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:44.817048+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 165584896 unmapped: 30318592 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:45.817343+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 heartbeat osd_stat(store_statfs(0x1b3323000/0x0/0x1bfc00000, data 0x5a8af58/0x5c0b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 165822464 unmapped: 30081024 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2554657 data_alloc: 184549376 data_used: 16015360
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:46.817531+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 166125568 unmapped: 29777920 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.460114479s of 10.002643585s, submitted: 141
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:47.817697+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 165740544 unmapped: 30162944 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 heartbeat osd_stat(store_statfs(0x1b328b000/0x0/0x1bfc00000, data 0x5b21738/0x5ca3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:48.817865+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 165560320 unmapped: 30343168 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054292800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 heartbeat osd_stat(store_statfs(0x1b3263000/0x0/0x1bfc00000, data 0x5b48cb5/0x5ccb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:49.818007+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 167215104 unmapped: 28688384 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:50.818504+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 167698432 unmapped: 28205056 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2595509 data_alloc: 201326592 data_used: 20209664
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 heartbeat osd_stat(store_statfs(0x1b3236000/0x0/0x1bfc00000, data 0x5b757e8/0x5cf8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:51.818674+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 167845888 unmapped: 28057600 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:52.818817+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 167690240 unmapped: 28213248 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 heartbeat osd_stat(store_statfs(0x1b31c6000/0x0/0x1bfc00000, data 0x5be41e0/0x5d67000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 heartbeat osd_stat(store_statfs(0x1b31c6000/0x0/0x1bfc00000, data 0x5be41e0/0x5d67000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:53.818962+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 168189952 unmapped: 27713536 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:54.819107+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 168263680 unmapped: 27639808 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:55.819242+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 168353792 unmapped: 27549696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2613513 data_alloc: 201326592 data_used: 20209664
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:56.819391+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 169852928 unmapped: 26050560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.516850471s of 10.006826401s, submitted: 108
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 heartbeat osd_stat(store_statfs(0x1b3108000/0x0/0x1bfc00000, data 0x5ca1d1c/0x5e25000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:57.819538+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 170065920 unmapped: 25837568 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:58.819910+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 170065920 unmapped: 25837568 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:59.820050+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 173809664 unmapped: 22093824 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:00.820257+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 173809664 unmapped: 22093824 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 heartbeat osd_stat(store_statfs(0x1b2383000/0x0/0x1bfc00000, data 0x6a228f0/0x6ba4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2729769 data_alloc: 201326592 data_used: 20209664
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:01.820536+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 heartbeat osd_stat(store_statfs(0x1b2383000/0x0/0x1bfc00000, data 0x6a228f0/0x6ba4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 172605440 unmapped: 23298048 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:02.820729+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 254 handle_osd_map epochs [254,254], i have 254, src has [1,254]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 174415872 unmapped: 21487616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:03.820957+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 174489600 unmapped: 21413888 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:04.821120+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 174981120 unmapped: 20922368 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:05.821342+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 174931968 unmapped: 20971520 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2747257 data_alloc: 201326592 data_used: 20226048
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 254 heartbeat osd_stat(store_statfs(0x1b2242000/0x0/0x1bfc00000, data 0x6b6a35d/0x6cec000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:06.821492+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 174931968 unmapped: 20971520 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.019406319s of 10.003665924s, submitted: 241
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:07.821663+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 174940160 unmapped: 20963328 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:08.821860+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 174432256 unmapped: 21471232 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:09.822172+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 255 ms_handle_reset con 0x562054033000 session 0x5620585901e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 174432256 unmapped: 21471232 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 256 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 256 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562051d49c00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce4000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 256 ms_handle_reset con 0x562051d49c00 session 0x5620546923c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:10.822340+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 175497216 unmapped: 20406272 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b1dff000/0x0/0x1bfc00000, data 0x6ba6722/0x6d2e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 257 ms_handle_reset con 0x562053ce4000 session 0x562053cbad20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2762897 data_alloc: 201326592 data_used: 20262912
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:11.822525+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 175538176 unmapped: 20365312 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:12.822815+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 258 heartbeat osd_stat(store_statfs(0x1b1df7000/0x0/0x1bfc00000, data 0x6baaefe/0x6d35000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 175538176 unmapped: 20365312 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 258 ms_handle_reset con 0x562052821800 session 0x562053b35e00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 258 heartbeat osd_stat(store_statfs(0x1b1df7000/0x0/0x1bfc00000, data 0x6baaefe/0x6d35000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70cf9b7), peers [0,1,2,3,5] op hist [0,0,1])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:13.823014+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 175538176 unmapped: 20365312 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 259 handle_osd_map epochs [259,259], i have 259, src has [1,259]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 259 handle_osd_map epochs [259,259], i have 259, src has [1,259]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 259 ms_handle_reset con 0x562054033000 session 0x56205703ef00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:14.823302+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 175554560 unmapped: 20348928 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 259 heartbeat osd_stat(store_statfs(0x1b1df3000/0x0/0x1bfc00000, data 0x6bad21a/0x6d39000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 260 ms_handle_reset con 0x562053cdfc00 session 0x56205703c960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:15.823533+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176619520 unmapped: 19283968 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2777851 data_alloc: 201326592 data_used: 20262912
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053e4e000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:16.823781+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 260 heartbeat osd_stat(store_statfs(0x1b1dee000/0x0/0x1bfc00000, data 0x6baf929/0x6d3d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176619520 unmapped: 19283968 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.690195084s of 10.147163391s, submitted: 121
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 261 ms_handle_reset con 0x562053e4e000 session 0x562053bffe00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:17.823937+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176619520 unmapped: 19283968 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 261 heartbeat osd_stat(store_statfs(0x1b1deb000/0x0/0x1bfc00000, data 0x6bb16e4/0x6d41000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053e4e000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 261 ms_handle_reset con 0x562053e4e000 session 0x5620576885a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:18.824160+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176627712 unmapped: 19275776 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 262 handle_osd_map epochs [262,262], i have 262, src has [1,262]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 262 ms_handle_reset con 0x562052821800 session 0x562051d154a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:19.824378+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176627712 unmapped: 19275776 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 263 ms_handle_reset con 0x562053cdfc00 session 0x56205291d4a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce4000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 263 ms_handle_reset con 0x562053ce4000 session 0x562057688d20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:20.824584+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176644096 unmapped: 19259392 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 264 ms_handle_reset con 0x562054033000 session 0x5620585a3a40
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2795611 data_alloc: 201326592 data_used: 20267008
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 264 ms_handle_reset con 0x562054033000 session 0x562058590f00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:21.824745+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562052821800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 265 handle_osd_map epochs [265,265], i have 265, src has [1,265]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176652288 unmapped: 19251200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:22.824894+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 266 handle_osd_map epochs [266,266], i have 266, src has [1,266]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177709056 unmapped: 18194432 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 266 ms_handle_reset con 0x562052821800 session 0x5620585a25a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:23.825077+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177717248 unmapped: 18186240 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 266 heartbeat osd_stat(store_statfs(0x1b1ddc000/0x0/0x1bfc00000, data 0x6bbbd9a/0x6d52000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:24.825235+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177717248 unmapped: 18186240 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 266 ms_handle_reset con 0x562054292800 session 0x562051962780
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:25.825512+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177733632 unmapped: 18169856 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2540745 data_alloc: 184549376 data_used: 16064512
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 266 ms_handle_reset con 0x562053cdfc00 session 0x562051898000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:26.826528+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 267 handle_osd_map epochs [267,267], i have 267, src has [1,267]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176095232 unmapped: 19808256 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b4cad000/0x0/0x1bfc00000, data 0x4ceae45/0x4e80000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:27.826663+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.794202805s of 10.743121147s, submitted: 255
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 268 handle_osd_map epochs [268,268], i have 268, src has [1,268]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176095232 unmapped: 19808256 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:28.826861+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176103424 unmapped: 19800064 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:29.827116+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176103424 unmapped: 19800064 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:30.827366+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 268 heartbeat osd_stat(store_statfs(0x1b4ca8000/0x0/0x1bfc00000, data 0x4cef263/0x4e86000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176103424 unmapped: 19800064 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2546876 data_alloc: 184549376 data_used: 16076800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:31.827539+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 269 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176128000 unmapped: 19775488 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:32.827746+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 270 handle_osd_map epochs [270,270], i have 270, src has [1,270]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 270 heartbeat osd_stat(store_statfs(0x1b4ca2000/0x0/0x1bfc00000, data 0x4cf36fa/0x4e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176128000 unmapped: 19775488 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 270 heartbeat osd_stat(store_statfs(0x1b4ca2000/0x0/0x1bfc00000, data 0x4cf36fa/0x4e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:33.828013+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 271 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce4000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 271 heartbeat osd_stat(store_statfs(0x1b4c9d000/0x0/0x1bfc00000, data 0x4cf59df/0x4e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176152576 unmapped: 19750912 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 271 ms_handle_reset con 0x562053ce4000 session 0x562057566d20
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:34.828330+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176177152 unmapped: 19726336 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:35.828572+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 271 heartbeat osd_stat(store_statfs(0x1b4c9e000/0x0/0x1bfc00000, data 0x4cf59e2/0x4e8e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176185344 unmapped: 19718144 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2552005 data_alloc: 184549376 data_used: 16089088
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:36.828708+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176185344 unmapped: 19718144 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:37.829005+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176193536 unmapped: 19709952 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.807984352s of 10.483350754s, submitted: 227
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:38.829208+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176193536 unmapped: 19709952 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:39.829352+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176193536 unmapped: 19709952 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 272 heartbeat osd_stat(store_statfs(0x1b4c9e000/0x0/0x1bfc00000, data 0x4cf7d24/0x4e90000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:40.829551+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176193536 unmapped: 19709952 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2554589 data_alloc: 184549376 data_used: 16101376
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:41.829711+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 273 handle_osd_map epochs [273,273], i have 273, src has [1,273]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 273 handle_osd_map epochs [273,273], i have 273, src has [1,273]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176193536 unmapped: 19709952 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:42.829933+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b4c99000/0x0/0x1bfc00000, data 0x4cf9f21/0x4e94000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 273 handle_osd_map epochs [273,274], i have 273, src has [1,274]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 274 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 274 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176218112 unmapped: 19685376 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:43.830208+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176218112 unmapped: 19685376 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053cdfc00
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 274 ms_handle_reset con 0x562053cdfc00 session 0x562052c12960
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:44.830515+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176373760 unmapped: 19529728 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 51
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 275 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:45.830671+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176406528 unmapped: 19496960 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 275 heartbeat osd_stat(store_statfs(0x1b4c92000/0x0/0x1bfc00000, data 0x4cfe467/0x4e9b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2568501 data_alloc: 184549376 data_used: 16125952
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562053ce4000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 275 ms_handle_reset con 0x562053ce4000 session 0x562053c59860
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:46.830818+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176422912 unmapped: 19480576 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:47.831014+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176422912 unmapped: 19480576 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:48.831189+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.313435555s of 10.696524620s, submitted: 106
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 275 heartbeat osd_stat(store_statfs(0x1b4c92000/0x0/0x1bfc00000, data 0x4cfe4c5/0x4e9b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176422912 unmapped: 19480576 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:49.831396+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176422912 unmapped: 19480576 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:50.831560+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176422912 unmapped: 19480576 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2569337 data_alloc: 184549376 data_used: 16125952
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:51.831725+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 276 heartbeat osd_stat(store_statfs(0x1b4c92000/0x0/0x1bfc00000, data 0x4cfe68f/0x4e9c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176431104 unmapped: 19472384 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:52.831853+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 276 heartbeat osd_stat(store_statfs(0x1b4c8d000/0x0/0x1bfc00000, data 0x4d00885/0x4ea0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176431104 unmapped: 19472384 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:53.832079+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176431104 unmapped: 19472384 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:54.832251+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176431104 unmapped: 19472384 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:55.832436+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 276 heartbeat osd_stat(store_statfs(0x1b4c8b000/0x0/0x1bfc00000, data 0x4d009bb/0x4ea2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176431104 unmapped: 19472384 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2578815 data_alloc: 184549376 data_used: 16138240
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:56.832653+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176431104 unmapped: 19472384 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:57.832822+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176431104 unmapped: 19472384 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:58.833004+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176431104 unmapped: 19472384 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.208958626s of 10.318691254s, submitted: 37
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:59.833165+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 276 heartbeat osd_stat(store_statfs(0x1b4c8c000/0x0/0x1bfc00000, data 0x4d00aea/0x4ea2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176439296 unmapped: 19464192 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:00.833333+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176439296 unmapped: 19464192 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2577699 data_alloc: 184549376 data_used: 16138240
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:01.833527+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176455680 unmapped: 19447808 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:02.833719+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176463872 unmapped: 19439616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:03.833933+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176463872 unmapped: 19439616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:04.834117+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 277 heartbeat osd_stat(store_statfs(0x1b4c87000/0x0/0x1bfc00000, data 0x4d02e92/0x4ea6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176463872 unmapped: 19439616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:05.834287+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176463872 unmapped: 19439616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2583237 data_alloc: 184549376 data_used: 16146432
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:06.834480+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176463872 unmapped: 19439616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:07.834675+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 277 heartbeat osd_stat(store_statfs(0x1b4c87000/0x0/0x1bfc00000, data 0x4d02e92/0x4ea6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176463872 unmapped: 19439616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:08.834811+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176463872 unmapped: 19439616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.836859703s of 10.008763313s, submitted: 44
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:09.834974+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176463872 unmapped: 19439616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 277 heartbeat osd_stat(store_statfs(0x1b4c89000/0x0/0x1bfc00000, data 0x4d02ec1/0x4ea5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:10.835189+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176463872 unmapped: 19439616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2581843 data_alloc: 184549376 data_used: 16146432
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:11.835364+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 278 handle_osd_map epochs [278,278], i have 278, src has [1,278]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176496640 unmapped: 19406848 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:12.835560+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176496640 unmapped: 19406848 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:13.835801+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b4c86000/0x0/0x1bfc00000, data 0x4d04fe6/0x4ea7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176496640 unmapped: 19406848 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:14.836014+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176504832 unmapped: 19398656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:15.836170+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176504832 unmapped: 19398656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2583789 data_alloc: 184549376 data_used: 16158720
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:16.836345+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176504832 unmapped: 19398656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:17.836539+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176504832 unmapped: 19398656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:18.836722+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176504832 unmapped: 19398656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b4c88000/0x0/0x1bfc00000, data 0x4d0510e/0x4ea6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:19.836885+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176504832 unmapped: 19398656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:20.837080+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176504832 unmapped: 19398656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2583323 data_alloc: 184549376 data_used: 16158720
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:21.837260+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 12.432624817s of 12.574303627s, submitted: 38
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176504832 unmapped: 19398656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:22.837540+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176504832 unmapped: 19398656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:23.837786+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176504832 unmapped: 19398656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:24.837962+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176504832 unmapped: 19398656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b4c87000/0x0/0x1bfc00000, data 0x4d051a9/0x4ea7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:25.838123+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176504832 unmapped: 19398656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2586459 data_alloc: 184549376 data_used: 16158720
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:26.838316+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176504832 unmapped: 19398656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b4c86000/0x0/0x1bfc00000, data 0x4d052a9/0x4ea8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:27.838498+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b4c85000/0x0/0x1bfc00000, data 0x4d05344/0x4ea9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176513024 unmapped: 19390464 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:28.838733+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176513024 unmapped: 19390464 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:29.838901+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176521216 unmapped: 19382272 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b4c85000/0x0/0x1bfc00000, data 0x4d054d8/0x4ea9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:30.839067+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176521216 unmapped: 19382272 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2589871 data_alloc: 184549376 data_used: 16158720
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:31.839274+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.918210983s of 10.000394821s, submitted: 15
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 176521216 unmapped: 19382272 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:32.839403+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177561600 unmapped: 18341888 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 279 handle_osd_map epochs [279,279], i have 279, src has [1,279]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:33.839689+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 279 heartbeat osd_stat(store_statfs(0x1b4c85000/0x0/0x1bfc00000, data 0x4d0553d/0x4ea9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177569792 unmapped: 18333696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:34.839856+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177569792 unmapped: 18333696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:35.840021+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177569792 unmapped: 18333696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2596085 data_alloc: 184549376 data_used: 16166912
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:36.840190+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054033000
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177569792 unmapped: 18333696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:37.840352+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177569792 unmapped: 18333696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:38.840566+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177610752 unmapped: 18292736 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:39.840748+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 279 heartbeat osd_stat(store_statfs(0x1b4c81000/0x0/0x1bfc00000, data 0x4d079f8/0x4ead000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177610752 unmapped: 18292736 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:40.840958+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177610752 unmapped: 18292736 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2595939 data_alloc: 184549376 data_used: 16175104
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:41.841098+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.790739059s of 10.001160622s, submitted: 80
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 280 heartbeat osd_stat(store_statfs(0x1b4c7e000/0x0/0x1bfc00000, data 0x4d09a62/0x4eaf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177610752 unmapped: 18292736 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:42.841234+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177610752 unmapped: 18292736 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:43.841490+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177618944 unmapped: 18284544 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:44.841691+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177618944 unmapped: 18284544 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:45.841861+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 9000.2 total, 600.0 interval
                                                          Cumulative writes: 24K writes, 90K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
                                                          Cumulative WAL: 24K writes, 8465 syncs, 2.85 writes per sync, written: 0.06 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 14K writes, 50K keys, 14K commit groups, 1.0 writes per commit group, ingest: 27.33 MB, 0.05 MB/s
                                                          Interval WAL: 14K writes, 6036 syncs, 2.36 writes per sync, written: 0.03 GB, 0.05 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177627136 unmapped: 18276352 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2601695 data_alloc: 184549376 data_used: 16187392
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:46.842051+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177627136 unmapped: 18276352 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain podman[331605]: 2025-12-02 10:21:05.301814926 +0000 UTC m=+0.287759978 container exec_died c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 280 heartbeat osd_stat(store_statfs(0x1b4c7e000/0x0/0x1bfc00000, data 0x4d09d5b/0x4eb0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:47.842218+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177627136 unmapped: 18276352 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:48.842421+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 280 heartbeat osd_stat(store_statfs(0x1b4c7d000/0x0/0x1bfc00000, data 0x4d09df6/0x4eb1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177627136 unmapped: 18276352 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:49.842611+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177627136 unmapped: 18276352 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:50.842774+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177627136 unmapped: 18276352 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2601409 data_alloc: 184549376 data_used: 16187392
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:51.842970+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.854529381s of 10.007349968s, submitted: 38
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177627136 unmapped: 18276352 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:52.844942+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 281 heartbeat osd_stat(store_statfs(0x1b4c7f000/0x0/0x1bfc00000, data 0x4d09e54/0x4eaf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177635328 unmapped: 18268160 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:53.845315+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177651712 unmapped: 18251776 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:54.849181+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177651712 unmapped: 18251776 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:55.850674+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177651712 unmapped: 18251776 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2605245 data_alloc: 184549376 data_used: 16195584
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:56.851133+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 281 heartbeat osd_stat(store_statfs(0x1b4c7b000/0x0/0x1bfc00000, data 0x4d0c261/0x4eb3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177668096 unmapped: 18235392 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:57.851323+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177668096 unmapped: 18235392 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:58.851552+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177668096 unmapped: 18235392 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:59.852291+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177668096 unmapped: 18235392 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:00.852524+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177668096 unmapped: 18235392 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:01.852708+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2608805 data_alloc: 184549376 data_used: 16207872
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 283 handle_osd_map epochs [283,283], i have 283, src has [1,283]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 283 heartbeat osd_stat(store_statfs(0x1b4c73000/0x0/0x1bfc00000, data 0x4d106e5/0x4eba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 283 heartbeat osd_stat(store_statfs(0x1b4c73000/0x0/0x1bfc00000, data 0x4d106e5/0x4eba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177668096 unmapped: 18235392 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:02.853036+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.784443855s of 11.052344322s, submitted: 95
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177668096 unmapped: 18235392 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:03.853341+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:04.853676+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:05.853897+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:06.854347+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2610895 data_alloc: 184549376 data_used: 16207872
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 handle_osd_map epochs [284,284], i have 284, src has [1,284]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b4c74000/0x0/0x1bfc00000, data 0x4d107af/0x4eba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:07.854508+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:08.854746+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b4c6f000/0x0/0x1bfc00000, data 0x4d12940/0x4ebe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:09.854946+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:10.855305+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:11.855688+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2616453 data_alloc: 184549376 data_used: 16220160
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:12.856070+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:13.856315+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.307355881s of 11.419548988s, submitted: 25
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:14.856573+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad1000/0x0/0x1bfc00000, data 0x4d1296f/0x4ebd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad1000/0x0/0x1bfc00000, data 0x4d1296f/0x4ebd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:15.856773+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:16.857086+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2615075 data_alloc: 184549376 data_used: 16220160
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad1000/0x0/0x1bfc00000, data 0x4d1296f/0x4ebd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177676288 unmapped: 18227200 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:17.857297+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178724864 unmapped: 17178624 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:18.857713+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:19.857959+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad2000/0x0/0x1bfc00000, data 0x4d1299e/0x4ebc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:20.858237+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:21.858395+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2614401 data_alloc: 184549376 data_used: 16220160
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:22.858618+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:23.858916+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad2000/0x0/0x1bfc00000, data 0x4d1299e/0x4ebc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:24.859120+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:25.859283+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:26.859489+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2614401 data_alloc: 184549376 data_used: 16220160
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 12.459233284s of 12.520849228s, submitted: 12
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:27.859637+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad2000/0x0/0x1bfc00000, data 0x4d1299e/0x4ebc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:28.859801+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:29.860022+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad2000/0x0/0x1bfc00000, data 0x4d1299e/0x4ebc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:30.860214+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:31.860380+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2614017 data_alloc: 184549376 data_used: 16220160
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:32.860546+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:33.860721+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad1000/0x0/0x1bfc00000, data 0x4d12a68/0x4ebd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:34.860899+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:35.861037+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:36.861183+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.951801300s of 10.005426407s, submitted: 10
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2614641 data_alloc: 184549376 data_used: 16220160
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:37.861355+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 52
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:38.861529+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:39.861656+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad1000/0x0/0x1bfc00000, data 0x4d12b32/0x4ebd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:40.861851+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:41.862041+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2614209 data_alloc: 184549376 data_used: 16224256
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:42.862138+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:43.862315+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad1000/0x0/0x1bfc00000, data 0x4d12b32/0x4ebd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad1000/0x0/0x1bfc00000, data 0x4d12b32/0x4ebd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:44.862478+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:45.862648+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:46.862804+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177684480 unmapped: 18219008 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.963129044s of 10.001005173s, submitted: 7
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2616361 data_alloc: 184549376 data_used: 16224256
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad1000/0x0/0x1bfc00000, data 0x4d12bfc/0x4ebd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:47.862922+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177692672 unmapped: 18210816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3acf000/0x0/0x1bfc00000, data 0x4d12d32/0x4ebf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:48.863180+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177692672 unmapped: 18210816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:49.863354+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177692672 unmapped: 18210816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:50.863580+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3acf000/0x0/0x1bfc00000, data 0x4d12d32/0x4ebf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177692672 unmapped: 18210816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:51.863728+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177692672 unmapped: 18210816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2618129 data_alloc: 184549376 data_used: 16224256
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:52.863882+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177692672 unmapped: 18210816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:53.864110+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177692672 unmapped: 18210816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad1000/0x0/0x1bfc00000, data 0x4d12d90/0x4ebd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:54.864271+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177692672 unmapped: 18210816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:55.864569+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177692672 unmapped: 18210816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:56.865672+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177692672 unmapped: 18210816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain systemd[1]: c02da970e4922b9ff6b546bbb275211b86f60d2534704c7ced783ce42da7fabf.service: Deactivated successfully.
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2616429 data_alloc: 184549376 data_used: 16224256
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:57.866004+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177692672 unmapped: 18210816 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:58.866893+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177700864 unmapped: 18202624 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 12.485031128s of 12.546902657s, submitted: 12
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:59.867555+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177700864 unmapped: 18202624 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad1000/0x0/0x1bfc00000, data 0x4d12df5/0x4ebd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:00.868224+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177700864 unmapped: 18202624 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:01.868757+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177700864 unmapped: 18202624 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2617781 data_alloc: 184549376 data_used: 16224256
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:02.869066+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177700864 unmapped: 18202624 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 heartbeat osd_stat(store_statfs(0x1b3ad0000/0x0/0x1bfc00000, data 0x4d12f5a/0x4ebe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:03.869509+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177700864 unmapped: 18202624 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:04.869928+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177700864 unmapped: 18202624 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:05.870253+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 177709056 unmapped: 18194432 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:06.870538+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 ms_handle_reset con 0x562054033000 session 0x562053c8a5a0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178028544 unmapped: 17874944 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2616915 data_alloc: 184549376 data_used: 16224256
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:07.870791+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 53
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 285 heartbeat osd_stat(store_statfs(0x1b3ad1000/0x0/0x1bfc00000, data 0x4d12f89/0x4ebd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:08.871041+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 285 heartbeat osd_stat(store_statfs(0x1b3ad1000/0x0/0x1bfc00000, data 0x4d12f89/0x4ebd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:09.871212+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 285 heartbeat osd_stat(store_statfs(0x1b3acd000/0x0/0x1bfc00000, data 0x4d15197/0x4ec0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:10.871421+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:11.871640+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 12.255703926s of 12.447771072s, submitted: 259
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2619961 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 285 heartbeat osd_stat(store_statfs(0x1b3acd000/0x0/0x1bfc00000, data 0x4d15197/0x4ec0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:12.871852+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 285 heartbeat osd_stat(store_statfs(0x1b3acd000/0x0/0x1bfc00000, data 0x4d15197/0x4ec0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:13.872077+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:14.872241+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 285 heartbeat osd_stat(store_statfs(0x1b3acd000/0x0/0x1bfc00000, data 0x4d15197/0x4ec0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:15.872566+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:16.872726+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _renew_subs
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _send_mon_message to mon.np0005541913 at v2:172.18.0.104:3300/0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622935 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:17.872953+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:18.873110+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:19.873380+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:20.873593+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:21.873822+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622935 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:22.873954+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:23.874125+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:24.874355+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:25.874579+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:26.874795+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622935 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:27.874961+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:28.875180+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:29.875333+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:30.875616+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 ms_handle_reset con 0x562053d1e400 session 0x562053b8c1e0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: handle_auth_request added challenge on 0x562054292800
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:31.875845+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622935 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:32.876017+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:33.876198+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:34.876388+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:35.876519+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:36.876651+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622935 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:37.876835+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:38.876982+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:39.877140+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178044928 unmapped: 17858560 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:40.877327+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178053120 unmapped: 17850368 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:41.877550+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178053120 unmapped: 17850368 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622935 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:42.877734+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178053120 unmapped: 17850368 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:43.877937+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178053120 unmapped: 17850368 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:44.878039+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178053120 unmapped: 17850368 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:45.878189+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178053120 unmapped: 17850368 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:46.878296+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178053120 unmapped: 17850368 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622935 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:47.890903+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178053120 unmapped: 17850368 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:48.891097+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178061312 unmapped: 17842176 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:49.891290+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178061312 unmapped: 17842176 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:50.891566+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178061312 unmapped: 17842176 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:51.891795+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178061312 unmapped: 17842176 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622935 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:52.892071+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178069504 unmapped: 17833984 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:53.892303+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178069504 unmapped: 17833984 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:54.892540+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178069504 unmapped: 17833984 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:55.892751+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178069504 unmapped: 17833984 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:56.892929+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178069504 unmapped: 17833984 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622935 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:57.893115+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178069504 unmapped: 17833984 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:58.893306+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178069504 unmapped: 17833984 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:59.893551+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178069504 unmapped: 17833984 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:00.893718+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178077696 unmapped: 17825792 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:01.894201+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178077696 unmapped: 17825792 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622935 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:02.894917+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178077696 unmapped: 17825792 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:03.895788+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178077696 unmapped: 17825792 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:04.896276+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178085888 unmapped: 17817600 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:05.896915+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 178085888 unmapped: 17817600 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:06.897590+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 54.900573730s of 54.936065674s, submitted: 17
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 ms_handle_reset con 0x562054ea6c00 session 0x562053ba43c0
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179527680 unmapped: 16375808 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3aca000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:07.897994+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Got map version 54
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179527680 unmapped: 16375808 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:08.898263+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179535872 unmapped: 16367616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:09.898620+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179535872 unmapped: 16367616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:10.900019+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179535872 unmapped: 16367616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:11.900674+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179535872 unmapped: 16367616 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:12.901023+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179544064 unmapped: 16359424 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain podman[331599]: 2025-12-02 10:21:05.318742275 +0000 UTC m=+0.316451347 container exec_died 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0 (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:13.901602+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179544064 unmapped: 16359424 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:14.902163+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179544064 unmapped: 16359424 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:15.902708+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179544064 unmapped: 16359424 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:16.903205+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179544064 unmapped: 16359424 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:17.903576+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179544064 unmapped: 16359424 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:18.904006+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179544064 unmapped: 16359424 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:19.904387+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179544064 unmapped: 16359424 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:20.905168+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179552256 unmapped: 16351232 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:21.905582+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179552256 unmapped: 16351232 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:22.906002+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179552256 unmapped: 16351232 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:23.906343+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179552256 unmapped: 16351232 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:24.906754+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179552256 unmapped: 16351232 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:25.907086+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179552256 unmapped: 16351232 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:26.907351+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179552256 unmapped: 16351232 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:27.907648+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179552256 unmapped: 16351232 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:28.907949+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179560448 unmapped: 16343040 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:29.908159+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179560448 unmapped: 16343040 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:30.908352+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179560448 unmapped: 16343040 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:31.908623+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179560448 unmapped: 16343040 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:32.908913+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179560448 unmapped: 16343040 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:33.909166+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179560448 unmapped: 16343040 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:34.909305+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179560448 unmapped: 16343040 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:35.909545+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179560448 unmapped: 16343040 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:36.909696+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179568640 unmapped: 16334848 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:37.909874+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179568640 unmapped: 16334848 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:38.910048+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179568640 unmapped: 16334848 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:39.910228+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179568640 unmapped: 16334848 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:40.910403+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179568640 unmapped: 16334848 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:41.910583+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179568640 unmapped: 16334848 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:42.910744+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179568640 unmapped: 16334848 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:43.910918+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179568640 unmapped: 16334848 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:44.911067+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179576832 unmapped: 16326656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:45.911216+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179576832 unmapped: 16326656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:46.911396+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179576832 unmapped: 16326656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:47.911584+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179576832 unmapped: 16326656 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:48.911768+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179585024 unmapped: 16318464 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:49.911936+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179593216 unmapped: 16310272 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:50.912117+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179593216 unmapped: 16310272 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:51.912274+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179593216 unmapped: 16310272 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:52.912446+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179601408 unmapped: 16302080 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:53.912689+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179601408 unmapped: 16302080 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:54.912856+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179601408 unmapped: 16302080 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:55.912987+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179601408 unmapped: 16302080 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:56.913162+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179601408 unmapped: 16302080 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:57.913306+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179601408 unmapped: 16302080 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:58.913494+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179601408 unmapped: 16302080 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:59.913679+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179609600 unmapped: 16293888 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:00.913873+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179609600 unmapped: 16293888 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:01.914056+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179609600 unmapped: 16293888 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:02.914226+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179609600 unmapped: 16293888 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:03.914423+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179609600 unmapped: 16293888 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:04.914600+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179609600 unmapped: 16293888 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:05.914795+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179609600 unmapped: 16293888 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:06.915064+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179609600 unmapped: 16293888 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:07.915311+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179617792 unmapped: 16285696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:08.915630+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179617792 unmapped: 16285696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:09.915899+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179617792 unmapped: 16285696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:10.916250+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179617792 unmapped: 16285696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:11.916508+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179617792 unmapped: 16285696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:12.917104+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179617792 unmapped: 16285696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:13.917351+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179617792 unmapped: 16285696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:14.917580+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179617792 unmapped: 16285696 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:15.917850+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179625984 unmapped: 16277504 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:16.918208+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179625984 unmapped: 16277504 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:17.918435+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179625984 unmapped: 16277504 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:18.918698+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179625984 unmapped: 16277504 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:19.918915+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179625984 unmapped: 16277504 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:20.919049+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179625984 unmapped: 16277504 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:21.919197+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179625984 unmapped: 16277504 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:22.919375+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179625984 unmapped: 16277504 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:23.919658+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179642368 unmapped: 16261120 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:24.919896+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179642368 unmapped: 16261120 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:25.920270+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179642368 unmapped: 16261120 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:26.920553+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179642368 unmapped: 16261120 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:27.920893+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179642368 unmapped: 16261120 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:28.921048+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179642368 unmapped: 16261120 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:29.921196+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179642368 unmapped: 16261120 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:30.921351+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179642368 unmapped: 16261120 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:31.921504+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: do_command 'config diff' '{prefix=config diff}'
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: do_command 'config show' '{prefix=config show}'
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: do_command 'counter dump' '{prefix=counter dump}'
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179388416 unmapped: 16515072 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:32.921654+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: bluestore.MempoolThread(0x5620503f1b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2622055 data_alloc: 184549376 data_used: 16232448
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: do_command 'counter schema' '{prefix=counter schema}'
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179396608 unmapped: 16506880 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:33.921884+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: osd.4 286 heartbeat osd_stat(store_statfs(0x1b3acb000/0x0/0x1bfc00000, data 0x4d172ad/0x4ec3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x726f9b7), peers [0,1,2,3,5] op hist [])
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: prioritycache tune_memory target: 3561601228 mapped: 179642368 unmapped: 16261120 heap: 195903488 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: tick
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_tickets
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:34.922025+0000)
Dec 02 10:21:05 np0005541914.localdomain ceph-osd[32707]: do_command 'log dump' '{prefix=log dump}'
Dec 02 10:21:05 np0005541914.localdomain systemd[1]: 8b4b3be0e5978311dac66136a74bf4d14e294da6c1ffdeb05bf9a58bece69fd0.service: Deactivated successfully.
Dec 02 10:21:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49503 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:05.420 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:21:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69671 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v827: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59263 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/265532473' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.59206 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.49467 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1410707360' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3639373356' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.32:0/3639373356' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.69635 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1015257701' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.59224 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2446013239' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.49491 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.69656 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2379843221' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/625052886' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.59245 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3504013880' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69686 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49518 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:05 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49524 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69704 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3079201673' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49539 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59278 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69716 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain crontab[331908]: (root) LIST (root)
Dec 02 10:21:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49551 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1425092102' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59290 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.49503 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.69671 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: pgmap v827: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.59263 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3955623284' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/265532473' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/557616291' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.69686 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.49518 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.49524 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.69704 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3884556716' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3079201673' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.49539 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1021344335' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/848192841' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1199995000' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1425092102' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 02 10:21:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Optimize plan auto_2025-12-02_10:21:06
Dec 02 10:21:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 02 10:21:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] do_upmap
Dec 02 10:21:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] pools ['volumes', 'backups', 'manila_data', 'images', 'vms', 'manila_metadata', '.mgr']
Dec 02 10:21:06 np0005541914.localdomain ceph-mgr[287188]: [balancer INFO root] prepared 0/10 changes
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69731 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] scanning for idle connections..
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [volumes INFO mgr_util] cleaning up connections: []
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59311 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49569 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 02 10:21:07 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/748612833' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] _maybe_adjust
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0033244564838079286 of space, bias 1.0, pg target 0.6648912967615858 quantized to 32 (current 32)
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0014861089300670016 of space, bias 1.0, pg target 0.29672641637004465 quantized to 32 (current 32)
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004299383200725851 of space, bias 1.0, pg target 0.8584435124115949 quantized to 32 (current 32)
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.7263051367950866e-07 of space, bias 1.0, pg target 5.425347222222222e-05 quantized to 32 (current 32)
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 0.002499749179927415 of space, bias 4.0, pg target 1.9898003472222223 quantized to 16 (current 16)
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v828: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59326 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49581 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:07 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69761 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:07 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:21:07.847+0000 7fd3aff61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 02 10:21:07 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 02 10:21:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49593 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:08 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:08.135 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: from='client.59278 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: from='client.69716 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: from='client.49551 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: from='client.59290 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: from='client.69731 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3237429667' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2279675103' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: from='client.59311 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: from='client.49569 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/748612833' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2019927326' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1320677971' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/339330554' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1381129295' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59356 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 02 10:21:08 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:21:08.255+0000 7fd3aff61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/47451548' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/830079725' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2401852479' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 02 10:21:08 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1001684263' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49611 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:08 np0005541914.localdomain ceph-mgr[287188]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 02 10:21:08 np0005541914.localdomain ceph-c7c8e171-a193-56fb-95fa-8879fcfa7074-mgr-np0005541914-lljzmk[287184]: 2025-12-02T10:21:08.887+0000 7fd3aff61640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2776594278' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: pgmap v828: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.59326 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.49581 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.69761 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.49593 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1381129295' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.59356 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/47451548' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/830079725' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/176858326' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3322998944' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2401852479' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1001684263' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2835644307' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/74853862' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2033131510' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:13.568528+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99000320 unmapped: 573440 heap: 99573760 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:14.568656+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99000320 unmapped: 573440 heap: 99573760 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 35
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2383186409,v1:172.18.0.106:6811/2383186409]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:15.568770+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99000320 unmapped: 573440 heap: 99573760 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 36
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2383186409,v1:172.18.0.106:6811/2383186409]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:16.568933+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:17.569206+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:18.569360+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:19.569582+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:20.569864+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:21.570001+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:22.570123+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:23.570343+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:24.570527+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:25.570766+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:26.570915+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:27.571127+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:28.571361+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:29.571717+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:30.571867+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:31.572053+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98934784 unmapped: 1687552 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:32.572217+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98967552 unmapped: 1654784 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:33.572360+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98967552 unmapped: 1654784 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:34.572498+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98967552 unmapped: 1654784 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 37
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/2383186409,v1:172.18.0.106:6811/2383186409]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:35.572661+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99115008 unmapped: 1507328 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:36.572804+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99115008 unmapped: 1507328 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:37.572969+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99115008 unmapped: 1507328 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:38.573087+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99115008 unmapped: 1507328 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:39.573263+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99115008 unmapped: 1507328 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:40.573420+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99115008 unmapped: 1507328 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:41.573596+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99115008 unmapped: 1507328 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:42.573737+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99115008 unmapped: 1507328 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:43.573924+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99115008 unmapped: 1507328 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:44.574110+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99115008 unmapped: 1507328 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:45.574294+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99115008 unmapped: 1507328 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:46.574429+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99115008 unmapped: 1507328 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:47.574667+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99115008 unmapped: 1507328 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:48.574850+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:49.575044+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:50.575186+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:51.575335+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:52.575525+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:53.575733+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:54.575906+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:55.576055+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:56.576194+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:57.576409+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:58.576582+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:58:59.576748+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:00.576930+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:01.577099+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:02.577224+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:03.577356+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:04.577509+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:05.577672+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:06.577877+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:07.578117+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:08.578252+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:09.578421+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:10.578602+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:11.578779+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:12.578974+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:13.579191+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:14.579369+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:15.579544+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:16.579728+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:17.579964+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:18.580133+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:19.580348+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:20.580546+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:21.580734+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:22.580914+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:23.581075+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:24.581240+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:25.581437+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:26.581616+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:27.581832+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:28.581994+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:29.582204+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:30.582377+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:31.582597+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:32.582775+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 920474 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:33.583008+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:34.583159+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:35.583343+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99131392 unmapped: 1490944 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:36.583486+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 86.753425598s of 86.825950623s, submitted: 17
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 38
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now 
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc reconnect Terminating session with v2:172.18.0.106:6810/2383186409
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc reconnect No active mgr available yet
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 handle_osd_map epochs [91,91], i have 91, src has [1,91]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 ms_handle_reset con 0x561031d41000 session 0x56102f8845a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 ms_handle_reset con 0x561031971c00 session 0x56102fa40960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 ms_handle_reset con 0x56102cf8e000 session 0x56102e7e4f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e30d800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958c000/0x0/0x1bfc00000, data 0x247cab5/0x2501000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99270656 unmapped: 1351680 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:37.583628+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 39
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc reconnect Starting new session with [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: get_auth_request con 0x56102eb0a000 auth_method 0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_configure stats_period=5
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 923270 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:38.583773+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99246080 unmapped: 1376256 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d40000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8ae400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:39.583932+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99401728 unmapped: 1220608 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:40.584074+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99401728 unmapped: 1220608 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:41.584244+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 41
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99508224 unmapped: 1114112 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:42.584401+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99508224 unmapped: 1114112 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 42
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:43.584506+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:44.584936+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:45.585131+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:46.585496+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:47.585963+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:48.586338+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:49.586516+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:50.586846+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:51.586961+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:52.587223+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:53.587513+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:54.587713+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:55.587868+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:56.588195+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:57.588536+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:58.588713+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T09:59:59.588827+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:00.588972+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:01.589187+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:02.589426+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:03.589834+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:04.590005+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:05.590156+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:06.590379+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:07.590555+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:08.596763+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:09.597132+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:10.597350+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:11.597690+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:12.597901+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:13.598210+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:14.598521+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:15.598689+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:16.599075+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:17.599661+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:18.599992+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:19.601714+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:20.601942+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:21.602131+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:22.602283+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:23.602482+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:24.602745+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:25.603132+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:26.603364+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:27.603670+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:28.603880+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:29.604101+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:30.604272+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:31.605040+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:32.605238+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:33.605410+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:34.605598+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:35.605739+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:36.605945+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:37.606252+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:38.606420+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:39.606612+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:40.606782+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:41.607019+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:42.607177+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:43.607370+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:44.607524+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:45.607654+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:46.607826+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:47.608018+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:48.608168+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:49.608333+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:50.608546+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:51.608790+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:52.608944+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:53.609230+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:54.609394+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:55.609559+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:56.609718+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:57.609875+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:58.610113+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:00:59.610247+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:00.610393+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:01.610536+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:02.610687+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:03.610796+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 921990 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:04.610931+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f12f/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:05.611088+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99172352 unmapped: 1449984 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:06.611222+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 89.858627319s of 90.151954651s, submitted: 21
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:07.611430+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f249/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:08.611620+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 922358 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 43
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:09.611781+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:10.611965+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:11.612135+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:12.612258+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f249/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:13.612415+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 922358 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:14.612588+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:15.612727+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:16.612869+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f249/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:17.613069+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:18.613245+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 922358 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b958a000/0x0/0x1bfc00000, data 0x247f249/0x2504000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:19.613414+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:20.613580+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:21.613747+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:22.613888+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99196928 unmapped: 1425408 heap: 100622336 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:23.614054+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.871252060s of 16.881200790s, submitted: 3
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 925580 data_alloc: 184549376 data_used: 17068032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 107610112 unmapped: 2457600 heap: 110067712 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:24.614254+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 heartbeat osd_stat(store_statfs(0x1b9589000/0x0/0x1bfc00000, data 0x247f26c/0x2505000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99950592 unmapped: 16408576 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:25.614427+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100065280 unmapped: 16293888 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313ba400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 92 heartbeat osd_stat(store_statfs(0x1b8d84000/0x0/0x1bfc00000, data 0x2c8149f/0x2d09000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:26.614603+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100196352 unmapped: 16162816 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:27.614802+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 ms_handle_reset con 0x5610313ba400 session 0x56103140f2c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100212736 unmapped: 16146432 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:28.614952+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1045405 data_alloc: 184549376 data_used: 17084416
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100212736 unmapped: 16146432 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:29.615127+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:30.615272+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:31.615488+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:32.615661+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:33.615807+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1045405 data_alloc: 184549376 data_used: 17084416
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:34.615960+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:35.616122+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:36.616308+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:37.616480+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:38.616619+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1045405 data_alloc: 184549376 data_used: 17084416
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:39.616787+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:40.616945+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:41.617058+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:42.617195+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets getting new tickets!
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:43.617550+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _finish_auth 0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:43.618837+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1045405 data_alloc: 184549376 data_used: 17084416
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:44.617727+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:45.617869+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:46.617991+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:47.618215+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:48.618411+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1045405 data_alloc: 184549376 data_used: 17084416
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:49.618626+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:50.618765+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:51.619950+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:52.620846+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:53.623217+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1045405 data_alloc: 184549376 data_used: 17084416
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:54.625238+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:55.626134+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:56.626647+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:57.627012+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:58.627732+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1045405 data_alloc: 184549376 data_used: 17084416
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:01:59.628335+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:00.628848+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:01.629323+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:02.629695+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:03.629840+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1045405 data_alloc: 184549376 data_used: 17084416
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:04.630038+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:05.630248+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:06.630399+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:07.630595+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:08.630844+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1045405 data_alloc: 184549376 data_used: 17084416
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:09.631094+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:10.631382+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:11.631652+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:12.631910+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:13.632131+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1045405 data_alloc: 184549376 data_used: 17084416
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:14.632337+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:15.632596+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:16.632765+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:17.632970+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:18.633166+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1045405 data_alloc: 184549376 data_used: 17084416
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:19.633425+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8581000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 55.970825195s of 56.213550568s, submitted: 36
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:20.633630+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:21.633838+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b8582000/0x0/0x1bfc00000, data 0x34836af/0x350c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100229120 unmapped: 16130048 heap: 116359168 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:22.633971+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b7c89000/0x0/0x1bfc00000, data 0x3d7b6bf/0x3e05000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,4,1])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 ms_handle_reset con 0x56102cf8e000 session 0x56102eba92c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 ms_handle_reset con 0x56102f8aec00 session 0x56102faf72c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031971c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 ms_handle_reset con 0x561031971c00 session 0x56102faf63c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d41000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 ms_handle_reset con 0x561031d41000 session 0x56103140e5a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313bac00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 ms_handle_reset con 0x5610313bac00 session 0x56103140fa40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100556800 unmapped: 19480576 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:23.634060+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1120067 data_alloc: 184549376 data_used: 17088512
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 ms_handle_reset con 0x56102cf8e000 session 0x56102eb73860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100556800 unmapped: 19480576 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:24.634208+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 ms_handle_reset con 0x56102f8aec00 session 0x56102fa114a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100556800 unmapped: 19480576 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:25.634388+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031971c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 ms_handle_reset con 0x561031971c00 session 0x56102fa103c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d41000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 ms_handle_reset con 0x561031d41000 session 0x56102fa11860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100564992 unmapped: 19472384 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:26.634523+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313bb800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c4400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c8c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 94 ms_handle_reset con 0x5610310c8c00 session 0x56102e811e00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100671488 unmapped: 19365888 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:27.634706+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:28.634844+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 102850560 unmapped: 17186816 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c8c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 94 heartbeat osd_stat(store_statfs(0x1b7bff000/0x0/0x1bfc00000, data 0x3e00902/0x3e8d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1168633 data_alloc: 184549376 data_used: 19853312
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:29.634973+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 104685568 unmapped: 15351808 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 95 ms_handle_reset con 0x5610310c8c00 session 0x56102eb6fe00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:30.635124+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105504768 unmapped: 14532608 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:31.635284+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105504768 unmapped: 14532608 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:32.635495+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105504768 unmapped: 14532608 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:33.635748+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105504768 unmapped: 14532608 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1190803 data_alloc: 184549376 data_used: 22990848
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:34.635925+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105504768 unmapped: 14532608 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 95 heartbeat osd_stat(store_statfs(0x1b7bfd000/0x0/0x1bfc00000, data 0x3e02b66/0x3e91000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:35.636028+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105586688 unmapped: 14450688 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.330035210s of 15.893358231s, submitted: 73
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:36.636187+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105611264 unmapped: 14426112 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:37.636413+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105627648 unmapped: 14409728 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:38.636488+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 108011520 unmapped: 12025856 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 heartbeat osd_stat(store_statfs(0x1b78e3000/0x0/0x1bfc00000, data 0x4113c5c/0x41a3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1237131 data_alloc: 184549376 data_used: 22978560
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:39.636649+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 108371968 unmapped: 11665408 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:40.636804+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105316352 unmapped: 14721024 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 heartbeat osd_stat(store_statfs(0x1b76ac000/0x0/0x1bfc00000, data 0x434ac5c/0x43da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:41.636960+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105340928 unmapped: 14696448 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 heartbeat osd_stat(store_statfs(0x1b76ac000/0x0/0x1bfc00000, data 0x434ac5c/0x43da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:42.637136+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105357312 unmapped: 14680064 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:43.637279+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105357312 unmapped: 14680064 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1239781 data_alloc: 184549376 data_used: 20353024
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:44.637399+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 heartbeat osd_stat(store_statfs(0x1b76ac000/0x0/0x1bfc00000, data 0x434ac5c/0x43da000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 104693760 unmapped: 15343616 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 ms_handle_reset con 0x56102cf8e000 session 0x5610310d34a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:45.637519+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 ms_handle_reset con 0x56102f8aec00 session 0x56102d8a8b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031971c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 ms_handle_reset con 0x561031971c00 session 0x56102f8850e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d41000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 104546304 unmapped: 15491072 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 ms_handle_reset con 0x561031d41000 session 0x56102fa0dc20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d41000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 ms_handle_reset con 0x561031d41000 session 0x5610310d2000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.019745827s of 10.490314484s, submitted: 81
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:46.637655+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105594880 unmapped: 14442496 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:47.637833+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105594880 unmapped: 14442496 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:48.637989+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105725952 unmapped: 14311424 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1299420 data_alloc: 184549376 data_used: 20353024
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 ms_handle_reset con 0x56102cf8e000 session 0x56102eb57c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:49.638163+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c8c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105455616 unmapped: 14581760 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 heartbeat osd_stat(store_statfs(0x1b702f000/0x0/0x1bfc00000, data 0x49cec6b/0x4a5f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:50.638295+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105455616 unmapped: 14581760 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:51.638490+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105455616 unmapped: 14581760 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 ms_handle_reset con 0x5610313bb800 session 0x56102e86eb40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 ms_handle_reset con 0x5610313c4400 session 0x56102f884b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:52.638632+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 106135552 unmapped: 13901824 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:53.638790+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031971c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 ms_handle_reset con 0x561031971c00 session 0x56102eb94f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 106143744 unmapped: 13893632 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 97 ms_handle_reset con 0x56102cf8e000 session 0x56103140e1e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313bb800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1313392 data_alloc: 184549376 data_used: 20918272
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c4400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 97 ms_handle_reset con 0x5610313c4400 session 0x56102f90ef00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 97 ms_handle_reset con 0x5610313bb800 session 0x56103140e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d41000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:54.638959+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110567424 unmapped: 9469952 heap: 120037376 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 97 heartbeat osd_stat(store_statfs(0x1b6b6a000/0x0/0x1bfc00000, data 0x4e91e7b/0x4f23000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [0,0,0,1])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 97 ms_handle_reset con 0x561031d41000 session 0x56102eba9680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d43800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:55.640117+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 113352704 unmapped: 13983744 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 97 ms_handle_reset con 0x561031d43800 session 0x56102fa414a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.815990448s of 10.015433311s, submitted: 31
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:56.640722+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 113238016 unmapped: 14098432 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:57.641286+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 113254400 unmapped: 14082048 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:58.642169+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 113254400 unmapped: 14082048 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 99 heartbeat osd_stat(store_statfs(0x1b61e5000/0x0/0x1bfc00000, data 0x581236d/0x58a7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1435224 data_alloc: 184549376 data_used: 20946944
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:02:59.642350+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 113303552 unmapped: 14032896 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:00.642697+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 113303552 unmapped: 14032896 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:01.642969+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109666304 unmapped: 17670144 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 heartbeat osd_stat(store_statfs(0x1b61e3000/0x0/0x1bfc00000, data 0x5814463/0x58aa000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:02.643311+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109666304 unmapped: 17670144 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 ms_handle_reset con 0x56102cf8e000 session 0x56102e7e45a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:03.643519+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 102686720 unmapped: 24649728 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 heartbeat osd_stat(store_statfs(0x1b70a7000/0x0/0x1bfc00000, data 0x4953420/0x49e6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1243383 data_alloc: 184549376 data_used: 11202560
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:04.643827+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 103661568 unmapped: 23674880 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:05.644010+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 103514112 unmapped: 23822336 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:06.644263+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.684578896s of 10.211746216s, submitted: 117
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 104144896 unmapped: 23191552 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:07.644508+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 104144896 unmapped: 23191552 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:08.644613+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 104390656 unmapped: 22945792 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1335723 data_alloc: 184549376 data_used: 11509760
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:09.644783+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 heartbeat osd_stat(store_statfs(0x1b669e000/0x0/0x1bfc00000, data 0x534f420/0x53e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313bb800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 104390656 unmapped: 22945792 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 ms_handle_reset con 0x5610313bb800 session 0x56102eb6ef00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c4400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 ms_handle_reset con 0x5610313c4400 session 0x56102eb6f4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d41000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 ms_handle_reset con 0x561031d41000 session 0x56102eb6e780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d43c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 ms_handle_reset con 0x561031d43c00 session 0x56102ebd7860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 ms_handle_reset con 0x56102cf8e000 session 0x56102eb49a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313bb800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:10.644928+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 107356160 unmapped: 19980288 heap: 127336448 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 ms_handle_reset con 0x5610313bb800 session 0x56102eb49c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 heartbeat osd_stat(store_statfs(0x1b583a000/0x0/0x1bfc00000, data 0x61bf492/0x6254000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:11.645069+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110510080 unmapped: 19668992 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c4400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 ms_handle_reset con 0x5610313c4400 session 0x56102faf61e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d41000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 ms_handle_reset con 0x561031d41000 session 0x56102cd5be00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:12.645314+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110510080 unmapped: 19668992 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 ms_handle_reset con 0x561030b67000 session 0x56102e86f4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313bb800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 ms_handle_reset con 0x5610313bb800 session 0x56102ebd7680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 heartbeat osd_stat(store_statfs(0x1b51a7000/0x0/0x1bfc00000, data 0x6852492/0x68e7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c4400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d41000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:13.645460+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 ms_handle_reset con 0x56102f8aec00 session 0x56102d92a960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 ms_handle_reset con 0x5610310c8c00 session 0x56102fafb2c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110977024 unmapped: 19202048 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1527455 data_alloc: 184549376 data_used: 11452416
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:14.645642+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110985216 unmapped: 19193856 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 101 ms_handle_reset con 0x561030b67c00 session 0x56102eb943c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:15.646945+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99713024 unmapped: 30466048 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:16.647095+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99713024 unmapped: 30466048 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 101 heartbeat osd_stat(store_statfs(0x1b5fbc000/0x0/0x1bfc00000, data 0x5a3b6a4/0x5ad2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:17.647273+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99713024 unmapped: 30466048 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:18.647428+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99713024 unmapped: 30466048 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 101 heartbeat osd_stat(store_statfs(0x1b5fbc000/0x0/0x1bfc00000, data 0x5a3b6a4/0x5ad2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1391417 data_alloc: 184549376 data_used: 6483968
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:19.647515+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99713024 unmapped: 30466048 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.720112801s of 13.529119492s, submitted: 138
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 101 ms_handle_reset con 0x561030b67800 session 0x56102fa40000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 101 ms_handle_reset con 0x56102cf8e000 session 0x56102e86ed20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:20.647632+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 102 heartbeat osd_stat(store_statfs(0x1b5fbc000/0x0/0x1bfc00000, data 0x5a3b6a4/0x5ad2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98648064 unmapped: 31531008 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 102 ms_handle_reset con 0x56102f8aec00 session 0x56102fa0cd20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:21.647746+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98648064 unmapped: 31531008 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:22.647883+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98648064 unmapped: 31531008 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:23.648063+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98648064 unmapped: 31531008 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1255479 data_alloc: 184549376 data_used: 5132288
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:24.648228+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98648064 unmapped: 31531008 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:25.648353+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98664448 unmapped: 31514624 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 102 heartbeat osd_stat(store_statfs(0x1b705e000/0x0/0x1bfc00000, data 0x499978b/0x4a30000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:26.648510+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98893824 unmapped: 31285248 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:27.648686+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98942976 unmapped: 31236096 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:28.649197+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 102 ms_handle_reset con 0x5610313c4400 session 0x56102e86eb40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 102 ms_handle_reset con 0x561031d41000 session 0x56102ebd7860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98942976 unmapped: 31236096 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1260447 data_alloc: 184549376 data_used: 5251072
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:29.649414+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 102 ms_handle_reset con 0x561030b67c00 session 0x56102e811c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98959360 unmapped: 31219712 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.761687279s of 10.000120163s, submitted: 72
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 103 ms_handle_reset con 0x56102cf8e000 session 0x56102e810d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:30.649562+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 103 ms_handle_reset con 0x561030b67c00 session 0x56102f90fc20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 103 ms_handle_reset con 0x56102f8aec00 session 0x56103140ef00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c4400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 103 heartbeat osd_stat(store_statfs(0x1b7056000/0x0/0x1bfc00000, data 0x49a178b/0x4a38000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110575616 unmapped: 19603456 heap: 130179072 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 103 ms_handle_reset con 0x5610313c4400 session 0x56103140f4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d41000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:31.649714+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99246080 unmapped: 34783232 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 104 ms_handle_reset con 0x561031d41000 session 0x56103140e1e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:32.649930+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99262464 unmapped: 34766848 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:33.650060+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99278848 unmapped: 34750464 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1484832 data_alloc: 184549376 data_used: 5701632
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:34.650215+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 105 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 99311616 unmapped: 34717696 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 105 ms_handle_reset con 0x56102cf8e000 session 0x56103140f0e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 105 heartbeat osd_stat(store_statfs(0x1b55de000/0x0/0x1bfc00000, data 0x6412e8d/0x64ae000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:35.650442+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 97984512 unmapped: 36044800 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:36.650709+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 97984512 unmapped: 36044800 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:37.650936+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 97984512 unmapped: 36044800 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 heartbeat osd_stat(store_statfs(0x1b6ae9000/0x0/0x1bfc00000, data 0x4f09f01/0x4fa3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:38.651077+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 heartbeat osd_stat(store_statfs(0x1b6ae9000/0x0/0x1bfc00000, data 0x4f09f01/0x4fa3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 97984512 unmapped: 36044800 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 ms_handle_reset con 0x56102f8aec00 session 0x56102f8854a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1317773 data_alloc: 184549376 data_used: 5148672
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:39.651250+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 ms_handle_reset con 0x561030b67c00 session 0x56102f884b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c4400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 ms_handle_reset con 0x5610313c4400 session 0x56102fa103c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c8c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 ms_handle_reset con 0x5610310c8c00 session 0x56102eb57c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 ms_handle_reset con 0x56102cf8e000 session 0x56102e7e4960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 heartbeat osd_stat(store_statfs(0x1b6ae9000/0x0/0x1bfc00000, data 0x4f09f73/0x4fa5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 98009088 unmapped: 36020224 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.033485413s of 10.004430771s, submitted: 158
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:40.651415+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 ms_handle_reset con 0x56102f8aec00 session 0x56102e7e4d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109993984 unmapped: 24035328 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 ms_handle_reset con 0x561030b67c00 session 0x56102eb72d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:41.651729+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109993984 unmapped: 24035328 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c8c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 ms_handle_reset con 0x5610310c8c00 session 0x56102fa41c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c4400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 ms_handle_reset con 0x5610313c4400 session 0x56102f884b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:42.651906+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 ms_handle_reset con 0x56102cf8e000 session 0x56103140f0e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110018560 unmapped: 24010752 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c8c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:43.652073+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110149632 unmapped: 23879680 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 ms_handle_reset con 0x5610310c8c00 session 0x56102faf6780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:44.652395+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1239174 data_alloc: 184549376 data_used: 5279744
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100564992 unmapped: 33464320 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:45.652645+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 heartbeat osd_stat(store_statfs(0x1b77ec000/0x0/0x1bfc00000, data 0x4205185/0x42a2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100564992 unmapped: 33464320 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:46.652930+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100564992 unmapped: 33464320 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:47.653118+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100564992 unmapped: 33464320 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 ms_handle_reset con 0x56102f8aec00 session 0x56102e7e5e00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 ms_handle_reset con 0x561030b67c00 session 0x5610310d2000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c4400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:48.653223+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 ms_handle_reset con 0x5610313c4400 session 0x56102d92a960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100671488 unmapped: 33357824 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 heartbeat osd_stat(store_statfs(0x1b854a000/0x0/0x1bfc00000, data 0x34a1175/0x353d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:49.653436+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1134491 data_alloc: 184549376 data_used: 5160960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 ms_handle_reset con 0x56102cf8e000 session 0x561031c3b2c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 ms_handle_reset con 0x56102f8aec00 session 0x561031c3b4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 ms_handle_reset con 0x561030b67c00 session 0x561031c3b680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c8c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 ms_handle_reset con 0x5610310c8c00 session 0x561031c3b860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 heartbeat osd_stat(store_statfs(0x1b8552000/0x0/0x1bfc00000, data 0x34a1103/0x353b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c4400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100425728 unmapped: 33603584 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.439040184s of 10.065002441s, submitted: 139
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 ms_handle_reset con 0x5610313c4400 session 0x561031c3ba40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 ms_handle_reset con 0x56102cf8e000 session 0x56102fa114a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:50.653608+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 ms_handle_reset con 0x56102f8aec00 session 0x56102eb943c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 ms_handle_reset con 0x561030b67c00 session 0x561032270000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c8c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 ms_handle_reset con 0x5610310c8c00 session 0x5610322703c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100040704 unmapped: 33988608 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:51.653757+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100040704 unmapped: 33988608 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 heartbeat osd_stat(store_statfs(0x1b7ec6000/0x0/0x1bfc00000, data 0x3b2d113/0x3bc8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:52.653980+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100048896 unmapped: 33980416 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 heartbeat osd_stat(store_statfs(0x1b7ec2000/0x0/0x1bfc00000, data 0x3b2f209/0x3bcb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:53.654163+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100048896 unmapped: 33980416 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:54.654347+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1200755 data_alloc: 184549376 data_used: 5177344
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313bb800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x5610313bb800 session 0x5610322705a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100048896 unmapped: 33980416 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 heartbeat osd_stat(store_statfs(0x1b7ec2000/0x0/0x1bfc00000, data 0x3b2f209/0x3bcb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:55.654535+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100048896 unmapped: 33980416 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x56102cf8e000 session 0x561032270960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:56.654686+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x56102f8aec00 session 0x561032270b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x561030b67c00 session 0x561032270d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100057088 unmapped: 33972224 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c8c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cd97800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:57.655024+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100057088 unmapped: 33972224 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:58.655242+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100057088 unmapped: 33972224 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:03:59.655630+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1203190 data_alloc: 184549376 data_used: 5177344
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100057088 unmapped: 33972224 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:00.655788+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 heartbeat osd_stat(store_statfs(0x1b7ec1000/0x0/0x1bfc00000, data 0x3b2f23c/0x3bcd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100057088 unmapped: 33972224 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:01.656014+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100057088 unmapped: 33972224 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:02.656182+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100057088 unmapped: 33972224 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:03.656378+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100057088 unmapped: 33972224 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:04.656631+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1203190 data_alloc: 184549376 data_used: 5177344
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100057088 unmapped: 33972224 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:05.657512+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100057088 unmapped: 33972224 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:06.657672+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 heartbeat osd_stat(store_statfs(0x1b7ec1000/0x0/0x1bfc00000, data 0x3b2f23c/0x3bcd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100057088 unmapped: 33972224 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:07.657904+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100057088 unmapped: 33972224 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:08.658101+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100073472 unmapped: 33955840 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:09.658269+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1203190 data_alloc: 184549376 data_used: 5177344
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100073472 unmapped: 33955840 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:10.658437+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 heartbeat osd_stat(store_statfs(0x1b7ec1000/0x0/0x1bfc00000, data 0x3b2f23c/0x3bcd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100073472 unmapped: 33955840 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:11.658686+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100073472 unmapped: 33955840 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 heartbeat osd_stat(store_statfs(0x1b7ec1000/0x0/0x1bfc00000, data 0x3b2f23c/0x3bcd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:12.658832+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100073472 unmapped: 33955840 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:13.658974+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 100073472 unmapped: 33955840 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:14.659141+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1203190 data_alloc: 184549376 data_used: 5177344
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.882183075s of 24.971916199s, submitted: 25
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 103333888 unmapped: 30695424 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:15.659295+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 106840064 unmapped: 27189248 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:16.659506+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x5610310c8c00 session 0x5610322712c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x56102cd97800 session 0x56102f8fe780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 106840064 unmapped: 27189248 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:17.659701+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 heartbeat osd_stat(store_statfs(0x1b6d67000/0x0/0x1bfc00000, data 0x4c8823c/0x4d26000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105521152 unmapped: 28508160 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:18.659928+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105521152 unmapped: 28508160 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:19.660128+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1341530 data_alloc: 184549376 data_used: 5214208
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105521152 unmapped: 28508160 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:20.660425+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 heartbeat osd_stat(store_statfs(0x1b6d2f000/0x0/0x1bfc00000, data 0x4cb823c/0x4d56000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105537536 unmapped: 28491776 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:21.660711+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105537536 unmapped: 28491776 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:22.660889+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105537536 unmapped: 28491776 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:23.661090+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105119744 unmapped: 28909568 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:24.661266+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1335186 data_alloc: 184549376 data_used: 5214208
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105119744 unmapped: 28909568 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:25.661424+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x56102e38e800 session 0x56102faf72c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x56102e38e400 session 0x56102faf61e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.487558365s of 10.893699646s, submitted: 135
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105103360 unmapped: 28925952 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x56102cf8e000 session 0x56103140f0e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:26.661563+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 heartbeat osd_stat(store_statfs(0x1b854f000/0x0/0x1bfc00000, data 0x34a31f9/0x353e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105119744 unmapped: 28909568 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:27.661794+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105119744 unmapped: 28909568 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:28.661973+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105119744 unmapped: 28909568 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:29.662143+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1153609 data_alloc: 184549376 data_used: 5173248
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105119744 unmapped: 28909568 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:30.662329+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x56102f8aec00 session 0x56102f884b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x561030b67c00 session 0x56102fa40b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x56102cf8e000 session 0x56102eb72d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x56102e38e400 session 0x56102eb570e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 ms_handle_reset con 0x56102e38e800 session 0x56102eb73e00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 105971712 unmapped: 28057600 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:31.662521+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 109 ms_handle_reset con 0x56102f8aec00 session 0x561031eb4d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 107077632 unmapped: 26951680 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 109 heartbeat osd_stat(store_statfs(0x1b7f55000/0x0/0x1bfc00000, data 0x3a9889e/0x3b38000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:32.662822+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 106930176 unmapped: 27099136 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:33.662986+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 106930176 unmapped: 27099136 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:34.663145+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1220278 data_alloc: 184549376 data_used: 5181440
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c8c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 109 ms_handle_reset con 0x5610310c8c00 session 0x56102ebb4780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 106930176 unmapped: 27099136 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:35.663285+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.857198715s of 10.306474686s, submitted: 89
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 106930176 unmapped: 27099136 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:36.663421+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 109 heartbeat osd_stat(store_statfs(0x1b7f56000/0x0/0x1bfc00000, data 0x3a9889e/0x3b38000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 106946560 unmapped: 27082752 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:37.663586+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 110 ms_handle_reset con 0x56102e38e800 session 0x56102eba9c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 107069440 unmapped: 26959872 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:38.663766+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 108322816 unmapped: 25706496 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:39.663945+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1244492 data_alloc: 184549376 data_used: 8429568
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 108322816 unmapped: 25706496 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:40.664082+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 110 heartbeat osd_stat(store_statfs(0x1b7f53000/0x0/0x1bfc00000, data 0x3a9a6cf/0x3b39000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8aec00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 110 ms_handle_reset con 0x56102f8aec00 session 0x56102eba92c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 107839488 unmapped: 26189824 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:41.664280+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 107839488 unmapped: 26189824 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:42.664422+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 107839488 unmapped: 26189824 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:43.664602+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 107839488 unmapped: 26189824 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:44.664774+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1247274 data_alloc: 184549376 data_used: 8429568
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 107839488 unmapped: 26189824 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:45.664969+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 107839488 unmapped: 26189824 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:46.668213+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b7f51000/0x0/0x1bfc00000, data 0x3a9c7c5/0x3b3c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 107839488 unmapped: 26189824 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:47.668577+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.463768959s of 11.710534096s, submitted: 43
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 113500160 unmapped: 20529152 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:48.668726+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109551616 unmapped: 24477696 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:49.668868+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1383638 data_alloc: 184549376 data_used: 8548352
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110747648 unmapped: 23281664 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:50.669047+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110747648 unmapped: 23281664 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:51.669238+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110747648 unmapped: 23281664 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b68d4000/0x0/0x1bfc00000, data 0x4d1a7c5/0x4dba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:52.669505+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110764032 unmapped: 23265280 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:53.669663+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b68d4000/0x0/0x1bfc00000, data 0x4d1a7c5/0x4dba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110780416 unmapped: 23248896 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:54.669845+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1397318 data_alloc: 184549376 data_used: 8540160
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110780416 unmapped: 23248896 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:55.669997+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110837760 unmapped: 23191552 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:56.670136+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110837760 unmapped: 23191552 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:57.670324+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b68af000/0x0/0x1bfc00000, data 0x4d3f7c5/0x4ddf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110837760 unmapped: 23191552 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:58.670549+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110837760 unmapped: 23191552 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:04:59.670721+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1395126 data_alloc: 184549376 data_used: 8544256
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.625297546s of 12.124798775s, submitted: 146
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 ms_handle_reset con 0x56102cf8e000 session 0x56102ebd65a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 ms_handle_reset con 0x56102e38e400 session 0x561032270f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38f800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110854144 unmapped: 23175168 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:00.670870+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 ms_handle_reset con 0x56102e38f800 session 0x56102f8ff2c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:01.671035+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b78ae000/0x0/0x1bfc00000, data 0x34a9763/0x3548000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:02.671204+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:03.671647+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:04.671892+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1183272 data_alloc: 184549376 data_used: 5189632
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:05.672127+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:06.672417+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:07.672637+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b78ae000/0x0/0x1bfc00000, data 0x34a9763/0x3548000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:08.672931+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:09.673125+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1183272 data_alloc: 184549376 data_used: 5189632
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:10.673315+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b78ae000/0x0/0x1bfc00000, data 0x34a9763/0x3548000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:11.673566+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:12.673750+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b78ae000/0x0/0x1bfc00000, data 0x34a9763/0x3548000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:13.686427+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:14.686737+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1183272 data_alloc: 184549376 data_used: 5189632
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b78ae000/0x0/0x1bfc00000, data 0x34a9763/0x3548000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:15.686938+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b78ae000/0x0/0x1bfc00000, data 0x34a9763/0x3548000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109314048 unmapped: 24715264 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:16.687211+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.130987167s of 17.347047806s, submitted: 58
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109338624 unmapped: 24690688 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:17.687406+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102fe59000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 112 ms_handle_reset con 0x56102fe59000 session 0x56102faf6000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109420544 unmapped: 24608768 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:18.687645+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109420544 unmapped: 24608768 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:19.687878+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1190823 data_alloc: 184549376 data_used: 5197824
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 113 ms_handle_reset con 0x56102cf8e000 session 0x56102eb563c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:20.688065+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110501888 unmapped: 23527424 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 113 heartbeat osd_stat(store_statfs(0x1b813d000/0x0/0x1bfc00000, data 0x34adbfa/0x3550000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:21.688264+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110501888 unmapped: 23527424 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 113 heartbeat osd_stat(store_statfs(0x1b813d000/0x0/0x1bfc00000, data 0x34adbd7/0x354f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:22.688479+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110501888 unmapped: 23527424 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 113 heartbeat osd_stat(store_statfs(0x1b813d000/0x0/0x1bfc00000, data 0x34adbd7/0x354f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 113 heartbeat osd_stat(store_statfs(0x1b813d000/0x0/0x1bfc00000, data 0x34adbd7/0x354f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:23.688650+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110501888 unmapped: 23527424 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:24.688875+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110501888 unmapped: 23527424 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1193544 data_alloc: 184549376 data_used: 5206016
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:25.689105+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110501888 unmapped: 23527424 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:26.689256+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110510080 unmapped: 23519232 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:27.689484+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110510080 unmapped: 23519232 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b813b000/0x0/0x1bfc00000, data 0x34afccd/0x3552000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:28.689649+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110510080 unmapped: 23519232 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b813b000/0x0/0x1bfc00000, data 0x34afccd/0x3552000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:29.689810+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110510080 unmapped: 23519232 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1195417 data_alloc: 184549376 data_used: 5206016
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:30.689999+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110510080 unmapped: 23519232 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:31.690154+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110510080 unmapped: 23519232 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:32.690305+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110510080 unmapped: 23519232 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:33.690527+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110518272 unmapped: 23511040 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:34.690769+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110518272 unmapped: 23511040 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b813b000/0x0/0x1bfc00000, data 0x34afccd/0x3552000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1195417 data_alloc: 184549376 data_used: 5206016
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:35.690975+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110518272 unmapped: 23511040 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:36.691183+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110518272 unmapped: 23511040 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:37.691478+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110518272 unmapped: 23511040 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:38.691713+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110518272 unmapped: 23511040 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b813b000/0x0/0x1bfc00000, data 0x34afccd/0x3552000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:39.691887+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110518272 unmapped: 23511040 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1195417 data_alloc: 184549376 data_used: 5206016
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:40.692130+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110518272 unmapped: 23511040 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:41.692358+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110518272 unmapped: 23511040 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:42.692567+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110518272 unmapped: 23511040 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:43.692735+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b813b000/0x0/0x1bfc00000, data 0x34afccd/0x3552000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110526464 unmapped: 23502848 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:44.692981+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110526464 unmapped: 23502848 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1195417 data_alloc: 184549376 data_used: 5206016
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:45.693147+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110526464 unmapped: 23502848 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:46.693347+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110526464 unmapped: 23502848 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:47.693514+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110526464 unmapped: 23502848 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:48.693734+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110526464 unmapped: 23502848 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b813b000/0x0/0x1bfc00000, data 0x34afccd/0x3552000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:49.693918+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110526464 unmapped: 23502848 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1195417 data_alloc: 184549376 data_used: 5206016
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:50.694092+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110534656 unmapped: 23494656 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:51.694329+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110534656 unmapped: 23494656 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b813b000/0x0/0x1bfc00000, data 0x34afccd/0x3552000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:52.694578+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110534656 unmapped: 23494656 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:53.694803+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110534656 unmapped: 23494656 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b813b000/0x0/0x1bfc00000, data 0x34afccd/0x3552000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:54.695015+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110534656 unmapped: 23494656 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1195417 data_alloc: 184549376 data_used: 5206016
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:55.695257+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110534656 unmapped: 23494656 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b813b000/0x0/0x1bfc00000, data 0x34afccd/0x3552000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:56.695499+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110534656 unmapped: 23494656 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b813b000/0x0/0x1bfc00000, data 0x34afccd/0x3552000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:57.695704+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110534656 unmapped: 23494656 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:58.695891+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110534656 unmapped: 23494656 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:05:59.696148+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110534656 unmapped: 23494656 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1195417 data_alloc: 184549376 data_used: 5206016
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:00.696319+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110542848 unmapped: 23486464 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:01.696560+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110542848 unmapped: 23486464 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b813b000/0x0/0x1bfc00000, data 0x34afccd/0x3552000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:02.696793+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110542848 unmapped: 23486464 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b813b000/0x0/0x1bfc00000, data 0x34afccd/0x3552000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:03.696963+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110542848 unmapped: 23486464 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:04.697140+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110542848 unmapped: 23486464 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1195417 data_alloc: 184549376 data_used: 5206016
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:05.697381+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110542848 unmapped: 23486464 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 heartbeat osd_stat(store_statfs(0x1b813b000/0x0/0x1bfc00000, data 0x34afccd/0x3552000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:06.697681+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110542848 unmapped: 23486464 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.445770264s of 49.641525269s, submitted: 52
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:07.697889+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110600192 unmapped: 23429120 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b8137000/0x0/0x1bfc00000, data 0x34b1f00/0x3556000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:08.698081+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110608384 unmapped: 23420928 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:09.698242+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110624768 unmapped: 23404544 heap: 134029312 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1268833 data_alloc: 184549376 data_used: 5214208
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:10.698421+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 119070720 unmapped: 23355392 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:11.698594+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 119070720 unmapped: 23355392 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 117 heartbeat osd_stat(store_statfs(0x1b6130000/0x0/0x1bfc00000, data 0x54b4577/0x555e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [0,0,0,1])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:12.698754+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110755840 unmapped: 31670272 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 117 heartbeat osd_stat(store_statfs(0x1b592a000/0x0/0x1bfc00000, data 0x5cb6bac/0x5d63000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38f800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:13.698935+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110780416 unmapped: 31645696 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 118 ms_handle_reset con 0x56102e38f800 session 0x56102ebd74a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:14.699087+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110780416 unmapped: 31645696 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031971c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 119 ms_handle_reset con 0x56102e38e800 session 0x56102e86ef00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 119 ms_handle_reset con 0x56102e38e400 session 0x56102e86fc20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1557407 data_alloc: 184549376 data_used: 5222400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:15.699257+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 119 ms_handle_reset con 0x561031971c00 session 0x56102fafa3c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110837760 unmapped: 31588352 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:16.699420+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110862336 unmapped: 31563776 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 119 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.220081329s of 10.000094414s, submitted: 137
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 120 ms_handle_reset con 0x56102e38e400 session 0x56103140e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 120 ms_handle_reset con 0x56102cf8e000 session 0x56102eba81e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:17.699683+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110936064 unmapped: 31490048 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:18.699847+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 120 ms_handle_reset con 0x56102e38e800 session 0x56102fa41860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38f800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 109993984 unmapped: 32432128 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 121 heartbeat osd_stat(store_statfs(0x1b8124000/0x0/0x1bfc00000, data 0x34bcf01/0x3568000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 122 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 122 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 122 ms_handle_reset con 0x56102e38f800 session 0x56103140f4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:19.699980+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110084096 unmapped: 32342016 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e452400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 122 ms_handle_reset con 0x56102e452400 session 0x5610310d2960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 123 ms_handle_reset con 0x56102cf8e000 session 0x56102ebb50e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1244276 data_alloc: 184549376 data_used: 5251072
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:20.700164+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110157824 unmapped: 32268288 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:21.700332+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110174208 unmapped: 32251904 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 124 handle_osd_map epochs [125,125], i have 124, src has [1,125]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 124 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 125 ms_handle_reset con 0x56102e38e400 session 0x56102eb72960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:22.700523+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110215168 unmapped: 32210944 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 125 ms_handle_reset con 0x56102e38e800 session 0x561031c3a3c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:23.700707+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38f800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110223360 unmapped: 32202752 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 126 ms_handle_reset con 0x56102e38f800 session 0x56102ebd7680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:24.700868+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110280704 unmapped: 32145408 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 126 heartbeat osd_stat(store_statfs(0x1b810d000/0x0/0x1bfc00000, data 0x34c9ce7/0x3580000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1254528 data_alloc: 184549376 data_used: 5263360
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 126 heartbeat osd_stat(store_statfs(0x1b810d000/0x0/0x1bfc00000, data 0x34c9ce7/0x3580000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:25.701013+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110280704 unmapped: 32145408 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e452400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 127 handle_osd_map epochs [127,127], i have 127, src has [1,127]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 127 ms_handle_reset con 0x56102e452400 session 0x56102fa0d2c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:26.701125+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110346240 unmapped: 32079872 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.452275276s of 10.000322342s, submitted: 203
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 127 ms_handle_reset con 0x56102cf8e000 session 0x56102faf6780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 128 ms_handle_reset con 0x56102e38e400 session 0x56102fa40f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:27.701411+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110379008 unmapped: 32047104 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:28.701587+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110379008 unmapped: 32047104 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 128 heartbeat osd_stat(store_statfs(0x1b8103000/0x0/0x1bfc00000, data 0x34ce1f7/0x3588000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 129 ms_handle_reset con 0x56102e38e800 session 0x56102e7e4b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:29.701757+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110477312 unmapped: 31948800 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1263826 data_alloc: 184549376 data_used: 5275648
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:30.701929+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110477312 unmapped: 31948800 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:31.702137+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110485504 unmapped: 31940608 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38f800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 130 ms_handle_reset con 0x56102e38f800 session 0x56102f8845a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610320ea800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:32.702292+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110493696 unmapped: 31932416 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 131 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 131 ms_handle_reset con 0x5610320ea800 session 0x56102e810d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:33.702434+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 131 heartbeat osd_stat(store_statfs(0x1b80fb000/0x0/0x1bfc00000, data 0x34d47ad/0x3592000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110534656 unmapped: 31891456 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:34.702612+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 131 ms_handle_reset con 0x56102cf8e000 session 0x5610310d3e00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110551040 unmapped: 31875072 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 132 ms_handle_reset con 0x56102e38e400 session 0x561032270960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1273604 data_alloc: 184549376 data_used: 5300224
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:35.702789+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110608384 unmapped: 31817728 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:36.702943+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110649344 unmapped: 31776768 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 132 ms_handle_reset con 0x56102e38e800 session 0x56102ebd7c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 132 heartbeat osd_stat(store_statfs(0x1b80f7000/0x0/0x1bfc00000, data 0x34d6a21/0x3596000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:37.703220+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110649344 unmapped: 31776768 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 132 heartbeat osd_stat(store_statfs(0x1b80f8000/0x0/0x1bfc00000, data 0x34d69bf/0x3595000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:38.703398+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110649344 unmapped: 31776768 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:39.703523+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110649344 unmapped: 31776768 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 132 heartbeat osd_stat(store_statfs(0x1b80f8000/0x0/0x1bfc00000, data 0x34d69bf/0x3595000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1272140 data_alloc: 184549376 data_used: 5296128
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:40.703719+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 110649344 unmapped: 31776768 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.1 total, 600.0 interval
                                                          Cumulative writes: 8408 writes, 34K keys, 8408 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
                                                          Cumulative WAL: 8408 writes, 2192 syncs, 3.84 writes per sync, written: 0.03 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 3369 writes, 11K keys, 3369 commit groups, 1.0 writes per commit group, ingest: 11.49 MB, 0.02 MB/s
                                                          Interval WAL: 3369 writes, 1442 syncs, 2.34 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.782801628s of 14.235716820s, submitted: 138
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:41.703895+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f8000/0x0/0x1bfc00000, data 0x34d69bf/0x3595000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111796224 unmapped: 30629888 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:42.704067+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111796224 unmapped: 30629888 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:43.704241+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111796224 unmapped: 30629888 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f5000/0x0/0x1bfc00000, data 0x34d8ad5/0x3598000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:44.704406+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111796224 unmapped: 30629888 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1275114 data_alloc: 184549376 data_used: 5296128
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:45.704559+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111796224 unmapped: 30629888 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:46.704723+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111796224 unmapped: 30629888 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:47.704905+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38f800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111796224 unmapped: 30629888 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 ms_handle_reset con 0x56102e38f800 session 0x561032270d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:48.705077+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f4000/0x0/0x1bfc00000, data 0x34d8b47/0x359a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111796224 unmapped: 30629888 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:49.705243+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111796224 unmapped: 30629888 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1277844 data_alloc: 184549376 data_used: 5296128
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610320ea800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:50.705385+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 ms_handle_reset con 0x5610320ea800 session 0x5610322712c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111828992 unmapped: 30597120 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 ms_handle_reset con 0x56102cf8e000 session 0x561032271680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:51.705532+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111828992 unmapped: 30597120 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:52.705688+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f6000/0x0/0x1bfc00000, data 0x34d8ad5/0x3598000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111828992 unmapped: 30597120 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:53.705887+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.287960052s of 12.372238159s, submitted: 29
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 ms_handle_reset con 0x56102e38e400 session 0x561032271a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111828992 unmapped: 30597120 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:54.706084+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111828992 unmapped: 30597120 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f6000/0x0/0x1bfc00000, data 0x34d8ad5/0x3598000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1276075 data_alloc: 184549376 data_used: 5296128
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:55.706277+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111828992 unmapped: 30597120 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f6000/0x0/0x1bfc00000, data 0x34d8ad5/0x3598000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:56.706427+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 ms_handle_reset con 0x56102e38e800 session 0x56102fa403c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111845376 unmapped: 30580736 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f6000/0x0/0x1bfc00000, data 0x34d8ad5/0x3598000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:57.706647+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f4000/0x0/0x1bfc00000, data 0x34d8b47/0x359a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111845376 unmapped: 30580736 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38f800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 ms_handle_reset con 0x56102e38f800 session 0x56102faf6b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:58.706818+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111886336 unmapped: 30539776 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:06:59.706984+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111886336 unmapped: 30539776 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1278172 data_alloc: 184549376 data_used: 5296128
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:00.707200+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111886336 unmapped: 30539776 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:01.707392+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f4000/0x0/0x1bfc00000, data 0x34d8ad5/0x3598000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111886336 unmapped: 30539776 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:02.707603+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111886336 unmapped: 30539776 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:03.707909+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610320eb000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.975093842s of 10.040219307s, submitted: 14
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 ms_handle_reset con 0x5610320eb000 session 0x56102faf65a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111886336 unmapped: 30539776 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:04.708101+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 ms_handle_reset con 0x56102cf8e000 session 0x56102eb943c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111919104 unmapped: 30507008 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 ms_handle_reset con 0x56102e38e400 session 0x561031c3b0e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1277889 data_alloc: 184549376 data_used: 5296128
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:05.708258+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111951872 unmapped: 30474240 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f6000/0x0/0x1bfc00000, data 0x34d8ad5/0x3598000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:06.708444+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111951872 unmapped: 30474240 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:07.708663+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111951872 unmapped: 30474240 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:08.708834+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111951872 unmapped: 30474240 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f6000/0x0/0x1bfc00000, data 0x34d8ad5/0x3598000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:09.709037+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111951872 unmapped: 30474240 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1277889 data_alloc: 184549376 data_used: 5296128
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:10.709624+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111951872 unmapped: 30474240 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:11.709785+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111951872 unmapped: 30474240 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f6000/0x0/0x1bfc00000, data 0x34d8ad5/0x3598000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:12.709968+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111951872 unmapped: 30474240 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:13.710112+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111951872 unmapped: 30474240 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:14.711174+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.811990738s of 10.917205811s, submitted: 26
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111378432 unmapped: 31047680 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 44
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1279473 data_alloc: 184549376 data_used: 5296128
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:15.711508+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f1000/0x0/0x1bfc00000, data 0x34dd870/0x359d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111026176 unmapped: 31399936 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:16.711711+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111026176 unmapped: 31399936 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:17.711928+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111026176 unmapped: 31399936 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:18.712254+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80f0000/0x0/0x1bfc00000, data 0x34dead5/0x359e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111026176 unmapped: 31399936 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:19.712428+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111026176 unmapped: 31399936 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1279569 data_alloc: 184549376 data_used: 5296128
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:20.712648+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80eb000/0x0/0x1bfc00000, data 0x34e411e/0x35a3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111026176 unmapped: 31399936 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 45
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:21.712837+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111132672 unmapped: 31293440 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:22.713163+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80e8000/0x0/0x1bfc00000, data 0x34e6327/0x35a6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111132672 unmapped: 31293440 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:23.713358+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111149056 unmapped: 31277056 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:24.713766+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111149056 unmapped: 31277056 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1279345 data_alloc: 184549376 data_used: 5296128
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:25.714092+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111149056 unmapped: 31277056 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:26.714356+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80e8000/0x0/0x1bfc00000, data 0x34e6327/0x35a6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.133593559s of 12.227295876s, submitted: 24
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111190016 unmapped: 31236096 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:27.714631+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80e7000/0x0/0x1bfc00000, data 0x34e74b2/0x35a7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111190016 unmapped: 31236096 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:28.714850+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 heartbeat osd_stat(store_statfs(0x1b80e7000/0x0/0x1bfc00000, data 0x34e74b2/0x35a7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111190016 unmapped: 31236096 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:29.715046+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111190016 unmapped: 31236096 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1279927 data_alloc: 184549376 data_used: 5296128
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:30.715231+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111190016 unmapped: 31236096 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 ms_handle_reset con 0x56102e38e800 session 0x56102ebb5a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:31.715509+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111206400 unmapped: 31219712 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:32.715676+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38f800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111222784 unmapped: 31203328 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 135 handle_osd_map epochs [134,135], i have 135, src has [1,135]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:33.715806+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 135 ms_handle_reset con 0x56102e38f800 session 0x561030b0dc20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 135 heartbeat osd_stat(store_statfs(0x1b80e0000/0x0/0x1bfc00000, data 0x34e9bca/0x35ad000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111230976 unmapped: 31195136 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:34.716052+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610320eac00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102fd99000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111255552 unmapped: 31170560 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 136 ms_handle_reset con 0x56102fd99000 session 0x561030d7c000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 136 ms_handle_reset con 0x5610320eac00 session 0x56102ebd6d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1305725 data_alloc: 184549376 data_used: 5304320
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:35.716238+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111255552 unmapped: 31170560 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:36.716423+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 137 ms_handle_reset con 0x56102cf8e000 session 0x561030d7c3c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111263744 unmapped: 31162368 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.031128883s of 10.275044441s, submitted: 51
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 137 heartbeat osd_stat(store_statfs(0x1b80d4000/0x0/0x1bfc00000, data 0x34ee8d5/0x35b7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [0,0,0,1])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 137 ms_handle_reset con 0x56102e38e400 session 0x561030d7c5a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:37.716649+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 137 ms_handle_reset con 0x56102e38e800 session 0x561030d7c780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111288320 unmapped: 31137792 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:38.716792+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111304704 unmapped: 31121408 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 139 handle_osd_map epochs [138,139], i have 139, src has [1,139]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 139 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38f800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 139 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:39.716958+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 139 ms_handle_reset con 0x56102e38f800 session 0x561030d7c960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111378432 unmapped: 31047680 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1323462 data_alloc: 184549376 data_used: 5304320
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:40.717084+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111386624 unmapped: 31039488 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 140 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:41.717241+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 140 ms_handle_reset con 0x56102cf8e000 session 0x561030d7cd20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111468544 unmapped: 30957568 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 140 heartbeat osd_stat(store_statfs(0x1b80bd000/0x0/0x1bfc00000, data 0x35020e1/0x35cf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:42.717422+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 140 heartbeat osd_stat(store_statfs(0x1b80bd000/0x0/0x1bfc00000, data 0x35020e1/0x35cf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610320eac00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111476736 unmapped: 30949376 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 140 ms_handle_reset con 0x5610320eac00 session 0x56102f884960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 141 handle_osd_map epochs [141,141], i have 141, src has [1,141]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:43.717632+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 141 ms_handle_reset con 0x56102e38e800 session 0x561030d7d4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102fd99800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111550464 unmapped: 30875648 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 142 handle_osd_map epochs [141,142], i have 142, src has [1,142]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 142 ms_handle_reset con 0x561030981000 session 0x5610310d21e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:44.717827+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 142 ms_handle_reset con 0x56102fd99800 session 0x561030d7d680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111599616 unmapped: 30826496 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1336214 data_alloc: 184549376 data_used: 5324800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:45.717966+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 143 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 143 ms_handle_reset con 0x56102cf8e000 session 0x561030d7da40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 143 handle_osd_map epochs [142,143], i have 143, src has [1,143]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111607808 unmapped: 30818304 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 144 ms_handle_reset con 0x56102e38e800 session 0x561030d7dc20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:46.718088+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 144 handle_osd_map epochs [143,144], i have 144, src has [1,144]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111624192 unmapped: 30801920 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.250299454s of 10.154067993s, submitted: 259
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 145 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 145 handle_osd_map epochs [145,145], i have 145, src has [1,145]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:47.718251+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 145 ms_handle_reset con 0x561030981000 session 0x561030d7de00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111730688 unmapped: 30695424 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 145 heartbeat osd_stat(store_statfs(0x1b80a8000/0x0/0x1bfc00000, data 0x351078e/0x35e4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:48.718399+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111730688 unmapped: 30695424 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:49.718554+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 145 heartbeat osd_stat(store_statfs(0x1b80a8000/0x0/0x1bfc00000, data 0x351078e/0x35e4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111730688 unmapped: 30695424 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 145 heartbeat osd_stat(store_statfs(0x1b80a8000/0x0/0x1bfc00000, data 0x351078e/0x35e4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:50.718719+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1336967 data_alloc: 184549376 data_used: 5324800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 145 heartbeat osd_stat(store_statfs(0x1b80a8000/0x0/0x1bfc00000, data 0x351078e/0x35e4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610320eac00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111788032 unmapped: 30638080 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 146 ms_handle_reset con 0x5610320eac00 session 0x561032298d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:51.718881+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 146 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111796224 unmapped: 30629888 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 147 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:52.719119+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111804416 unmapped: 30621696 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:53.719325+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 148 ms_handle_reset con 0x561030981800 session 0x561032298f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 111845376 unmapped: 30580736 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 148 handle_osd_map epochs [147,148], i have 148, src has [1,148]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 149 ms_handle_reset con 0x56102cf8e000 session 0x5610322990e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:54.719497+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 149 ms_handle_reset con 0x56102e38e800 session 0x5610322994a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 149 handle_osd_map epochs [149,149], i have 149, src has [1,149]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 112943104 unmapped: 29483008 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 149 handle_osd_map epochs [149,149], i have 149, src has [1,149]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:55.719682+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1358621 data_alloc: 184549376 data_used: 5345280
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 112967680 unmapped: 29458432 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 150 handle_osd_map epochs [150,150], i have 150, src has [1,150]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 150 heartbeat osd_stat(store_statfs(0x1b8088000/0x0/0x1bfc00000, data 0x352c9c8/0x3606000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:56.719833+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 150 ms_handle_reset con 0x561030981000 session 0x561032299680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610320eac00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 150 ms_handle_reset con 0x5610320eac00 session 0x5610310d21e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 113057792 unmapped: 29368320 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:57.720018+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.110202789s of 10.763710022s, submitted: 199
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 113074176 unmapped: 29351936 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:58.720184+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 113074176 unmapped: 29351936 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:07:59.720364+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030980400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 150 ms_handle_reset con 0x561030980400 session 0x561030b0c000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 113074176 unmapped: 29351936 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:00.720516+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1355015 data_alloc: 184549376 data_used: 5345280
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 113082368 unmapped: 29343744 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:01.721279+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 151 heartbeat osd_stat(store_statfs(0x1b8072000/0x0/0x1bfc00000, data 0x3541257/0x361b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 151 ms_handle_reset con 0x56102cf8e000 session 0x56102e7e4000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114188288 unmapped: 28237824 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:02.721485+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114212864 unmapped: 28213248 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b806f000/0x0/0x1bfc00000, data 0x3544500/0x361e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:03.721663+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114278400 unmapped: 28147712 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:04.721834+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 152 ms_handle_reset con 0x56102e38e800 session 0x561030d7c960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 46
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114343936 unmapped: 28082176 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:05.721988+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1367489 data_alloc: 184549376 data_used: 5357568
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114343936 unmapped: 28082176 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:06.722153+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114475008 unmapped: 27951104 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b8041000/0x0/0x1bfc00000, data 0x356dbdb/0x364d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:07.722335+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114475008 unmapped: 27951104 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:08.722540+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114532352 unmapped: 27893760 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.894198418s of 11.171661377s, submitted: 102
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:09.722692+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114532352 unmapped: 27893760 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:10.724162+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1371117 data_alloc: 184549376 data_used: 5357568
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114532352 unmapped: 27893760 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 153 heartbeat osd_stat(store_statfs(0x1b8039000/0x0/0x1bfc00000, data 0x3576e84/0x3655000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:11.724337+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114548736 unmapped: 27877376 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:12.724534+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114548736 unmapped: 27877376 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:13.724695+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114548736 unmapped: 27877376 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:14.724891+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114548736 unmapped: 27877376 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:15.725075+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1375209 data_alloc: 184549376 data_used: 5369856
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114548736 unmapped: 27877376 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:16.725273+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114565120 unmapped: 27860992 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 153 heartbeat osd_stat(store_statfs(0x1b7c30000/0x0/0x1bfc00000, data 0x357ead8/0x365d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:17.725546+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114565120 unmapped: 27860992 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:18.725737+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114565120 unmapped: 27860992 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:19.726064+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114565120 unmapped: 27860992 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:20.726252+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1374623 data_alloc: 184549376 data_used: 5369856
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114565120 unmapped: 27860992 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:21.726418+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.558691025s of 12.637187004s, submitted: 31
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114565120 unmapped: 27860992 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:22.726532+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114565120 unmapped: 27860992 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 153 heartbeat osd_stat(store_statfs(0x1b7c2b000/0x0/0x1bfc00000, data 0x3585203/0x3663000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:23.726756+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114565120 unmapped: 27860992 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:24.726926+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114540544 unmapped: 27885568 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:25.727081+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1371843 data_alloc: 184549376 data_used: 5373952
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114540544 unmapped: 27885568 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:26.727249+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114540544 unmapped: 27885568 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:27.727445+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114540544 unmapped: 27885568 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:28.727640+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 153 heartbeat osd_stat(store_statfs(0x1b7c25000/0x0/0x1bfc00000, data 0x358a546/0x3669000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114540544 unmapped: 27885568 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:29.727780+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114540544 unmapped: 27885568 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:30.727938+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1370713 data_alloc: 184549376 data_used: 5369856
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114573312 unmapped: 27852800 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 153 heartbeat osd_stat(store_statfs(0x1b7c25000/0x0/0x1bfc00000, data 0x358baf6/0x3669000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:31.728197+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.941261292s of 10.005348206s, submitted: 15
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114573312 unmapped: 27852800 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:32.728404+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114573312 unmapped: 27852800 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:33.728577+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114573312 unmapped: 27852800 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 153 heartbeat osd_stat(store_statfs(0x1b7c19000/0x0/0x1bfc00000, data 0x359764e/0x3675000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:34.728719+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114573312 unmapped: 27852800 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:35.728872+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1372623 data_alloc: 184549376 data_used: 5369856
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114573312 unmapped: 27852800 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:36.729037+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114597888 unmapped: 27828224 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:37.729248+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114622464 unmapped: 27803648 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 154 heartbeat osd_stat(store_statfs(0x1b7c05000/0x0/0x1bfc00000, data 0x35aa188/0x3688000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:38.729440+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114622464 unmapped: 27803648 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:39.729613+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114622464 unmapped: 27803648 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:40.729759+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610320eac00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 154 ms_handle_reset con 0x5610320eac00 session 0x56102d8a92c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1376513 data_alloc: 184549376 data_used: 5378048
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114647040 unmapped: 27779072 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 154 heartbeat osd_stat(store_statfs(0x1b7c05000/0x0/0x1bfc00000, data 0x35aa188/0x3688000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:41.729912+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030980800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 154 ms_handle_reset con 0x561030980800 session 0x56102d92b680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.802433968s of 10.002123833s, submitted: 54
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114663424 unmapped: 27762688 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:42.730078+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 154 heartbeat osd_stat(store_statfs(0x1b7c03000/0x0/0x1bfc00000, data 0x35aaeb4/0x368b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114663424 unmapped: 27762688 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8af400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 154 ms_handle_reset con 0x56102f8af400 session 0x56102f8ff680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:43.730286+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 154 heartbeat osd_stat(store_statfs(0x1b7c03000/0x0/0x1bfc00000, data 0x35aaeb4/0x368b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114663424 unmapped: 27762688 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:44.730443+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 154 ms_handle_reset con 0x56102cf8e000 session 0x56102eb49c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114663424 unmapped: 27762688 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:45.730630+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1384691 data_alloc: 184549376 data_used: 5378048
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114704384 unmapped: 27721728 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030980800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:46.730803+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 ms_handle_reset con 0x56102e38e800 session 0x56102eb6ef00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 ms_handle_reset con 0x561030980800 session 0x561031c3b0e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114786304 unmapped: 27639808 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:47.730975+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114786304 unmapped: 27639808 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:48.731144+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114786304 unmapped: 27639808 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7bfd000/0x0/0x1bfc00000, data 0x35acfaa/0x368e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:49.731330+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610320eac00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114786304 unmapped: 27639808 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 ms_handle_reset con 0x5610320eac00 session 0x56102eb6f4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:50.731523+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1382387 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7c01000/0x0/0x1bfc00000, data 0x35acf9a/0x368d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031cde000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114786304 unmapped: 27639808 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:51.731666+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 ms_handle_reset con 0x561031cde000 session 0x56102fa0dc20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114802688 unmapped: 27623424 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:52.731831+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114802688 unmapped: 27623424 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:53.732018+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114802688 unmapped: 27623424 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:54.732221+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 114802688 unmapped: 27623424 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:55.732394+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1381690 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7c02000/0x0/0x1bfc00000, data 0x35acf38/0x368c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.822316170s of 14.056226730s, submitted: 67
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115851264 unmapped: 26574848 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:56.732643+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115851264 unmapped: 26574848 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:57.732774+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115851264 unmapped: 26574848 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:58.732930+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7bf9000/0x0/0x1bfc00000, data 0x35b4809/0x3695000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115851264 unmapped: 26574848 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:08:59.733093+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115867648 unmapped: 26558464 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:00.733428+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1387402 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115867648 unmapped: 26558464 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:01.733582+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7beb000/0x0/0x1bfc00000, data 0x35c1a63/0x36a3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115875840 unmapped: 26550272 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:02.733748+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115875840 unmapped: 26550272 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:03.733920+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115900416 unmapped: 26525696 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:04.734099+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115900416 unmapped: 26525696 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:05.734268+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7be1000/0x0/0x1bfc00000, data 0x35cacad/0x36ac000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1388362 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115900416 unmapped: 26525696 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:06.734507+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.702314377s of 10.868867874s, submitted: 39
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115941376 unmapped: 26484736 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:07.734680+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 ms_handle_reset con 0x56102cf8e000 session 0x561031074f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7bdf000/0x0/0x1bfc00000, data 0x35cd253/0x36ae000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115941376 unmapped: 26484736 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:08.734842+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 47
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7bd1000/0x0/0x1bfc00000, data 0x35d9295/0x36bc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 115941376 unmapped: 26484736 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:09.735000+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 ms_handle_reset con 0x56102e38e800 session 0x5610310752c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 120250368 unmapped: 22175744 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:10.735141+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 ms_handle_reset con 0x5610330ca000 session 0x561031075860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 ms_handle_reset con 0x5610330ca400 session 0x5610330ce960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1488354 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 116097024 unmapped: 26329088 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:11.758804+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 ms_handle_reset con 0x5610330ca800 session 0x5610330cf2c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 116129792 unmapped: 26296320 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 ms_handle_reset con 0x56102cf8e000 session 0x5610330cf860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:12.759006+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b6fb3000/0x0/0x1bfc00000, data 0x41f782b/0x42db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 116146176 unmapped: 26279936 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:13.759164+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 116170752 unmapped: 26255360 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:14.759330+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 116170752 unmapped: 26255360 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:15.759536+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1401898 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7bb9000/0x0/0x1bfc00000, data 0x35f17a5/0x36d3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 116170752 unmapped: 26255360 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:16.759726+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.313735008s of 10.068339348s, submitted: 154
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 117219328 unmapped: 25206784 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:17.759922+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 116219904 unmapped: 26206208 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:18.760083+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 116219904 unmapped: 26206208 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:19.760247+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 116219904 unmapped: 26206208 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:20.760408+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1401514 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 116219904 unmapped: 26206208 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7ba3000/0x0/0x1bfc00000, data 0x360915b/0x36ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:21.760948+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 116219904 unmapped: 26206208 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:22.761163+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 116219904 unmapped: 26206208 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:23.761445+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 117284864 unmapped: 25141248 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:24.761696+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7b90000/0x0/0x1bfc00000, data 0x361aab0/0x36fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 117284864 unmapped: 25141248 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:25.761919+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1404186 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 117284864 unmapped: 25141248 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:26.762280+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.640464783s of 10.000023842s, submitted: 53
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 117284864 unmapped: 25141248 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:27.762546+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 117293056 unmapped: 25133056 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:28.764193+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 117293056 unmapped: 25133056 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:29.764358+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7b77000/0x0/0x1bfc00000, data 0x3632d72/0x3715000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 117301248 unmapped: 25124864 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:30.764738+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7b72000/0x0/0x1bfc00000, data 0x363938b/0x371c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1409504 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 117325824 unmapped: 25100288 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:31.765518+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 117358592 unmapped: 25067520 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:32.766585+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 117358592 unmapped: 25067520 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:33.766806+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 ms_handle_reset con 0x56102e38e800 session 0x5610322992c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 118407168 unmapped: 24018944 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:34.767192+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b7b48000/0x0/0x1bfc00000, data 0x36644df/0x3745000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 118407168 unmapped: 24018944 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:35.767397+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1409584 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 ms_handle_reset con 0x5610330ca000 session 0x561032298d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 118423552 unmapped: 24002560 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:36.767499+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 ms_handle_reset con 0x561030981000 session 0x561030d7c5a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.650327682s of 10.006856918s, submitted: 362
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 119242752 unmapped: 23183360 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:37.767675+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 119259136 unmapped: 23166976 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:38.767880+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 48
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 119308288 unmapped: 23117824 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:39.768160+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 127713280 unmapped: 14712832 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:40.768315+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1735468 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b4b32000/0x0/0x1bfc00000, data 0x667a5f0/0x675b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 119332864 unmapped: 23093248 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:41.768589+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:42.768759+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 119341056 unmapped: 23085056 heap: 142426112 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:43.768946+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 119349248 unmapped: 31473664 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:44.769085+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 119414784 unmapped: 31408128 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b3b2a000/0x0/0x1bfc00000, data 0x7682730/0x7764000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1b3b2a000/0x0/0x1bfc00000, data 0x7682730/0x7764000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:45.769222+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 127909888 unmapped: 22913024 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2121626 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:46.769399+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 119685120 unmapped: 31137792 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.250531197s of 10.012410164s, submitted: 66
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:47.769623+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 119685120 unmapped: 31137792 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:48.769788+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 120848384 unmapped: 29974528 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:49.769981+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 120971264 unmapped: 29851648 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:50.770174+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 121061376 unmapped: 29761536 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1ad16a000/0x0/0x1bfc00000, data 0xce9ec2b/0xcf82000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2458014 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:51.770348+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 121135104 unmapped: 29687808 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:52.770532+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 121200640 unmapped: 29622272 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:53.770676+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 121290752 unmapped: 29532160 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:54.770821+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 121413632 unmapped: 29409280 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1aa95b000/0x0/0x1bfc00000, data 0xf6b14ab/0xf793000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:55.770987+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 121487360 unmapped: 29335552 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2788800 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:56.771157+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 121487360 unmapped: 29335552 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.254003525s of 10.068947792s, submitted: 82
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:57.771336+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129966080 unmapped: 20856832 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:58.771517+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 121716736 unmapped: 29106176 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1a893a000/0x0/0x1bfc00000, data 0x116cfe34/0x117b3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:09:59.771665+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 121716736 unmapped: 29106176 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 heartbeat osd_stat(store_statfs(0x1a893a000/0x0/0x1bfc00000, data 0x116cfe34/0x117b3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:00.771816+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 121872384 unmapped: 28950528 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 3007644 data_alloc: 184549376 data_used: 5390336
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:01.771988+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130342912 unmapped: 20480000 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:02.772143+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 122085376 unmapped: 28737536 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 156 handle_osd_map epochs [156,156], i have 156, src has [1,156]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:03.772309+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 121987072 unmapped: 28835840 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 156 heartbeat osd_stat(store_statfs(0x1a690c000/0x0/0x1bfc00000, data 0x136fc764/0x137e1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 156 heartbeat osd_stat(store_statfs(0x1a690c000/0x0/0x1bfc00000, data 0x136fc764/0x137e1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:04.772488+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130383872 unmapped: 20439040 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:05.772684+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 122126336 unmapped: 28696576 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 3342536 data_alloc: 184549376 data_used: 5398528
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:06.772890+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 122183680 unmapped: 28639232 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.418739319s of 10.453207016s, submitted: 113
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 156 heartbeat osd_stat(store_statfs(0x1a50f5000/0x0/0x1bfc00000, data 0x14f14479/0x14ff9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:07.773099+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 123256832 unmapped: 27566080 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 156 heartbeat osd_stat(store_statfs(0x1a50f5000/0x0/0x1bfc00000, data 0x14f14479/0x14ff9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:08.773274+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 131743744 unmapped: 19079168 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:09.773440+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 123396096 unmapped: 27426816 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:10.773641+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 123592704 unmapped: 27230208 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 3627378 data_alloc: 184549376 data_used: 5398528
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:11.773776+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 123707392 unmapped: 27115520 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 157 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 158 handle_osd_map epochs [158,158], i have 158, src has [1,158]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:12.773917+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 158 handle_osd_map epochs [158,158], i have 158, src has [1,158]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 158 heartbeat osd_stat(store_statfs(0x1a20c7000/0x0/0x1bfc00000, data 0x17f3fd1a/0x18027000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 123813888 unmapped: 27009024 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 158 heartbeat osd_stat(store_statfs(0x1a20c7000/0x0/0x1bfc00000, data 0x17f3fd1a/0x18027000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:13.774036+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 123977728 unmapped: 26845184 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:14.774213+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 124108800 unmapped: 26714112 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 158 heartbeat osd_stat(store_statfs(0x19f0b3000/0x0/0x1bfc00000, data 0x1af521db/0x1b03b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:15.774385+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132628480 unmapped: 18194432 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 4131796 data_alloc: 184549376 data_used: 5419008
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 159 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:16.774571+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 124575744 unmapped: 26247168 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 159 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.833921432s of 10.140710831s, submitted: 165
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:17.774728+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 124796928 unmapped: 26025984 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 160 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:18.774869+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 161 ms_handle_reset con 0x5610330ca400 session 0x56102e7e5680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 124805120 unmapped: 26017792 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 161 handle_osd_map epochs [161,161], i have 161, src has [1,161]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 161 heartbeat osd_stat(store_statfs(0x19c874000/0x0/0x1bfc00000, data 0x1d78cbf5/0x1d879000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:19.775016+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 124813312 unmapped: 26009600 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:20.775174+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 124821504 unmapped: 26001408 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 161 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 161 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 161 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 162 ms_handle_reset con 0x56102cf8e000 session 0x5610330ce1e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1521074 data_alloc: 184549376 data_used: 5443584
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:21.775330+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 125059072 unmapped: 25763840 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 163 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 163 heartbeat osd_stat(store_statfs(0x1b6857000/0x0/0x1bfc00000, data 0x37a8f7e/0x3896000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 163 ms_handle_reset con 0x56102e38e800 session 0x5610310750e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:22.775662+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 163 ms_handle_reset con 0x561030981000 session 0x56102faf6b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 125075456 unmapped: 25747456 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:23.775834+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 125149184 unmapped: 25673728 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 163 ms_handle_reset con 0x5610330ca000 session 0x56102faf6780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:24.776004+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 125272064 unmapped: 25550848 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330cac00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 163 ms_handle_reset con 0x5610330cac00 session 0x56102fa40f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:25.776175+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 125280256 unmapped: 25542656 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1532192 data_alloc: 184549376 data_used: 5443584
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:26.776552+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 164 ms_handle_reset con 0x56102cf8e000 session 0x561032271a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 164 handle_osd_map epochs [164,164], i have 164, src has [1,164]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 125313024 unmapped: 25509888 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 164 ms_handle_reset con 0x56102e38e800 session 0x561032270d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:27.776787+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 164 heartbeat osd_stat(store_statfs(0x1b680a000/0x0/0x1bfc00000, data 0x37f4584/0x38e4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 164 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 164 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.559008598s of 10.248350143s, submitted: 231
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 125534208 unmapped: 25288704 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 165 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 165 ms_handle_reset con 0x561030981000 session 0x5610330ce3c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:28.776961+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 125534208 unmapped: 25288704 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b67f9000/0x0/0x1bfc00000, data 0x3800dea/0x38f4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:29.777174+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 165 ms_handle_reset con 0x5610330ca000 session 0x56102eb72780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 125542400 unmapped: 25280512 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:30.777335+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330cb000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 165 ms_handle_reset con 0x5610330cb000 session 0x56102eb954a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 124903424 unmapped: 25919488 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1547000 data_alloc: 184549376 data_used: 5464064
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:31.777580+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 127016960 unmapped: 23805952 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 165 ms_handle_reset con 0x56102cf8e000 session 0x561030d7cb40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 165 ms_handle_reset con 0x56102e38e800 session 0x56102eb95e00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:32.777808+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 127025152 unmapped: 23797760 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:33.777938+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 127205376 unmapped: 23617536 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 165 ms_handle_reset con 0x561030981000 session 0x56102d92b680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 165 ms_handle_reset con 0x5610330ca000 session 0x56103140fe00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:34.778158+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b55ff000/0x0/0x1bfc00000, data 0x385982d/0x394e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6caf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 127238144 unmapped: 23584768 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:35.778306+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 127238144 unmapped: 23584768 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330cb000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1548482 data_alloc: 184549376 data_used: 5464064
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 166 ms_handle_reset con 0x5610330cb000 session 0x56103140e1e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:36.778471+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 166 handle_osd_map epochs [166,166], i have 166, src has [1,166]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 127336448 unmapped: 23486464 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:37.778633+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.552536964s of 10.063187599s, submitted: 166
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 167 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 127344640 unmapped: 23478272 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 167 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:38.778862+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 127344640 unmapped: 23478272 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 167 ms_handle_reset con 0x56102cf8e000 session 0x56102fa10780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 167 handle_osd_map epochs [167,168], i have 167, src has [1,168]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 49
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:39.779018+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 127287296 unmapped: 23535616 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 168 ms_handle_reset con 0x56102e38e800 session 0x56102e86e3c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 168 handle_osd_map epochs [169,169], i have 169, src has [1,169]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:40.779256+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 169 ms_handle_reset con 0x561030981000 session 0x561031c3a3c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b51be000/0x0/0x1bfc00000, data 0x3896397/0x398f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 127229952 unmapped: 23592960 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1572189 data_alloc: 184549376 data_used: 5492736
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:41.779393+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330cb400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 170 ms_handle_reset con 0x5610330cb400 session 0x56102fa105a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330cb800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 170 ms_handle_reset con 0x5610330cb800 session 0x56102eb6e960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 128483328 unmapped: 22339584 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:42.779526+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 128761856 unmapped: 22061056 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 171 ms_handle_reset con 0x5610330ca000 session 0x561032298000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:43.779691+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 128933888 unmapped: 21889024 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 171 ms_handle_reset con 0x56102cf8e000 session 0x56102f90fa40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:44.779850+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 172 handle_osd_map epochs [172,172], i have 172, src has [1,172]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 172 handle_osd_map epochs [172,172], i have 172, src has [1,172]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 172 ms_handle_reset con 0x56102e38e800 session 0x561032299c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 128983040 unmapped: 21839872 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:45.780023+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 50
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129081344 unmapped: 21741568 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 172 heartbeat osd_stat(store_statfs(0x1b5151000/0x0/0x1bfc00000, data 0x38ff445/0x39fd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1586157 data_alloc: 184549376 data_used: 5492736
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:46.780246+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 173 handle_osd_map epochs [173,173], i have 173, src has [1,173]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129114112 unmapped: 21708800 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:47.780477+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 174 ms_handle_reset con 0x561030981000 session 0x561032298960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330cb400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.053727150s of 10.000649452s, submitted: 299
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129138688 unmapped: 21684224 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 174 ms_handle_reset con 0x5610330cb400 session 0x5610322990e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:48.780674+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129146880 unmapped: 21676032 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:49.780850+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129171456 unmapped: 21651456 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:50.781045+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129171456 unmapped: 21651456 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 174 handle_osd_map epochs [174,175], i have 174, src has [1,175]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1594385 data_alloc: 184549376 data_used: 5513216
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:51.781211+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 175 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 175 heartbeat osd_stat(store_statfs(0x1b5133000/0x0/0x1bfc00000, data 0x391ac22/0x3a1a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129187840 unmapped: 21635072 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:52.781383+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129187840 unmapped: 21635072 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:53.781574+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129187840 unmapped: 21635072 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 175 heartbeat osd_stat(store_statfs(0x1b5119000/0x0/0x1bfc00000, data 0x393240c/0x3a34000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:54.781786+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129187840 unmapped: 21635072 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:55.781968+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129187840 unmapped: 21635072 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 175 handle_osd_map epochs [175,176], i have 175, src has [1,176]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1606875 data_alloc: 184549376 data_used: 5525504
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:56.782146+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129220608 unmapped: 21602304 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:57.782406+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129220608 unmapped: 21602304 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:58.782634+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129220608 unmapped: 21602304 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 176 heartbeat osd_stat(store_statfs(0x1b50f8000/0x0/0x1bfc00000, data 0x3952a38/0x3a54000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:10:59.782782+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129236992 unmapped: 21585920 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:00.782967+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.901246071s of 13.189706802s, submitted: 86
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 129179648 unmapped: 21643264 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1609021 data_alloc: 184549376 data_used: 5525504
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:01.783087+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130301952 unmapped: 20520960 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:02.783228+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130301952 unmapped: 20520960 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:03.783412+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130301952 unmapped: 20520960 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:04.783620+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 176 heartbeat osd_stat(store_statfs(0x1b6091000/0x0/0x1bfc00000, data 0x3999fef/0x3a9c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 176 ms_handle_reset con 0x56102cf8e000 session 0x56102fd47a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130318336 unmapped: 20504576 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:05.783779+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130318336 unmapped: 20504576 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1611891 data_alloc: 184549376 data_used: 5525504
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:06.783933+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130293760 unmapped: 20529152 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:07.784141+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 176 ms_handle_reset con 0x56102e38e800 session 0x56102d92b4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 136486912 unmapped: 14336000 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 176 ms_handle_reset con 0x561030981000 session 0x5610322712c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 176 ms_handle_reset con 0x5610330ca000 session 0x56103140f860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:08.784328+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130678784 unmapped: 20144128 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:09.784544+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130678784 unmapped: 20144128 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330cbc00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 176 ms_handle_reset con 0x5610330cbc00 session 0x56102f90fe00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:10.784684+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 176 heartbeat osd_stat(store_statfs(0x1b587f000/0x0/0x1bfc00000, data 0x41ac798/0x42ae000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130670592 unmapped: 20152320 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1679511 data_alloc: 184549376 data_used: 5525504
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 176 handle_osd_map epochs [176,177], i have 176, src has [1,177]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.988625526s of 10.451142311s, submitted: 100
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:11.784853+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 177 ms_handle_reset con 0x56102cf8e000 session 0x56102ebb4960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130686976 unmapped: 20135936 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:12.784994+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 177 ms_handle_reset con 0x56102e38e800 session 0x561032270960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 177 ms_handle_reset con 0x561030981000 session 0x561032271a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130695168 unmapped: 20127744 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:13.785158+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130695168 unmapped: 20127744 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:14.785344+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130711552 unmapped: 20111360 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 177 ms_handle_reset con 0x5610330ca000 session 0x561032270d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330cbc00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 177 ms_handle_reset con 0x5610330cbc00 session 0x56102faf6000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:15.785539+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 177 handle_osd_map epochs [177,178], i have 177, src has [1,178]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 178 ms_handle_reset con 0x56102cf8e000 session 0x56102fafbe00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130719744 unmapped: 20103168 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1686418 data_alloc: 184549376 data_used: 5545984
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 178 heartbeat osd_stat(store_statfs(0x1b583b000/0x0/0x1bfc00000, data 0x41ec74b/0x42f2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:16.785659+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 178 handle_osd_map epochs [178,179], i have 178, src has [1,179]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 179 handle_osd_map epochs [179,179], i have 179, src has [1,179]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 179 handle_osd_map epochs [179,179], i have 179, src has [1,179]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130744320 unmapped: 20078592 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 179 ms_handle_reset con 0x56102e38e800 session 0x56102eb6ef00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:17.785840+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 130744320 unmapped: 20078592 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:18.786005+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 131801088 unmapped: 19021824 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 179 handle_osd_map epochs [179,180], i have 179, src has [1,180]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 180 handle_osd_map epochs [180,180], i have 180, src has [1,180]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:19.786149+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 180 ms_handle_reset con 0x5610330ca000 session 0x56102eb6f4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 180 ms_handle_reset con 0x561030981000 session 0x56102fa0d2c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 131424256 unmapped: 19398656 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:20.786328+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330cbc00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 180 ms_handle_reset con 0x5610330cbc00 session 0x56102ebd6000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 180 ms_handle_reset con 0x56102e38e800 session 0x561031075c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 131432448 unmapped: 19390464 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 180 heartbeat osd_stat(store_statfs(0x1b580f000/0x0/0x1bfc00000, data 0x42132e6/0x431d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 180 ms_handle_reset con 0x56102cf8e000 session 0x56102d92b680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1700290 data_alloc: 184549376 data_used: 5558272
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:21.786535+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.042214394s of 10.368903160s, submitted: 77
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132481024 unmapped: 18341888 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:22.786765+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132481024 unmapped: 18341888 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 180 ms_handle_reset con 0x5610330ca000 session 0x5610310743c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb5800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 180 ms_handle_reset con 0x561031fb5800 session 0x56102eba8960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb5400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:23.786905+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 180 ms_handle_reset con 0x561031fb5400 session 0x56102e7e5680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132431872 unmapped: 18391040 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:24.787039+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 181 ms_handle_reset con 0x561030981000 session 0x56102eb95680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132431872 unmapped: 18391040 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:25.787181+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132431872 unmapped: 18391040 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1706419 data_alloc: 184549376 data_used: 5562368
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:26.787333+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 181 ms_handle_reset con 0x56102cf8e000 session 0x561032271680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132448256 unmapped: 18374656 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 181 heartbeat osd_stat(store_statfs(0x1b57db000/0x0/0x1bfc00000, data 0x424555b/0x4351000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:27.787548+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132448256 unmapped: 18374656 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 181 ms_handle_reset con 0x56102e38e800 session 0x56102fa0dc20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb4800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 181 ms_handle_reset con 0x561031fb4800 session 0x5610330cf0e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:28.787690+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb5c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 182 ms_handle_reset con 0x561031fb5c00 session 0x561031074b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132481024 unmapped: 18341888 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 182 handle_osd_map epochs [182,183], i have 182, src has [1,183]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 183 handle_osd_map epochs [183,183], i have 183, src has [1,183]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 183 ms_handle_reset con 0x56102cf8e000 session 0x56102e7e5a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:29.787834+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 183 handle_osd_map epochs [183,183], i have 183, src has [1,183]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132505600 unmapped: 18317312 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:30.787987+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132530176 unmapped: 18292736 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 183 handle_osd_map epochs [183,183], i have 183, src has [1,183]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 183 ms_handle_reset con 0x56102e38e800 session 0x561030d7cb40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1721307 data_alloc: 184549376 data_used: 5570560
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:31.788176+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.471020699s of 10.003564835s, submitted: 140
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b57c7000/0x0/0x1bfc00000, data 0x4255547/0x4365000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132538368 unmapped: 18284544 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 183 handle_osd_map epochs [183,184], i have 183, src has [1,184]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:32.788318+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 184 ms_handle_reset con 0x561030981000 session 0x561032298d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 184 handle_osd_map epochs [184,184], i have 184, src has [1,184]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 184 heartbeat osd_stat(store_statfs(0x1b57a9000/0x0/0x1bfc00000, data 0x42750aa/0x4383000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132554752 unmapped: 18268160 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb4800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 184 ms_handle_reset con 0x561031fb4800 session 0x56102ebd7860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:33.788525+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132562944 unmapped: 18259968 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb5000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 184 ms_handle_reset con 0x561031fb5000 session 0x561031c3ad20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:34.788679+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132562944 unmapped: 18259968 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 184 heartbeat osd_stat(store_statfs(0x1b5788000/0x0/0x1bfc00000, data 0x42954f2/0x43a4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:35.788893+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132562944 unmapped: 18259968 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 186 ms_handle_reset con 0x56102cf8e000 session 0x5610322992c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1730891 data_alloc: 184549376 data_used: 5595136
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:36.789063+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 186 handle_osd_map epochs [186,186], i have 186, src has [1,186]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 186 ms_handle_reset con 0x56102e38e800 session 0x561031eb52c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 132620288 unmapped: 18202624 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 186 handle_osd_map epochs [185,186], i have 186, src has [1,186]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 187 handle_osd_map epochs [187,187], i have 187, src has [1,187]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b577f000/0x0/0x1bfc00000, data 0x4299b51/0x43ad000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:37.789268+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 187 ms_handle_reset con 0x561030981000 session 0x561030b0d860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb4800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 187 ms_handle_reset con 0x561031fb4800 session 0x56102d8a8b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb5800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 187 ms_handle_reset con 0x561031fb5800 session 0x56102d92b860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 133586944 unmapped: 17235968 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:38.789442+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b5018000/0x0/0x1bfc00000, data 0x49fcdce/0x4b12000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 187 ms_handle_reset con 0x56102cf8e000 session 0x56102d92b4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 133709824 unmapped: 17113088 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:39.789660+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 188 ms_handle_reset con 0x56102e38e800 session 0x56102ebd6000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 133734400 unmapped: 17088512 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 188 ms_handle_reset con 0x561030981000 session 0x56102eb49a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb4800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:40.789774+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 188 ms_handle_reset con 0x561031fb4800 session 0x56102faf6d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 133914624 unmapped: 16908288 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1757069 data_alloc: 184549376 data_used: 5595136
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb5800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b573c000/0x0/0x1bfc00000, data 0x42d7725/0x43ee000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 188 handle_osd_map epochs [189,189], i have 189, src has [1,189]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 188 handle_osd_map epochs [189,189], i have 189, src has [1,189]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 188 handle_osd_map epochs [189,189], i have 189, src has [1,189]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 188 handle_osd_map epochs [189,189], i have 189, src has [1,189]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:41.789917+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 189 ms_handle_reset con 0x5610330ca000 session 0x56102fd47c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 189 ms_handle_reset con 0x561031fb5800 session 0x56102e811e00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 133955584 unmapped: 16867328 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:42.790094+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 133955584 unmapped: 16867328 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:43.790229+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.988271713s of 11.912605286s, submitted: 259
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 134004736 unmapped: 16818176 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:44.790499+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 190 heartbeat osd_stat(store_statfs(0x1b571d000/0x0/0x1bfc00000, data 0x42f4ff0/0x440f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 134021120 unmapped: 16801792 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:45.790691+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 190 ms_handle_reset con 0x56102cf8e000 session 0x56102e86ef00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 190 heartbeat osd_stat(store_statfs(0x1b570f000/0x0/0x1bfc00000, data 0x4304e17/0x441f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 134086656 unmapped: 16736256 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1762197 data_alloc: 184549376 data_used: 5595136
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:46.790852+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 191 handle_osd_map epochs [191,191], i have 191, src has [1,191]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 135192576 unmapped: 15630336 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:47.791036+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 191 ms_handle_reset con 0x56102e38e800 session 0x561031c3a1e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 135200768 unmapped: 15622144 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 191 heartbeat osd_stat(store_statfs(0x1b56e9000/0x0/0x1bfc00000, data 0x432828c/0x4443000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:48.791222+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 191 heartbeat osd_stat(store_statfs(0x1b56e8000/0x0/0x1bfc00000, data 0x432831c/0x4444000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [0,0,0,1])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 134610944 unmapped: 16211968 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:49.791379+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 191 heartbeat osd_stat(store_statfs(0x1b56e4000/0x0/0x1bfc00000, data 0x432e9c3/0x444a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 134610944 unmapped: 16211968 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:50.791536+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 134610944 unmapped: 16211968 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1771069 data_alloc: 184549376 data_used: 5607424
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:51.791698+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 191 heartbeat osd_stat(store_statfs(0x1b56cb000/0x0/0x1bfc00000, data 0x4346cd8/0x4462000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 191 handle_osd_map epochs [192,192], i have 192, src has [1,192]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 191 handle_osd_map epochs [192,192], i have 192, src has [1,192]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 191 handle_osd_map epochs [192,192], i have 192, src has [1,192]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 134660096 unmapped: 16162816 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 192 heartbeat osd_stat(store_statfs(0x1b56b4000/0x0/0x1bfc00000, data 0x435d0e8/0x4479000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:52.791851+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 192 ms_handle_reset con 0x561030981000 session 0x56102f8ff680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 134668288 unmapped: 16154624 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:53.791995+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb4800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.872537613s of 10.279749870s, submitted: 148
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 135798784 unmapped: 15024128 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:54.792123+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 193 ms_handle_reset con 0x561031fb4800 session 0x56102fafad20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 135839744 unmapped: 14983168 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:55.792503+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 194 handle_osd_map epochs [193,194], i have 194, src has [1,194]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 135970816 unmapped: 14852096 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1788103 data_alloc: 184549376 data_used: 5627904
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:56.792674+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 194 handle_osd_map epochs [194,195], i have 194, src has [1,195]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 195 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 195 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 195 heartbeat osd_stat(store_statfs(0x1b5666000/0x0/0x1bfc00000, data 0x43a6b4f/0x44c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [1])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 195 ms_handle_reset con 0x56102cf8e000 session 0x561030b0d0e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 136003584 unmapped: 14819328 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:57.792863+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 196 handle_osd_map epochs [196,196], i have 196, src has [1,196]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 196 ms_handle_reset con 0x56102e38e800 session 0x56102fa10f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 136019968 unmapped: 14802944 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb5800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:58.793011+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 197 handle_osd_map epochs [197,197], i have 197, src has [1,197]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 197 ms_handle_reset con 0x561031fb5800 session 0x56102e7e43c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 136077312 unmapped: 14745600 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 197 ms_handle_reset con 0x561030981000 session 0x56102eba92c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:11:59.793190+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030d60c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 198 handle_osd_map epochs [197,198], i have 198, src has [1,198]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 198 ms_handle_reset con 0x561030d60c00 session 0x56102ebb52c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 136126464 unmapped: 14696448 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:00.793347+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 198 ms_handle_reset con 0x56102e38e800 session 0x56102fa41680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 136134656 unmapped: 14688256 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 198 ms_handle_reset con 0x56102cf8e000 session 0x56102fa41a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 199 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:01.793497+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1812276 data_alloc: 184549376 data_used: 5656576
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 199 heartbeat osd_stat(store_statfs(0x1b563f000/0x0/0x1bfc00000, data 0x43c09a9/0x44ea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 137265152 unmapped: 13557760 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:02.793668+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 200 ms_handle_reset con 0x561030981000 session 0x56102e7e5860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 137322496 unmapped: 13500416 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030d60c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:03.793813+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 200 ms_handle_reset con 0x561030d60c00 session 0x56102fa0c960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 137445376 unmapped: 13377536 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.036987305s of 10.330767632s, submitted: 231
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:04.793976+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 137461760 unmapped: 13361152 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:05.794154+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 200 heartbeat osd_stat(store_statfs(0x1b55e4000/0x0/0x1bfc00000, data 0x4420292/0x454a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb5800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 200 ms_handle_reset con 0x561031fb5800 session 0x56102e86fc20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 137502720 unmapped: 13320192 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 200 handle_osd_map epochs [200,201], i have 200, src has [1,201]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:06.794324+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1819676 data_alloc: 184549376 data_used: 5664768
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 201 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 137617408 unmapped: 13205504 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:07.794583+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 201 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 202 ms_handle_reset con 0x56102cf8e000 session 0x5610322703c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 202 heartbeat osd_stat(store_statfs(0x1b51ad000/0x0/0x1bfc00000, data 0x444dcc0/0x457e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 137650176 unmapped: 13172736 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:08.794771+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 202 ms_handle_reset con 0x561030981000 session 0x56102eb6f4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 202 ms_handle_reset con 0x56102e38e800 session 0x561032270960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 137650176 unmapped: 13172736 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:09.794963+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030d60c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 137822208 unmapped: 13000704 heap: 150822912 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb5800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:10.795115+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 137945088 unmapped: 29671424 heap: 167616512 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:11.795269+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2255625 data_alloc: 184549376 data_used: 5668864
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 137904128 unmapped: 29712384 heap: 167616512 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:12.795419+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 202 heartbeat osd_stat(store_statfs(0x1b057a000/0x0/0x1bfc00000, data 0x9081d15/0x91b4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,1])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 138027008 unmapped: 29589504 heap: 167616512 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:13.795575+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 142336000 unmapped: 25280512 heap: 167616512 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.924440384s of 10.028986931s, submitted: 120
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:14.795763+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 202 heartbeat osd_stat(store_statfs(0x1ab15c000/0x0/0x1bfc00000, data 0xe4a0c9d/0xe5d2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 143482880 unmapped: 24133632 heap: 167616512 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 202 heartbeat osd_stat(store_statfs(0x1ab15c000/0x0/0x1bfc00000, data 0xe4a0c9d/0xe5d2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,2])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:15.796181+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 147103744 unmapped: 20512768 heap: 167616512 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:16.796337+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 3289927 data_alloc: 184549376 data_used: 5668864
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 202 heartbeat osd_stat(store_statfs(0x1a753a000/0x0/0x1bfc00000, data 0x120c483f/0x121f4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 144039936 unmapped: 23576576 heap: 167616512 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102fe59400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:17.796529+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 202 ms_handle_reset con 0x56102fe59400 session 0x56102e7e4000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 202 heartbeat osd_stat(store_statfs(0x1a753a000/0x0/0x1bfc00000, data 0x120c483f/0x121f4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145399808 unmapped: 26419200 heap: 171819008 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:18.796703+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 157818880 unmapped: 14000128 heap: 171819008 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 202 heartbeat osd_stat(store_statfs(0x1a2931000/0x0/0x1bfc00000, data 0x16cce5dd/0x16dfd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,1])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:19.796843+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162029568 unmapped: 9789440 heap: 171819008 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:20.797008+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 202 ms_handle_reset con 0x561030b67800 session 0x56102e7e5680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 149479424 unmapped: 22339584 heap: 171819008 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:21.797140+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 4389497 data_alloc: 184549376 data_used: 5668864
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 158023680 unmapped: 22200320 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:22.797260+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 203 ms_handle_reset con 0x56102dc96800 session 0x56102fa414a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 203 ms_handle_reset con 0x561031fb5800 session 0x56102eb72780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 203 ms_handle_reset con 0x561030d60c00 session 0x561032270d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 203 ms_handle_reset con 0x56102e38e800 session 0x56102fa10f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102fe59400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 141508608 unmapped: 38715392 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 203 ms_handle_reset con 0x56102fe59400 session 0x561030b0d0e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:23.797428+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 204 ms_handle_reset con 0x56102cf8e000 session 0x56102fa41e00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 204 heartbeat osd_stat(store_statfs(0x19b8f1000/0x0/0x1bfc00000, data 0x1dd05cab/0x1de38000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 141565952 unmapped: 38658048 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.134745598s of 10.004568100s, submitted: 154
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 204 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 204 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 205 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:24.797589+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 205 ms_handle_reset con 0x56102dc96800 session 0x561031c3bc20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 205 ms_handle_reset con 0x56102e38e800 session 0x56102eb730e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030d60c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 141180928 unmapped: 39043072 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:25.797734+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 206 ms_handle_reset con 0x561030d60c00 session 0x56102e86f680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b50d9000/0x0/0x1bfc00000, data 0x451dce5/0x4654000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 141230080 unmapped: 38993920 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb5800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:26.797864+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 207 ms_handle_reset con 0x561031fb5800 session 0x56102fd474a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1979605 data_alloc: 184549376 data_used: 5689344
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 141344768 unmapped: 38879232 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 207 ms_handle_reset con 0x56102cf8e000 session 0x56102fd46000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:27.798055+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 207 handle_osd_map epochs [208,208], i have 208, src has [1,208]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 142508032 unmapped: 37715968 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:28.798280+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 208 handle_osd_map epochs [208,209], i have 208, src has [1,209]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 209 ms_handle_reset con 0x56102dc96800 session 0x56102fd47860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 142548992 unmapped: 37675008 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:29.798463+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 209 ms_handle_reset con 0x56102e38e800 session 0x56102e7e5a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030d60c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 209 ms_handle_reset con 0x561030d60c00 session 0x561031c3af00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031fb5800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 142704640 unmapped: 37519360 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:30.798655+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 209 ms_handle_reset con 0x561031fb5800 session 0x561030d7d860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 210 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 142761984 unmapped: 37462016 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:31.798816+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 211 ms_handle_reset con 0x56102dc96800 session 0x561030b0d4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 1991320 data_alloc: 184549376 data_used: 5693440
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 211 ms_handle_reset con 0x56102cf8e000 session 0x56102fd470e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b5072000/0x0/0x1bfc00000, data 0x457f4ad/0x46bb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030d60c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 142843904 unmapped: 37380096 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 211 ms_handle_reset con 0x561030981000 session 0x561030d7c3c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 211 ms_handle_reset con 0x561030d60c00 session 0x56102fd465a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610343bfc00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:32.798955+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 211 ms_handle_reset con 0x5610343bfc00 session 0x561031c3a960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 211 ms_handle_reset con 0x56102e38e800 session 0x561030d7dc20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 142876672 unmapped: 37347328 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 212 ms_handle_reset con 0x56102cf8e000 session 0x56102fa0d4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:33.799109+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 212 ms_handle_reset con 0x561030981000 session 0x561031c3a1e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030d60c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 212 ms_handle_reset con 0x56102dc96800 session 0x56102ebd6780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 212 ms_handle_reset con 0x561030d60c00 session 0x56102e86ef00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 142974976 unmapped: 37249024 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.414744377s of 10.164743423s, submitted: 314
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:34.799240+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 212 ms_handle_reset con 0x56102cf8e000 session 0x56102ebd70e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 213 ms_handle_reset con 0x56102dc96800 session 0x56103140f680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 143024128 unmapped: 37199872 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:35.799392+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 213 ms_handle_reset con 0x56102e38e800 session 0x561032298780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 213 ms_handle_reset con 0x561030981000 session 0x561031c3a5a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 143065088 unmapped: 37158912 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 214 heartbeat osd_stat(store_statfs(0x1b504d000/0x0/0x1bfc00000, data 0x45a0fe5/0x46e0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:36.799549+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2002630 data_alloc: 184549376 data_used: 5713920
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030d60c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 214 ms_handle_reset con 0x561030d60c00 session 0x561031c3a3c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 143212544 unmapped: 37011456 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:37.799734+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 143212544 unmapped: 37011456 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:38.799881+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 214 heartbeat osd_stat(store_statfs(0x1b5016000/0x0/0x1bfc00000, data 0x45d35af/0x4717000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 215 ms_handle_reset con 0x56102cf8e000 session 0x56102f884780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 143294464 unmapped: 36929536 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:39.800072+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 215 ms_handle_reset con 0x56102e38e800 session 0x56102fd47c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 215 ms_handle_reset con 0x56102dc96800 session 0x561031c3ba40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 143548416 unmapped: 36675584 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:40.800215+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 215 ms_handle_reset con 0x561030981000 session 0x561030d7cd20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030d60c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 215 ms_handle_reset con 0x561030d60c00 session 0x56103140f860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 143564800 unmapped: 36659200 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:41.800360+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2012521 data_alloc: 184549376 data_used: 5713920
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 216 ms_handle_reset con 0x56102cf8e000 session 0x56102faf6000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b4ff3000/0x0/0x1bfc00000, data 0x45f4ddb/0x473a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 143589376 unmapped: 36634624 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:42.800521+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 216 ms_handle_reset con 0x56102e38e800 session 0x56102faf6d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 217 ms_handle_reset con 0x56102dc96800 session 0x56102e86f0e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 143589376 unmapped: 36634624 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:43.800692+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610343bfc00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 217 ms_handle_reset con 0x561030981000 session 0x5610310d3a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 217 ms_handle_reset con 0x5610343bfc00 session 0x56102eb49a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 143753216 unmapped: 36470784 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:44.800901+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 143753216 unmapped: 36470784 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.354174614s of 10.940576553s, submitted: 155
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 217 ms_handle_reset con 0x56102dc96800 session 0x56102e7e4d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:45.801045+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 217 ms_handle_reset con 0x56102cf8e000 session 0x56102faf6f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b4fcf000/0x0/0x1bfc00000, data 0x4615d9c/0x475d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 144908288 unmapped: 35315712 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:46.801281+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2019738 data_alloc: 184549376 data_used: 5726208
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 217 ms_handle_reset con 0x56102e38e800 session 0x561031c3bc20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 217 ms_handle_reset con 0x561030981000 session 0x561031c3a960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145022976 unmapped: 35201024 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:47.801529+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145022976 unmapped: 35201024 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 218 handle_osd_map epochs [218,218], i have 218, src has [1,218]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:48.801674+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 218 heartbeat osd_stat(store_statfs(0x1b4f8d000/0x0/0x1bfc00000, data 0x4659f7f/0x47a0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 218 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 218 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 144326656 unmapped: 35897344 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:49.801847+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 144326656 unmapped: 35897344 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:50.802042+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102eb0bc00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 219 ms_handle_reset con 0x56102eb0bc00 session 0x561030d7c3c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 144334848 unmapped: 35889152 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 219 handle_osd_map epochs [219,220], i have 219, src has [1,220]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:51.802319+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2035047 data_alloc: 184549376 data_used: 5738496
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 220 handle_osd_map epochs [220,220], i have 220, src has [1,220]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 220 ms_handle_reset con 0x56102cf8e000 session 0x56102ebb4780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 144334848 unmapped: 35889152 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:52.802638+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 220 heartbeat osd_stat(store_statfs(0x1b4f77000/0x0/0x1bfc00000, data 0x466b243/0x47b6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 144351232 unmapped: 35872768 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:53.802783+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 144351232 unmapped: 35872768 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 220 heartbeat osd_stat(store_statfs(0x1b4f77000/0x0/0x1bfc00000, data 0x466b243/0x47b6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:54.802989+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 220 heartbeat osd_stat(store_statfs(0x1b4f77000/0x0/0x1bfc00000, data 0x466b243/0x47b6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 144179200 unmapped: 36044800 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 220 heartbeat osd_stat(store_statfs(0x1b4f77000/0x0/0x1bfc00000, data 0x466b243/0x47b6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:55.803213+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.483802795s of 10.835494995s, submitted: 141
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 144220160 unmapped: 36003840 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:56.803331+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 221 ms_handle_reset con 0x56102dc96800 session 0x56102e7e4000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2040837 data_alloc: 184549376 data_used: 5754880
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 221 ms_handle_reset con 0x56102e38e800 session 0x56102e86ed20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145276928 unmapped: 34947072 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:57.803544+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145276928 unmapped: 34947072 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:58.803703+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 221 heartbeat osd_stat(store_statfs(0x1b4f55000/0x0/0x1bfc00000, data 0x468ea14/0x47d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145276928 unmapped: 34947072 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:12:59.803916+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145326080 unmapped: 34897920 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:00.804109+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 221 heartbeat osd_stat(store_statfs(0x1b4f4b000/0x0/0x1bfc00000, data 0x4699865/0x47e3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145326080 unmapped: 34897920 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:01.804303+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2040837 data_alloc: 184549376 data_used: 5750784
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145326080 unmapped: 34897920 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:02.804440+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145334272 unmapped: 34889728 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:03.804650+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145334272 unmapped: 34889728 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:04.804861+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145334272 unmapped: 34889728 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:05.805028+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 221 heartbeat osd_stat(store_statfs(0x1b4f40000/0x0/0x1bfc00000, data 0x46a4ec0/0x47ee000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.016202927s of 10.145637512s, submitted: 47
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145334272 unmapped: 34889728 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 221 ms_handle_reset con 0x561030981000 session 0x56102fa41e00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:06.805166+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2044803 data_alloc: 184549376 data_used: 5750784
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145342464 unmapped: 34881536 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:07.805365+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102d901800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 221 ms_handle_reset con 0x56102d901800 session 0x56102e811c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145350656 unmapped: 34873344 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:08.805710+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145350656 unmapped: 34873344 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:09.806006+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 221 heartbeat osd_stat(store_statfs(0x1b4f2f000/0x0/0x1bfc00000, data 0x46b4ae2/0x47fe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145408000 unmapped: 34816000 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:10.806258+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 221 heartbeat osd_stat(store_statfs(0x1b4f0f000/0x0/0x1bfc00000, data 0x46d45b3/0x481f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145408000 unmapped: 34816000 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:11.806584+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2047597 data_alloc: 184549376 data_used: 5750784
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 145408000 unmapped: 34816000 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:12.806784+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 146587648 unmapped: 33636352 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:13.807016+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 146587648 unmapped: 33636352 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:14.807239+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 221 ms_handle_reset con 0x56102cf8e000 session 0x5610322703c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 221 heartbeat osd_stat(store_statfs(0x1b4edb000/0x0/0x1bfc00000, data 0x470641c/0x4853000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 146604032 unmapped: 33619968 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:15.807499+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.919329643s of 10.067067146s, submitted: 33
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 146726912 unmapped: 33497088 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:16.807653+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2056259 data_alloc: 184549376 data_used: 5750784
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 221 ms_handle_reset con 0x56102dc96800 session 0x561032270d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 146726912 unmapped: 33497088 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:17.808157+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 222 ms_handle_reset con 0x56102e38e800 session 0x56102eb6f4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 146726912 unmapped: 33497088 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:18.808320+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 222 ms_handle_reset con 0x561030981000 session 0x56102fa414a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102fd90000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 223 handle_osd_map epochs [223,223], i have 223, src has [1,223]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 223 ms_handle_reset con 0x56102fd90000 session 0x56102cd5be00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 146874368 unmapped: 33349632 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:19.808580+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 223 ms_handle_reset con 0x56102cf8e000 session 0x56103140e5a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 223 ms_handle_reset con 0x56102dc96800 session 0x5610310741e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 146931712 unmapped: 33292288 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:20.809699+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b4e7d000/0x0/0x1bfc00000, data 0x475d8e8/0x48b0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 224 ms_handle_reset con 0x56102e38e800 session 0x5610322705a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 147193856 unmapped: 33030144 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:21.809878+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 224 ms_handle_reset con 0x561030981000 session 0x561031eb52c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2077128 data_alloc: 184549376 data_used: 5771264
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 224 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e452800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 224 ms_handle_reset con 0x56102e452800 session 0x56102fafb2c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:22.810073+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 147193856 unmapped: 33030144 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 224 ms_handle_reset con 0x56102cf8e000 session 0x56102fd47860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:23.810276+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 147210240 unmapped: 33013760 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 224 ms_handle_reset con 0x56102dc96800 session 0x56102f8854a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:24.810416+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 147374080 unmapped: 32849920 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 224 ms_handle_reset con 0x56102e38e800 session 0x5610330ce000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 224 heartbeat osd_stat(store_statfs(0x1b4e57000/0x0/0x1bfc00000, data 0x4782606/0x48d5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:25.810685+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 147382272 unmapped: 32841728 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e452800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 224 ms_handle_reset con 0x56102e452800 session 0x561030d7c960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:26.810879+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 146677760 unmapped: 33546240 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.590348244s of 10.287829399s, submitted: 239
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 225 ms_handle_reset con 0x561030981000 session 0x56102ebd63c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2086060 data_alloc: 184549376 data_used: 5787648
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 225 ms_handle_reset con 0x56102cf8e000 session 0x56103140f2c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:27.811089+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 146997248 unmapped: 33226752 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 226 ms_handle_reset con 0x56102dc96800 session 0x561030d7c1e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:28.811230+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 147120128 unmapped: 33103872 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 226 heartbeat osd_stat(store_statfs(0x1b4e23000/0x0/0x1bfc00000, data 0x47b49e0/0x490a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 226 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 226 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 226 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 226 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 227 ms_handle_reset con 0x56102e38e800 session 0x561032271680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e452800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:29.811531+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148258816 unmapped: 31965184 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 227 ms_handle_reset con 0x56102e452800 session 0x561032298f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030981000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d43c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:30.811698+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 227 ms_handle_reset con 0x561030981000 session 0x561030b0dc20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148299776 unmapped: 31924224 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 227 ms_handle_reset con 0x561031d43c00 session 0x561032270780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:31.811845+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148340736 unmapped: 31883264 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 227 ms_handle_reset con 0x56102cf8e000 session 0x56102fafa000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 227 ms_handle_reset con 0x56102dc96800 session 0x56102fdf83c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2094083 data_alloc: 184549376 data_used: 5787648
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 227 heartbeat osd_stat(store_statfs(0x1b4e1c000/0x0/0x1bfc00000, data 0x47ba484/0x4912000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 227 ms_handle_reset con 0x56102e38e800 session 0x56102eb954a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:32.811954+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e452800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148365312 unmapped: 31858688 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 227 ms_handle_reset con 0x56102e452800 session 0x56102e86f860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:33.812126+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148430848 unmapped: 31793152 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 228 ms_handle_reset con 0x56102cf8e000 session 0x56102e7e4f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:34.812523+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148439040 unmapped: 31784960 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:35.812858+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148471808 unmapped: 31752192 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 228 ms_handle_reset con 0x56102dc96800 session 0x5610310d2b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 228 ms_handle_reset con 0x56102e38e800 session 0x56102fdf85a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 228 heartbeat osd_stat(store_statfs(0x1b4e17000/0x0/0x1bfc00000, data 0x47bc6d0/0x4916000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561031d43c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:36.813058+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148488192 unmapped: 31735808 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.534854889s of 10.000902176s, submitted: 193
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330f6800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2109991 data_alloc: 184549376 data_used: 5804032
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 229 ms_handle_reset con 0x5610330f6800 session 0x56102fdf8780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:37.813283+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 230 ms_handle_reset con 0x561031d43c00 session 0x5610330ce3c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148529152 unmapped: 31694848 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 230 handle_osd_map epochs [230,230], i have 230, src has [1,230]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 230 ms_handle_reset con 0x56102cf8e000 session 0x561032270b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330f6800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 230 heartbeat osd_stat(store_statfs(0x1b4df3000/0x0/0x1bfc00000, data 0x47df5d3/0x493b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 230 ms_handle_reset con 0x5610330f6800 session 0x56102e86e5a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:38.813558+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148553728 unmapped: 31670272 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 230 ms_handle_reset con 0x56102dc96800 session 0x56102fa10f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b0c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 230 ms_handle_reset con 0x56102e38e800 session 0x56102fdf8b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 230 ms_handle_reset con 0x5610313b0c00 session 0x561030d7c5a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:39.813811+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148635648 unmapped: 31588352 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 230 heartbeat osd_stat(store_statfs(0x1b4df0000/0x0/0x1bfc00000, data 0x47e17d3/0x493d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 230 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 230 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 231 ms_handle_reset con 0x5610313b1400 session 0x56102fdf92c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 231 ms_handle_reset con 0x56102cf8e000 session 0x56102cd5be00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:40.813979+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148725760 unmapped: 31498240 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 231 ms_handle_reset con 0x56102dc96800 session 0x56102d92b4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:41.814195+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148807680 unmapped: 31416320 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2112587 data_alloc: 184549376 data_used: 5808128
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b4ded000/0x0/0x1bfc00000, data 0x47e3a27/0x4940000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 231 ms_handle_reset con 0x56102e38e800 session 0x56102eb6f4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:42.814502+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148832256 unmapped: 31391744 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:43.814683+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148955136 unmapped: 31268864 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:44.814867+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 148963328 unmapped: 31260672 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330f6800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 231 ms_handle_reset con 0x5610330f6800 session 0x56102f90fa40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 231 ms_handle_reset con 0x56102dc96800 session 0x56102fdf9c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b4dc7000/0x0/0x1bfc00000, data 0x4809aad/0x4967000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:45.814990+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 153600000 unmapped: 26624000 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 231 ms_handle_reset con 0x56102cf8e000 session 0x56102e86f0e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:46.815118+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 232 ms_handle_reset con 0x56102e38e800 session 0x56102eb49a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 149594112 unmapped: 30629888 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.194107056s of 10.000376701s, submitted: 180
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2243599 data_alloc: 184549376 data_used: 5824512
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 232 ms_handle_reset con 0x5610313b1400 session 0x56102d8a92c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:47.815288+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 151437312 unmapped: 28786688 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 233 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 233 ms_handle_reset con 0x5610313b1800 session 0x561030d7cd20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 233 ms_handle_reset con 0x56102cf8e000 session 0x56102fd47c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:48.815491+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 233 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 151527424 unmapped: 28696576 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 233 ms_handle_reset con 0x56102e38e800 session 0x56102f8ffa40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 233 ms_handle_reset con 0x56102dc96800 session 0x561032298780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 233 ms_handle_reset con 0x5610313b1400 session 0x56102fa0de00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1c00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 234 ms_handle_reset con 0x5610313b1c00 session 0x56102fafbe00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 234 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:49.815625+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 150945792 unmapped: 29278208 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 234 ms_handle_reset con 0x56102dc96800 session 0x56102fdf9860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 234 ms_handle_reset con 0x56102cf8e000 session 0x56102e7e4f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 235 ms_handle_reset con 0x56102e38e800 session 0x56102e86f860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:50.815834+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 235 ms_handle_reset con 0x5610313b1400 session 0x56102fdf83c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 151183360 unmapped: 29040640 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 235 heartbeat osd_stat(store_statfs(0x1b48f6000/0x0/0x1bfc00000, data 0x48d4e3a/0x4a36000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:51.816021+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 151207936 unmapped: 29016064 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 235 heartbeat osd_stat(store_statfs(0x1b48dd000/0x0/0x1bfc00000, data 0x48eab09/0x4a4e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2157694 data_alloc: 184549376 data_used: 5828608
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c5000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 235 ms_handle_reset con 0x5610310c5000 session 0x561032270780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:52.816189+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 151314432 unmapped: 28909568 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 235 ms_handle_reset con 0x56102dc96800 session 0x56102fdf81e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:53.816378+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 236 ms_handle_reset con 0x56102cf8e000 session 0x5610310d3a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 151453696 unmapped: 28770304 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 236 handle_osd_map epochs [236,236], i have 236, src has [1,236]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102e38e800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:54.816554+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c5000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 151453696 unmapped: 28770304 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 236 ms_handle_reset con 0x5610310c5000 session 0x56102eb95e00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 236 ms_handle_reset con 0x56102e38e800 session 0x561032298f00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 236 heartbeat osd_stat(store_statfs(0x1b4891000/0x0/0x1bfc00000, data 0x4936c7e/0x4a9c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:55.816815+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102fd86400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 151617536 unmapped: 28606464 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 237 ms_handle_reset con 0x56102fd86400 session 0x56102eb48b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 237 ms_handle_reset con 0x5610313b1400 session 0x561032271680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 237 ms_handle_reset con 0x56102cf8e000 session 0x5610310d3c20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 237 ms_handle_reset con 0x56102dc96800 session 0x561030d7c1e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:56.816973+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 238 handle_osd_map epochs [237,238], i have 238, src has [1,238]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152723456 unmapped: 27500544 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.347815514s of 10.004862785s, submitted: 181
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2179416 data_alloc: 184549376 data_used: 5844992
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:57.817161+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152723456 unmapped: 27500544 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 239 handle_osd_map epochs [239,239], i have 239, src has [1,239]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:58.817276+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152756224 unmapped: 27467776 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:13:59.817389+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152772608 unmapped: 27451392 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 240 handle_osd_map epochs [240,240], i have 240, src has [1,240]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b366d000/0x0/0x1bfc00000, data 0x49b5a74/0x4b1f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:00.817645+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b366d000/0x0/0x1bfc00000, data 0x49b5a74/0x4b1f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152887296 unmapped: 27336704 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c5000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 240 ms_handle_reset con 0x5610310c5000 session 0x56103140f2c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610330ca000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 240 ms_handle_reset con 0x5610330ca000 session 0x56102ebd63c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:01.817860+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 240 handle_osd_map epochs [240,241], i have 240, src has [1,241]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 241 handle_osd_map epochs [241,241], i have 241, src has [1,241]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152920064 unmapped: 27303936 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 241 ms_handle_reset con 0x56102cf8e000 session 0x5610330ce000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 241 ms_handle_reset con 0x56102dc96800 session 0x56102f8854a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c5000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2193492 data_alloc: 184549376 data_used: 5853184
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 241 handle_osd_map epochs [241,241], i have 241, src has [1,241]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 241 ms_handle_reset con 0x5610310c5000 session 0x56102fafb2c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:02.818031+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152944640 unmapped: 27279360 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 241 ms_handle_reset con 0x5610313b1400 session 0x5610322705a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610343bf800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:03.818191+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 241 ms_handle_reset con 0x5610343bf800 session 0x5610310741e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152952832 unmapped: 27271168 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:04.818366+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152985600 unmapped: 27238400 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 242 ms_handle_reset con 0x56102cf8e000 session 0x56102eb95680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 242 heartbeat osd_stat(store_statfs(0x1b363f000/0x0/0x1bfc00000, data 0x49e2055/0x4b4e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:05.818501+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152985600 unmapped: 27238400 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 243 ms_handle_reset con 0x56102dc96800 session 0x56102fa403c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c5000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 243 handle_osd_map epochs [243,244], i have 243, src has [1,244]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:06.818666+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 244 handle_osd_map epochs [244,244], i have 244, src has [1,244]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 153141248 unmapped: 27082752 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 244 ms_handle_reset con 0x5610310c5000 session 0x56102f8ff2c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.444165230s of 10.000581741s, submitted: 153
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2199840 data_alloc: 184549376 data_used: 5849088
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:07.818884+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152354816 unmapped: 27869184 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:08.819061+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152354816 unmapped: 27869184 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 244 heartbeat osd_stat(store_statfs(0x1b3617000/0x0/0x1bfc00000, data 0x4a09f8b/0x4b77000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:09.819676+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152371200 unmapped: 27852800 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:10.819844+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152207360 unmapped: 28016640 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 244 heartbeat osd_stat(store_statfs(0x1b35ea000/0x0/0x1bfc00000, data 0x4a3377e/0x4ba3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:11.820007+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 152207360 unmapped: 28016640 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2209814 data_alloc: 184549376 data_used: 5861376
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:12.820179+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 153321472 unmapped: 26902528 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:13.820341+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 153321472 unmapped: 26902528 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:14.820512+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 153321472 unmapped: 26902528 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b358a000/0x0/0x1bfc00000, data 0x4a906e0/0x4c02000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:15.820686+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 153321472 unmapped: 26902528 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:16.820840+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 153436160 unmapped: 26787840 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.762175560s of 10.001336098s, submitted: 75
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b356f000/0x0/0x1bfc00000, data 0x4aae0a6/0x4c1f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2222538 data_alloc: 184549376 data_used: 5861376
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:17.821104+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 153436160 unmapped: 26787840 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b3536000/0x0/0x1bfc00000, data 0x4ae5b17/0x4c57000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 245 ms_handle_reset con 0x5610313b1400 session 0x56102eb57680
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:18.821303+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 153526272 unmapped: 26697728 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:19.821488+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 153731072 unmapped: 26492928 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c0000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:20.821675+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 245 ms_handle_reset con 0x5610313c0000 session 0x56102eb56000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 154804224 unmapped: 25419776 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 245 ms_handle_reset con 0x56102cf8e000 session 0x56102fdf9e00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:21.821854+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 155459584 unmapped: 24764416 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2309713 data_alloc: 184549376 data_used: 5861376
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:22.822049+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 154886144 unmapped: 25337856 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 245 ms_handle_reset con 0x56102dc96800 session 0x56102fa103c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:23.822201+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 245 handle_osd_map epochs [246,246], i have 246, src has [1,246]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 157097984 unmapped: 23126016 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c5000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 246 heartbeat osd_stat(store_statfs(0x1b2a8b000/0x0/0x1bfc00000, data 0x5591c97/0x5703000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 246 ms_handle_reset con 0x5610313b1400 session 0x56102eba8960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:24.822532+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 157065216 unmapped: 23158784 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 246 ms_handle_reset con 0x5610310c5000 session 0x561030b0d860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 246 heartbeat osd_stat(store_statfs(0x1b0f19000/0x0/0x1bfc00000, data 0x5f5dc2b/0x60d2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 246 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 246 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 246 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 246 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:25.822750+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 157597696 unmapped: 22626304 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c0000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 247 ms_handle_reset con 0x5610313c0000 session 0x56102ebd7860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 247 heartbeat osd_stat(store_statfs(0x1b0e74000/0x0/0x1bfc00000, data 0x6003607/0x6179000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:26.822888+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.336826324s of 10.009739876s, submitted: 125
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 157720576 unmapped: 22503424 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2405205 data_alloc: 184549376 data_used: 5869568
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 247 heartbeat osd_stat(store_statfs(0x1b0e52000/0x0/0x1bfc00000, data 0x6025c32/0x619b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:27.823097+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 157818880 unmapped: 22405120 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 247 ms_handle_reset con 0x56102cf8e000 session 0x56102f90fc20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102dc96800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c5000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 247 ms_handle_reset con 0x5610310c5000 session 0x56102ebd6d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:28.823227+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 247 ms_handle_reset con 0x5610313b1400 session 0x561033040000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 159457280 unmapped: 20766720 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c1000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 248 ms_handle_reset con 0x5610313c1000 session 0x561032270780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 248 ms_handle_reset con 0x56102dc96800 session 0x56102ebd74a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 248 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:29.823398+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 159473664 unmapped: 20750336 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:30.823575+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 159678464 unmapped: 20545536 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 249 ms_handle_reset con 0x56102cf8e000 session 0x5610330403c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:31.823724+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 160161792 unmapped: 20062208 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2428273 data_alloc: 184549376 data_used: 5881856
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c5000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:32.823947+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 160178176 unmapped: 20045824 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 249 handle_osd_map epochs [249,250], i have 249, src has [1,250]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 250 heartbeat osd_stat(store_statfs(0x1b0d8b000/0x0/0x1bfc00000, data 0x60ea1ee/0x6261000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:33.824189+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 250 ms_handle_reset con 0x5610310c5000 session 0x56102fa0d2c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 160301056 unmapped: 19922944 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:34.824392+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 161570816 unmapped: 18653184 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 251 handle_osd_map epochs [251,251], i have 251, src has [1,251]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 251 ms_handle_reset con 0x5610313b1400 session 0x56102f90e3c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:35.824569+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 161579008 unmapped: 18644992 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 251 handle_osd_map epochs [251,252], i have 251, src has [1,252]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:36.824721+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 252 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.353116035s of 10.002175331s, submitted: 180
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 161587200 unmapped: 18636800 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 252 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c1000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2293577 data_alloc: 184549376 data_used: 5894144
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 252 ms_handle_reset con 0x5610313c1000 session 0x56102fa410e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:37.824928+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 161636352 unmapped: 18587648 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 252 heartbeat osd_stat(store_statfs(0x1b2132000/0x0/0x1bfc00000, data 0x4d40cf3/0x4ebc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:38.825123+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 161800192 unmapped: 18423808 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 252 ms_handle_reset con 0x56102f8ae400 session 0x56102fa10b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102f8ae400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 252 heartbeat osd_stat(store_statfs(0x1b2112000/0x0/0x1bfc00000, data 0x4d5fd68/0x4edc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:39.825305+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 161808384 unmapped: 18415616 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:40.825547+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 161914880 unmapped: 18309120 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:41.825744+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162004992 unmapped: 18219008 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2315625 data_alloc: 184549376 data_used: 5906432
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:42.825947+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162004992 unmapped: 18219008 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:43.826144+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 ms_handle_reset con 0x56102cf8e000 session 0x56102e86f860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c5000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 ms_handle_reset con 0x5610310c5000 session 0x56102fdf9860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 ms_handle_reset con 0x5610313b1400 session 0x56102fafbe00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 161824768 unmapped: 18399232 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c1000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 ms_handle_reset con 0x5610313c1000 session 0x56102fa0de00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313ba000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 ms_handle_reset con 0x5610313ba000 session 0x561032298780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 heartbeat osd_stat(store_statfs(0x1b1cd2000/0x0/0x1bfc00000, data 0x519a785/0x531c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:44.826306+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 161824768 unmapped: 18399232 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:45.826411+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 161832960 unmapped: 18391040 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:46.826617+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 ms_handle_reset con 0x56102cf8e000 session 0x56102f8ffa40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162054144 unmapped: 18169856 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c5000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313b1400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.813349724s of 10.222197533s, submitted: 109
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 heartbeat osd_stat(store_statfs(0x1b1ca4000/0x0/0x1bfc00000, data 0x51c4887/0x5349000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2362161 data_alloc: 184549376 data_used: 5906432
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:47.826915+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 heartbeat osd_stat(store_statfs(0x1b1ca4000/0x0/0x1bfc00000, data 0x51c4887/0x5349000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162078720 unmapped: 18145280 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:48.827192+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 heartbeat osd_stat(store_statfs(0x1b1ca4000/0x0/0x1bfc00000, data 0x51c482b/0x5348000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162086912 unmapped: 18137088 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:49.827385+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162086912 unmapped: 18137088 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 heartbeat osd_stat(store_statfs(0x1b1ca4000/0x0/0x1bfc00000, data 0x51c482b/0x5348000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:50.827732+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162168832 unmapped: 18055168 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:51.827961+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162168832 unmapped: 18055168 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2377431 data_alloc: 184549376 data_used: 7524352
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:52.828126+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 heartbeat osd_stat(store_statfs(0x1b1ca5000/0x0/0x1bfc00000, data 0x51c48a0/0x5349000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162168832 unmapped: 18055168 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:53.828490+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162168832 unmapped: 18055168 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:54.828705+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162177024 unmapped: 18046976 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:55.828914+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162177024 unmapped: 18046976 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:56.829101+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 heartbeat osd_stat(store_statfs(0x1b1ca4000/0x0/0x1bfc00000, data 0x51c485b/0x5349000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162177024 unmapped: 18046976 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 heartbeat osd_stat(store_statfs(0x1b1ca3000/0x0/0x1bfc00000, data 0x51c49d6/0x534b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2382771 data_alloc: 184549376 data_used: 7524352
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:57.829363+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 162177024 unmapped: 18046976 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.057118416s of 11.145279884s, submitted: 20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:58.829502+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 165494784 unmapped: 14729216 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:14:59.829627+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 167731200 unmapped: 12492800 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:00.830103+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 167854080 unmapped: 12369920 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:01.830323+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 167919616 unmapped: 12304384 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2485573 data_alloc: 184549376 data_used: 8245248
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:02.830520+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 167936000 unmapped: 12288000 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 254 heartbeat osd_stat(store_statfs(0x1b10e1000/0x0/0x1bfc00000, data 0x5d82b77/0x5f0a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:03.830665+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 168132608 unmapped: 12091392 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 254 heartbeat osd_stat(store_statfs(0x1b10e0000/0x0/0x1bfc00000, data 0x5d82c12/0x5f0b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:04.830871+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 167387136 unmapped: 12836864 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:05.831133+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 167460864 unmapped: 12763136 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:06.831333+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 167591936 unmapped: 12632064 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2491233 data_alloc: 184549376 data_used: 8265728
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:07.831569+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 255 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 167936000 unmapped: 12288000 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.292410851s of 10.012044907s, submitted: 241
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 255 heartbeat osd_stat(store_statfs(0x1b1068000/0x0/0x1bfc00000, data 0x5dfa6f3/0x5f84000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:08.831746+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 255 heartbeat osd_stat(store_statfs(0x1b1068000/0x0/0x1bfc00000, data 0x5dfa6f3/0x5f84000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 167968768 unmapped: 12255232 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:09.831919+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c1000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 168165376 unmapped: 12058624 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 256 ms_handle_reset con 0x5610313c1000 session 0x56102f90f4a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:10.832236+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 256 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 256 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313ba800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 257 ms_handle_reset con 0x5610313ba800 session 0x56102fa0cf00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 169394176 unmapped: 10829824 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:11.832444+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 258 handle_osd_map epochs [257,258], i have 258, src has [1,258]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 258 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 169926656 unmapped: 10297344 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2523227 data_alloc: 184549376 data_used: 8278016
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:12.832757+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 170254336 unmapped: 9969664 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:13.833029+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 170254336 unmapped: 9969664 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610343c2800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 259 ms_handle_reset con 0x5610343c2800 session 0x56102fa0da40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 259 heartbeat osd_stat(store_statfs(0x1b0fa8000/0x0/0x1bfc00000, data 0x5eb8aea/0x6045000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [0,0,1])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:14.833255+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 171450368 unmapped: 8773632 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610343c2000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 260 ms_handle_reset con 0x5610343c2000 session 0x56102d8a83c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:15.833421+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 260 handle_osd_map epochs [260,260], i have 260, src has [1,260]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 171696128 unmapped: 8527872 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:16.833754+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 261 ms_handle_reset con 0x56102cf8e000 session 0x561033040d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 171728896 unmapped: 8495104 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2540009 data_alloc: 184549376 data_used: 8286208
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 261 handle_osd_map epochs [261,261], i have 261, src has [1,261]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:17.833978+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 261 heartbeat osd_stat(store_statfs(0x1b0f34000/0x0/0x1bfc00000, data 0x5f2877f/0x60b8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 171827200 unmapped: 8396800 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:18.835014+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172072960 unmapped: 8151040 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313ba800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.521577835s of 10.952056885s, submitted: 118
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 262 handle_osd_map epochs [262,262], i have 262, src has [1,262]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 262 ms_handle_reset con 0x5610313ba800 session 0x56102faf63c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:19.835222+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 263 heartbeat osd_stat(store_statfs(0x1b0ed0000/0x0/0x1bfc00000, data 0x5f8a6fb/0x611d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172228608 unmapped: 7995392 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:20.835385+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172244992 unmapped: 7979008 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:21.835585+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c1000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 265 heartbeat osd_stat(store_statfs(0x1b0eb8000/0x0/0x1bfc00000, data 0x5fa06c6/0x6134000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,1])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173490176 unmapped: 6733824 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2559591 data_alloc: 184549376 data_used: 8310784
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:22.835779+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 266 heartbeat osd_stat(store_statfs(0x1b0e56000/0x0/0x1bfc00000, data 0x60016a8/0x6197000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 266 ms_handle_reset con 0x5610313c1000 session 0x561033041e00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173506560 unmapped: 6717440 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:23.835935+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173506560 unmapped: 6717440 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:24.836089+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172662784 unmapped: 7561216 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 266 ms_handle_reset con 0x5610310c5000 session 0x56102f8845a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 266 ms_handle_reset con 0x5610313b1400 session 0x5610322703c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:25.836246+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 171687936 unmapped: 8536064 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 266 ms_handle_reset con 0x56102cf8e000 session 0x56102ebb5a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:26.836411+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 267 handle_osd_map epochs [267,267], i have 267, src has [1,267]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172843008 unmapped: 7380992 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2436094 data_alloc: 184549376 data_used: 5980160
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:27.836708+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173342720 unmapped: 6881280 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:28.836995+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 268 heartbeat osd_stat(store_statfs(0x1b1928000/0x0/0x1bfc00000, data 0x512b3a9/0x52c4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173350912 unmapped: 6873088 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.245109558s of 10.063236237s, submitted: 215
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:29.837161+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173572096 unmapped: 6651904 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:30.837600+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172531712 unmapped: 7692288 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:31.837773+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173670400 unmapped: 6553600 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2449852 data_alloc: 184549376 data_used: 5996544
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:32.837914+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 269 heartbeat osd_stat(store_statfs(0x1b187b000/0x0/0x1bfc00000, data 0x51d88d3/0x5372000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 269 handle_osd_map epochs [270,270], i have 270, src has [1,270]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 269 handle_osd_map epochs [270,270], i have 270, src has [1,270]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 269 handle_osd_map epochs [270,270], i have 270, src has [1,270]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 269 handle_osd_map epochs [270,270], i have 270, src has [1,270]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 269 handle_osd_map epochs [270,270], i have 270, src has [1,270]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173760512 unmapped: 6463488 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:33.838107+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c5000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 271 ms_handle_reset con 0x5610310c5000 session 0x56102eb6e960
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173441024 unmapped: 6782976 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:34.838366+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173735936 unmapped: 6488064 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 271 heartbeat osd_stat(store_statfs(0x1b182c000/0x0/0x1bfc00000, data 0x5226106/0x53c1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:35.838572+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173752320 unmapped: 6471680 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:36.838716+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172367872 unmapped: 7856128 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 272 handle_osd_map epochs [272,272], i have 272, src has [1,272]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2461274 data_alloc: 184549376 data_used: 6017024
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:37.838966+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172367872 unmapped: 7856128 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:38.839239+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172367872 unmapped: 7856128 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.819927216s of 10.257976532s, submitted: 147
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:39.839443+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172515328 unmapped: 7708672 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 272 heartbeat osd_stat(store_statfs(0x1b17b9000/0x0/0x1bfc00000, data 0x529b508/0x5435000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:40.839592+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172515328 unmapped: 7708672 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:41.839748+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313ba800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 273 handle_osd_map epochs [273,273], i have 273, src has [1,273]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 273 ms_handle_reset con 0x5610313ba800 session 0x56102fd470e0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172515328 unmapped: 7708672 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2468104 data_alloc: 184549376 data_used: 6029312
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:42.839895+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 273 heartbeat osd_stat(store_statfs(0x1b178a000/0x0/0x1bfc00000, data 0x52c84d6/0x5463000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610313c1000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 274 ms_handle_reset con 0x5610313c1000 session 0x56102fd47860
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172613632 unmapped: 7610368 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:43.840178+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 274 heartbeat osd_stat(store_statfs(0x1b1787000/0x0/0x1bfc00000, data 0x52ca6ed/0x5466000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610343c2800
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 274 ms_handle_reset con 0x5610343c2800 session 0x56102fd474a0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x56102cf8e000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172687360 unmapped: 7536640 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:44.840414+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 51
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 275 ms_handle_reset con 0x56102cf8e000 session 0x56102eba9a40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 171761664 unmapped: 8462336 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:45.840580+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 171769856 unmapped: 8454144 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 275 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610310c5000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 275 handle_osd_map epochs [275,275], i have 275, src has [1,275]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 275 ms_handle_reset con 0x5610310c5000 session 0x561031c3a780
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:46.840738+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 171769856 unmapped: 8454144 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2484634 data_alloc: 184549376 data_used: 6029312
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:47.840935+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 171769856 unmapped: 8454144 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:48.841127+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 275 heartbeat osd_stat(store_statfs(0x1b170f000/0x0/0x1bfc00000, data 0x53404e3/0x54df000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172843008 unmapped: 7380992 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:49.841316+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172843008 unmapped: 7380992 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:50.841544+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 275 heartbeat osd_stat(store_statfs(0x1b1711000/0x0/0x1bfc00000, data 0x53402d8/0x54dd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172843008 unmapped: 7380992 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.410889626s of 11.751901627s, submitted: 91
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:51.841667+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172523520 unmapped: 7700480 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2491874 data_alloc: 184549376 data_used: 6041600
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:52.841800+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 276 heartbeat osd_stat(store_statfs(0x1b16b2000/0x0/0x1bfc00000, data 0x539e0c3/0x553b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173572096 unmapped: 6651904 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:53.841951+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173572096 unmapped: 6651904 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:54.842364+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173572096 unmapped: 6651904 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:55.842523+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172548096 unmapped: 7675904 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:56.842711+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172548096 unmapped: 7675904 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2496762 data_alloc: 184549376 data_used: 6041600
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:57.842940+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 276 heartbeat osd_stat(store_statfs(0x1b166b000/0x0/0x1bfc00000, data 0x53e40b6/0x5583000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172548096 unmapped: 7675904 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:58.843112+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172662784 unmapped: 7561216 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:15:59.843272+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172670976 unmapped: 7553024 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:00.843525+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.767909050s of 10.019207001s, submitted: 62
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172687360 unmapped: 7536640 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:01.843696+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172572672 unmapped: 7651328 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:02.843854+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 276 handle_osd_map epochs [276,277], i have 276, src has [1,277]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2509088 data_alloc: 184549376 data_used: 6049792
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172646400 unmapped: 7577600 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:03.844038+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 277 heartbeat osd_stat(store_statfs(0x1b15d0000/0x0/0x1bfc00000, data 0x547e25d/0x561d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172785664 unmapped: 7438336 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:04.844213+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172785664 unmapped: 7438336 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:05.844539+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 172785664 unmapped: 7438336 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:06.844670+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 277 heartbeat osd_stat(store_statfs(0x1b15d0000/0x0/0x1bfc00000, data 0x547e25d/0x561d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 173137920 unmapped: 7086080 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:07.844850+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2510572 data_alloc: 184549376 data_used: 6049792
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 174194688 unmapped: 6029312 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:08.845002+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 174194688 unmapped: 6029312 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:09.845145+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 277 heartbeat osd_stat(store_statfs(0x1b1548000/0x0/0x1bfc00000, data 0x5505e19/0x56a6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175505408 unmapped: 4718592 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:10.845298+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175521792 unmapped: 4702208 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:11.845542+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.286061287s of 10.561756134s, submitted: 63
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 278 handle_osd_map epochs [278,278], i have 278, src has [1,278]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1545000/0x0/0x1bfc00000, data 0x55091f0/0x56a9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175529984 unmapped: 4694016 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:12.845712+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2519746 data_alloc: 184549376 data_used: 6062080
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175439872 unmapped: 4784128 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:13.845854+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175439872 unmapped: 4784128 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:14.846031+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175439872 unmapped: 4784128 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:15.846184+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b150e000/0x0/0x1bfc00000, data 0x553ed80/0x56e0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175972352 unmapped: 4251648 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:16.846439+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175980544 unmapped: 4243456 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:17.846698+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2525650 data_alloc: 184549376 data_used: 6062080
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175980544 unmapped: 4243456 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:18.846830+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175980544 unmapped: 4243456 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b14ae000/0x0/0x1bfc00000, data 0x559fb75/0x5740000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:19.846977+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175980544 unmapped: 4243456 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:20.847154+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175980544 unmapped: 4243456 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:21.847329+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.881276131s of 10.009908676s, submitted: 39
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175996928 unmapped: 4227072 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:22.847509+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2527088 data_alloc: 184549376 data_used: 6062080
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 175996928 unmapped: 4227072 heap: 180224000 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:23.847716+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176103424 unmapped: 5169152 heap: 181272576 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:24.847909+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176103424 unmapped: 5169152 heap: 181272576 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b143b000/0x0/0x1bfc00000, data 0x561355a/0x57b3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:25.848210+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b143b000/0x0/0x1bfc00000, data 0x561355a/0x57b3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176111616 unmapped: 5160960 heap: 181272576 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:26.848405+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b143b000/0x0/0x1bfc00000, data 0x561355a/0x57b3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176111616 unmapped: 5160960 heap: 181272576 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:27.848636+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2536878 data_alloc: 184549376 data_used: 6062080
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176119808 unmapped: 5152768 heap: 181272576 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:28.848811+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176119808 unmapped: 5152768 heap: 181272576 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:29.848987+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b13ed000/0x0/0x1bfc00000, data 0x56600d9/0x5801000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 177152000 unmapped: 4120576 heap: 181272576 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:30.849177+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b13c6000/0x0/0x1bfc00000, data 0x56878f1/0x5827000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [0,0,0,1])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176316416 unmapped: 4956160 heap: 181272576 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b138e000/0x0/0x1bfc00000, data 0x56c0cba/0x5860000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:31.849348+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.706105232s of 10.003957748s, submitted: 55
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176447488 unmapped: 4825088 heap: 181272576 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:32.849505+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2545858 data_alloc: 184549376 data_used: 6070272
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176553984 unmapped: 5767168 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:33.849703+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 279 heartbeat osd_stat(store_statfs(0x1b1312000/0x0/0x1bfc00000, data 0x5739a0c/0x58db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176594944 unmapped: 5726208 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:34.849871+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176594944 unmapped: 5726208 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 279 heartbeat osd_stat(store_statfs(0x1b1312000/0x0/0x1bfc00000, data 0x5739a0c/0x58db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:35.850048+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 279 heartbeat osd_stat(store_statfs(0x1b1312000/0x0/0x1bfc00000, data 0x5739a0c/0x58db000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176594944 unmapped: 5726208 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:36.850222+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610343f0400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176816128 unmapped: 5505024 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:37.850400+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2551450 data_alloc: 184549376 data_used: 6070272
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 279 heartbeat osd_stat(store_statfs(0x1b12df000/0x0/0x1bfc00000, data 0x576d4b2/0x590f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176816128 unmapped: 5505024 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:38.850555+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176816128 unmapped: 5505024 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:39.850766+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176816128 unmapped: 5505024 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:40.850961+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 9000.1 total, 600.0 interval
                                                          Cumulative writes: 20K writes, 81K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.01 MB/s
                                                          Cumulative WAL: 20K writes, 7249 syncs, 2.85 writes per sync, written: 0.07 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 44.32 MB, 0.07 MB/s
                                                          Interval WAL: 12K writes, 5057 syncs, 2.42 writes per sync, written: 0.04 GB, 0.07 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176824320 unmapped: 5496832 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:41.851065+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 279 handle_osd_map epochs [280,280], i have 280, src has [1,280]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.680538177s of 10.003588676s, submitted: 95
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176693248 unmapped: 5627904 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:42.851263+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2572464 data_alloc: 184549376 data_used: 6082560
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 280 heartbeat osd_stat(store_statfs(0x1b123e000/0x0/0x1bfc00000, data 0x580918b/0x59af000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176947200 unmapped: 5373952 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:43.851548+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176586752 unmapped: 5734400 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:44.851945+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176717824 unmapped: 5603328 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:45.852121+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176717824 unmapped: 5603328 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:46.852315+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176717824 unmapped: 5603328 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:47.852519+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2573450 data_alloc: 184549376 data_used: 6082560
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176717824 unmapped: 5603328 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:48.852720+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 280 heartbeat osd_stat(store_statfs(0x1b11c6000/0x0/0x1bfc00000, data 0x5882d4b/0x5a28000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 280 heartbeat osd_stat(store_statfs(0x1b11bc000/0x0/0x1bfc00000, data 0x588cdcc/0x5a32000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176717824 unmapped: 5603328 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:49.852888+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 280 heartbeat osd_stat(store_statfs(0x1b11bc000/0x0/0x1bfc00000, data 0x588cdcc/0x5a32000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176717824 unmapped: 5603328 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:50.853078+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176603136 unmapped: 5718016 heap: 182321152 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:51.853280+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.668525696s of 10.001476288s, submitted: 79
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176840704 unmapped: 6529024 heap: 183369728 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:52.853477+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 280 heartbeat osd_stat(store_statfs(0x1b113e000/0x0/0x1bfc00000, data 0x590a7fa/0x5ab0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2588846 data_alloc: 184549376 data_used: 6082560
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176840704 unmapped: 6529024 heap: 183369728 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:53.854394+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176840704 unmapped: 6529024 heap: 183369728 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:54.855441+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 281 heartbeat osd_stat(store_statfs(0x1b1113000/0x0/0x1bfc00000, data 0x593215f/0x5ada000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 176906240 unmapped: 6463488 heap: 183369728 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:55.855817+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 177963008 unmapped: 5406720 heap: 183369728 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:56.856802+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 177963008 unmapped: 5406720 heap: 183369728 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:57.857048+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2589860 data_alloc: 184549376 data_used: 6103040
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 282 handle_osd_map epochs [282,282], i have 282, src has [1,282]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 282 heartbeat osd_stat(store_statfs(0x1b109d000/0x0/0x1bfc00000, data 0x59a8be0/0x5b50000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178167808 unmapped: 5201920 heap: 183369728 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:58.857211+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178167808 unmapped: 5201920 heap: 183369728 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:16:59.857528+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178167808 unmapped: 5201920 heap: 183369728 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:00.857670+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:01.857860+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178167808 unmapped: 5201920 heap: 183369728 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.713974953s of 10.002536774s, submitted: 125
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:02.858156+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178446336 unmapped: 5971968 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2605640 data_alloc: 184549376 data_used: 6103040
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:03.858309+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178618368 unmapped: 5799936 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 283 heartbeat osd_stat(store_statfs(0x1b1025000/0x0/0x1bfc00000, data 0x5a1e40b/0x5bc8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 283 heartbeat osd_stat(store_statfs(0x1b1022000/0x0/0x1bfc00000, data 0x5a219bb/0x5bcb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:04.858613+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178880512 unmapped: 5537792 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:05.858805+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178388992 unmapped: 6029312 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:06.859010+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178388992 unmapped: 6029312 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:07.859226+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178487296 unmapped: 5931008 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2608068 data_alloc: 184549376 data_used: 6115328
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:08.859586+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178487296 unmapped: 5931008 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:09.859798+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178487296 unmapped: 5931008 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b1003000/0x0/0x1bfc00000, data 0x5a40588/0x5bea000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:10.860005+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178487296 unmapped: 5931008 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:11.860368+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178487296 unmapped: 5931008 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.832897186s of 10.003415108s, submitted: 36
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:12.860609+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178626560 unmapped: 5791744 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2616370 data_alloc: 184549376 data_used: 6115328
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:13.860794+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178798592 unmapped: 5619712 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:14.861002+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178798592 unmapped: 5619712 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0fa5000/0x0/0x1bfc00000, data 0x5a9f932/0x5c49000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:15.861183+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178798592 unmapped: 5619712 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:16.861363+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178798592 unmapped: 5619712 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0fa5000/0x0/0x1bfc00000, data 0x5a9f932/0x5c49000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:17.861584+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179109888 unmapped: 5308416 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2613374 data_alloc: 184549376 data_used: 6115328
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:18.861888+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179109888 unmapped: 5308416 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:19.862123+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179109888 unmapped: 5308416 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:20.862334+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179109888 unmapped: 5308416 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0f70000/0x0/0x1bfc00000, data 0x5ad42d3/0x5c7e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:21.862583+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179109888 unmapped: 5308416 heap: 184418304 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.911779404s of 10.002739906s, submitted: 16
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0f2c000/0x0/0x1bfc00000, data 0x5b18524/0x5cc2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:22.862790+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178946048 unmapped: 6520832 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2618174 data_alloc: 184549376 data_used: 6115328
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:23.862997+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178946048 unmapped: 6520832 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0f2c000/0x0/0x1bfc00000, data 0x5b18524/0x5cc2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:24.863249+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 178946048 unmapped: 6520832 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:25.863440+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179150848 unmapped: 6316032 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:26.863668+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179150848 unmapped: 6316032 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:27.863881+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0f08000/0x0/0x1bfc00000, data 0x5b3c0c9/0x5ce6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179175424 unmapped: 6291456 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2624606 data_alloc: 184549376 data_used: 6115328
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0f08000/0x0/0x1bfc00000, data 0x5b3c0c9/0x5ce6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:28.864084+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179126272 unmapped: 6340608 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0eea000/0x0/0x1bfc00000, data 0x5b58828/0x5d04000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:29.864267+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179126272 unmapped: 6340608 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:30.864443+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179126272 unmapped: 6340608 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0ee2000/0x0/0x1bfc00000, data 0x5b60507/0x5d0c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:31.864628+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179273728 unmapped: 6193152 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.873370171s of 10.001022339s, submitted: 22
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:32.864768+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179412992 unmapped: 6053888 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2632702 data_alloc: 184549376 data_used: 6115328
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:33.864876+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179412992 unmapped: 6053888 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0e69000/0x0/0x1bfc00000, data 0x5bd9785/0x5d85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:34.865035+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179625984 unmapped: 5840896 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0e6b000/0x0/0x1bfc00000, data 0x5bd964f/0x5d83000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:35.865163+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179625984 unmapped: 5840896 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:36.865300+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179716096 unmapped: 5750784 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:37.865525+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 52
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179716096 unmapped: 5750784 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2631458 data_alloc: 184549376 data_used: 6115328
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:38.865734+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179716096 unmapped: 5750784 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:39.865922+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179716096 unmapped: 5750784 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0e3f000/0x0/0x1bfc00000, data 0x5c04b38/0x5daf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:40.866121+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179830784 unmapped: 5636096 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:41.866329+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179830784 unmapped: 5636096 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0df8000/0x0/0x1bfc00000, data 0x5c4bc3f/0x5df6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:42.866544+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 180994048 unmapped: 4472832 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2635938 data_alloc: 184549376 data_used: 6115328
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1b0df8000/0x0/0x1bfc00000, data 0x5c4bc3f/0x5df6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.495679855s of 11.622138977s, submitted: 24
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:43.866660+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 180846592 unmapped: 4620288 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:44.866818+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 180871168 unmapped: 4595712 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:45.866977+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 180871168 unmapped: 4595712 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:46.867128+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 180977664 unmapped: 4489216 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:47.867327+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 180985856 unmapped: 4481024 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2640244 data_alloc: 184549376 data_used: 6115328
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:48.867544+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 180985856 unmapped: 4481024 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1afbfd000/0x0/0x1bfc00000, data 0x5ca72db/0x5e51000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:49.867709+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 180985856 unmapped: 4481024 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:50.867894+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 181354496 unmapped: 4112384 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:51.868022+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 181338112 unmapped: 4128768 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:52.868160+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 181518336 unmapped: 3948544 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2647812 data_alloc: 184549376 data_used: 6115328
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:53.868299+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 181518336 unmapped: 3948544 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1afb82000/0x0/0x1bfc00000, data 0x5d22ec7/0x5ecc000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:54.868541+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1afb82000/0x0/0x1bfc00000, data 0x5d22ec7/0x5ecc000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 181526528 unmapped: 3940352 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:55.868761+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 181526528 unmapped: 3940352 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1afb82000/0x0/0x1bfc00000, data 0x5d22ec7/0x5ecc000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:56.869082+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.191453934s of 13.348818779s, submitted: 31
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179617792 unmapped: 5849088 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:57.869404+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179617792 unmapped: 5849088 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2645536 data_alloc: 184549376 data_used: 6115328
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1afb64000/0x0/0x1bfc00000, data 0x5d40cb0/0x5eea000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:58.870119+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179617792 unmapped: 5849088 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:17:59.870723+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179617792 unmapped: 5849088 heap: 185466880 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:00.871098+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 179748864 unmapped: 6766592 heap: 186515456 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:01.871386+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1afb38000/0x0/0x1bfc00000, data 0x5d6c97d/0x5f16000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 181862400 unmapped: 5701632 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:02.871767+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 181862400 unmapped: 5701632 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2645986 data_alloc: 184549376 data_used: 6115328
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:03.872009+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182009856 unmapped: 5554176 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:04.872351+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182009856 unmapped: 5554176 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1ae96a000/0x0/0x1bfc00000, data 0x5d9c0e8/0x5f44000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:05.872717+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 heartbeat osd_stat(store_statfs(0x1ae96a000/0x0/0x1bfc00000, data 0x5d9c0e8/0x5f44000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182009856 unmapped: 5554176 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:06.873041+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 ms_handle_reset con 0x5610343f0400 session 0x56102e7e52c0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.866339684s of 10.001930237s, submitted: 128
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182575104 unmapped: 4988928 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:07.873433+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 53
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182591488 unmapped: 4972544 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2653480 data_alloc: 184549376 data_used: 6123520
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:08.873804+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182599680 unmapped: 4964352 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:09.873946+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 285 heartbeat osd_stat(store_statfs(0x1ae934000/0x0/0x1bfc00000, data 0x5dd0027/0x5f79000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182599680 unmapped: 4964352 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:10.874286+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182599680 unmapped: 4964352 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:11.874640+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182599680 unmapped: 4964352 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:12.875025+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182599680 unmapped: 4964352 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2654016 data_alloc: 184549376 data_used: 6123520
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:13.875246+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182599680 unmapped: 4964352 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:14.875540+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182607872 unmapped: 4956160 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:15.876168+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 285 heartbeat osd_stat(store_statfs(0x1ae92e000/0x0/0x1bfc00000, data 0x5dd676a/0x5f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182607872 unmapped: 4956160 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:16.876397+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182616064 unmapped: 4947968 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _renew_subs
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _send_mon_message to mon.np0005541912 at v2:172.18.0.103:3300/0
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.199963570s of 10.292005539s, submitted: 223
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92e000/0x0/0x1bfc00000, data 0x5dd676a/0x5f80000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:17.876676+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182616064 unmapped: 4947968 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2657470 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:18.876918+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182616064 unmapped: 4947968 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:19.877163+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182616064 unmapped: 4947968 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:20.877337+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182616064 unmapped: 4947968 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:21.877527+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182616064 unmapped: 4947968 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:22.877680+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182616064 unmapped: 4947968 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2657470 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:23.877873+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182616064 unmapped: 4947968 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:24.878087+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182632448 unmapped: 4931584 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:25.878248+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182632448 unmapped: 4931584 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:26.878534+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182632448 unmapped: 4931584 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:27.878825+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182632448 unmapped: 4931584 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2657470 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:28.879096+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182632448 unmapped: 4931584 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:29.879222+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182632448 unmapped: 4931584 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:30.879338+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 ms_handle_reset con 0x561030b67400 session 0x56102fd46d20
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x5610343f2000
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 ms_handle_reset con 0x5610313bb000 session 0x56102eb72b40
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: handle_auth_request added challenge on 0x561030b67400
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182632448 unmapped: 4931584 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:31.879537+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182632448 unmapped: 4931584 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:32.879707+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182640640 unmapped: 4923392 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2657470 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:33.879828+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182640640 unmapped: 4923392 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:34.880010+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182640640 unmapped: 4923392 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:35.880213+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182640640 unmapped: 4923392 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:36.880357+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182640640 unmapped: 4923392 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:37.880558+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182640640 unmapped: 4923392 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2657470 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:38.880729+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182640640 unmapped: 4923392 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:39.880915+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182640640 unmapped: 4923392 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:40.881107+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182665216 unmapped: 4898816 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:41.881263+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182665216 unmapped: 4898816 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:42.881433+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182665216 unmapped: 4898816 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2657470 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:43.881625+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:44.881781+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:45.881950+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:46.882102+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:47.891012+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2657470 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:48.891214+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:49.891373+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:50.891509+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:51.891692+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:52.891855+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2657470 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:53.892040+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:54.892204+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:55.892360+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:56.892688+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:57.892958+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2657470 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:58.893158+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:18:59.893349+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:00.893540+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182673408 unmapped: 4890624 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:01.893779+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182681600 unmapped: 4882432 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:02.894244+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182681600 unmapped: 4882432 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2657470 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:03.895353+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182681600 unmapped: 4882432 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:04.895545+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182681600 unmapped: 4882432 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:05.895809+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182681600 unmapped: 4882432 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:06.896605+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92a000/0x0/0x1bfc00000, data 0x5dd8880/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.599216461s of 49.630859375s, submitted: 19
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 ms_handle_reset con 0x56102e38e400 session 0x561030d7cf00
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182960128 unmapped: 4603904 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:07.896900+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Got map version 54
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/4212177170,v1:172.18.0.108:6811/4212177170]
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182968320 unmapped: 4595712 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:08.897543+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182968320 unmapped: 4595712 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:09.897794+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182968320 unmapped: 4595712 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:10.898142+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182968320 unmapped: 4595712 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:11.898530+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182968320 unmapped: 4595712 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:12.898865+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182968320 unmapped: 4595712 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:13.899040+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182968320 unmapped: 4595712 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:14.899430+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182968320 unmapped: 4595712 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:15.899717+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182968320 unmapped: 4595712 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:16.900029+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182968320 unmapped: 4595712 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:17.900406+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182968320 unmapped: 4595712 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:18.900673+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182968320 unmapped: 4595712 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:19.900979+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182984704 unmapped: 4579328 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:20.901228+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182984704 unmapped: 4579328 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:21.901434+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182984704 unmapped: 4579328 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:22.901695+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182984704 unmapped: 4579328 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:23.901912+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182984704 unmapped: 4579328 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:24.902182+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182984704 unmapped: 4579328 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:25.902375+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182984704 unmapped: 4579328 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:26.902616+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182984704 unmapped: 4579328 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:27.902902+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183001088 unmapped: 4562944 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:28.903112+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183001088 unmapped: 4562944 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:29.903327+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183001088 unmapped: 4562944 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:30.903474+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183001088 unmapped: 4562944 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:31.903698+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183001088 unmapped: 4562944 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:32.904085+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183009280 unmapped: 4554752 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:33.904236+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183017472 unmapped: 4546560 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:34.904375+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183017472 unmapped: 4546560 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:35.904541+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183025664 unmapped: 4538368 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:36.904693+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183025664 unmapped: 4538368 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:37.904879+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183025664 unmapped: 4538368 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:38.905087+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183025664 unmapped: 4538368 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:39.905245+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183025664 unmapped: 4538368 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:40.905435+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183033856 unmapped: 4530176 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:41.905649+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183033856 unmapped: 4530176 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:42.905791+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183033856 unmapped: 4530176 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:43.905949+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183033856 unmapped: 4530176 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:44.906147+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183033856 unmapped: 4530176 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:45.906319+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183033856 unmapped: 4530176 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:46.906504+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183033856 unmapped: 4530176 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:47.906662+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183033856 unmapped: 4530176 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:48.907004+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183042048 unmapped: 4521984 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:49.907172+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183042048 unmapped: 4521984 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:50.907370+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183042048 unmapped: 4521984 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:51.907558+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183050240 unmapped: 4513792 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:52.907730+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183050240 unmapped: 4513792 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:53.907903+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183050240 unmapped: 4513792 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:54.908099+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183050240 unmapped: 4513792 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:55.908259+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183050240 unmapped: 4513792 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:56.908554+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183050240 unmapped: 4513792 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:57.908751+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183050240 unmapped: 4513792 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:58.908904+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183050240 unmapped: 4513792 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:19:59.909069+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183058432 unmapped: 4505600 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:00.909227+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183058432 unmapped: 4505600 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:01.909411+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183058432 unmapped: 4505600 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:02.909619+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:03.909816+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183058432 unmapped: 4505600 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:04.909972+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183058432 unmapped: 4505600 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:05.910160+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183058432 unmapped: 4505600 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:06.910337+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183058432 unmapped: 4505600 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183058432 unmapped: 4505600 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:07.949406+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:08.951081+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183066624 unmapped: 4497408 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:09.951572+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183066624 unmapped: 4497408 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:10.951881+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183066624 unmapped: 4497408 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:11.952495+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183066624 unmapped: 4497408 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:12.952879+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183066624 unmapped: 4497408 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:13.953135+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183066624 unmapped: 4497408 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:14.953306+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183074816 unmapped: 4489216 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:15.953472+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183074816 unmapped: 4489216 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:16.954052+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:17.954544+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:18.955048+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:19.955402+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:20.955631+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:21.955774+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:22.955987+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:23.956269+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:24.956421+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:25.956608+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:26.956760+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:27.956939+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:28.957076+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:29.957241+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:30.957438+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:31.957628+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:32.957775+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183099392 unmapped: 4464640 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:33.957939+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183099392 unmapped: 4464640 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:34.958082+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183099392 unmapped: 4464640 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:35.958234+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 183091200 unmapped: 4472832 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: osd.1 286 heartbeat osd_stat(store_statfs(0x1ae92b000/0x0/0x1bfc00000, data 0x5dd8a93/0x5f83000, compress 0x0/0x0/0x0, omap 0x649, meta 0xb34f9b7), peers [0,2,3,4,5] op hist [])
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: do_command 'config diff' '{prefix=config diff}'
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:36.958414+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: do_command 'config show' '{prefix=config show}'
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: do_command 'counter dump' '{prefix=counter dump}'
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: do_command 'counter schema' '{prefix=counter schema}'
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182812672 unmapped: 4751360 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:37.958681+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: prioritycache tune_memory target: 3561601228 mapped: 182919168 unmapped: 4644864 heap: 187564032 old mem: 2222054675 new mem: 2222054675
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: tick
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_tickets
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-02T10:20:38.958900+0000)
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.307692
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0727273
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: bluestore.MempoolThread(0x56102c027b60) _resize_shards cache_size: 2222054675 kv_alloc: 922746880 kv_used: 2144 kv_onode_alloc: 218103808 kv_onode_used: 464 meta_alloc: 855638016 meta_used: 2656990 data_alloc: 184549376 data_used: 6135808
Dec 02 10:21:09 np0005541914.localdomain ceph-osd[31770]: do_command 'log dump' '{prefix=log dump}'
Dec 02 10:21:09 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v829: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/144381832' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 02 10:21:09 np0005541914.localdomain rsyslogd[759]: imjournal from <localhost:ceph-osd>: begin to drop messages due to rate-limiting
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/538524888' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 02 10:21:09 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1189769955' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1679132258' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.49611 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2776594278' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/963341960' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2033131510' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1367499796' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2067912337' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2812853472' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/144381832' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/538524888' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/846716803' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3628003697' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1240671825' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3134299501' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3941513563' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1189769955' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3646386249' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/550533180' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1679132258' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2146630095' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:10.468 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1366845384' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 02 10:21:10 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2620739229' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 02 10:21:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.
Dec 02 10:21:10 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.
Dec 02 10:21:10 np0005541914.localdomain podman[332519]: 2025-12-02 10:21:10.971400956 +0000 UTC m=+0.075349290 container health_status 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:21:11 np0005541914.localdomain podman[332519]: 2025-12-02 10:21:11.004975295 +0000 UTC m=+0.108923629 container exec_died 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec 02 10:21:11 np0005541914.localdomain systemd[1]: 3ca0d6f92f65cfd4452a076c91f6a9aa51d0debd6ef0d7c4d15cb2a2da401ec6.service: Deactivated successfully.
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2548576217' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain podman[332520]: 2025-12-02 10:21:11.013783175 +0000 UTC m=+0.112862599 container health_status bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Dec 02 10:21:11 np0005541914.localdomain podman[332520]: 2025-12-02 10:21:11.098869141 +0000 UTC m=+0.197948525 container exec_died bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7, name=openstack_network_exporter, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350)
Dec 02 10:21:11 np0005541914.localdomain systemd[1]: bf35e5f88401acc4ccc73195ceda7516dba7f9ef72f3bad1c3a533b41916f5be.service: Deactivated successfully.
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2247838553' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: pgmap v829: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/434608318' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/4228538185' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2146630095' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1895407980' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/66508782' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3393577515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1366845384' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/607218307' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/4154530576' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2620739229' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1999813796' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/4143362805' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2548576217' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1464133244' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2247838553' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2173933401' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1015905978' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69893 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v830: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69899 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59494 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69911 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59500 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:11 np0005541914.localdomain systemd[1]: Starting Hostname Service...
Dec 02 10:21:11 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69917 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59509 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:12 np0005541914.localdomain systemd[1]: Started Hostname Service.
Dec 02 10:21:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:21:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:21:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:21:12 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec 02 10:21:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:21:12 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec 02 10:21:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:21:12 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec 02 10:21:12 np0005541914.localdomain openstack_network_exporter[241816]: ERROR   10:21:12 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec 02 10:21:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69926 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1518916062' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/358616255' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/66728782' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1047167114' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/1091717628' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/896269902' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59515 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59521 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59527 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:12.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:21:12 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:12.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:21:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:21:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69941 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49746 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59536 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49752 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:12 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 02 10:21:12 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2706382169' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69953 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:13.137 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:21:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59551 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49758 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49767 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: from='client.69893 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: pgmap v830: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: from='client.69899 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: from='client.59494 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: from='client.69911 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: from='client.59500 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: from='client.69917 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: from='client.59509 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: from='client.69926 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3920558208' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2706382169' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "versions"} v 0)
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2332191097' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69968 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v831: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:13 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:13.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:21:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59566 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49782 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.69983 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 02 10:21:13 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/975355669' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 02 10:21:13 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59581 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49794 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.59515 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.59521 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.59527 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.69941 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.49746 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.59536 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.49752 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.69953 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.59551 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.49758 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.49767 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/1162678121' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2332191097' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/535179414' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/975355669' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3040349159' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3736550702' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2937001150' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49806 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 02 10:21:14 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49824 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 02 10:21:14 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2998395856' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='client.69968 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: pgmap v831: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='client.59566 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='client.49782 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='client.69983 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='client.59581 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='client.49794 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2937001150' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2055653369' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/687034646' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2998395856' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2891361070' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 02 10:21:15 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:15.470 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:21:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v832: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.70037 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59641 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 02 10:21:15 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/280261908' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='client.49806 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='client.49824 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2457193087' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: pgmap v832: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3194501843' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='client.70037 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='client.59641 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/280261908' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2048632244' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df"} v 0)
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/661400808' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 02 10:21:16 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:16.523 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:21:16 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49884 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 02 10:21:16 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3855858690' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 02 10:21:16 np0005541914.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.
Dec 02 10:21:17 np0005541914.localdomain podman[333323]: 2025-12-02 10:21:17.08565387 +0000 UTC m=+0.087501402 container health_status 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 02 10:21:17 np0005541914.localdomain podman[333323]: 2025-12-02 10:21:17.096831112 +0000 UTC m=+0.098678624 container exec_died 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fa2bb8efef6782c26ea7f1675eeb36dd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 02 10:21:17 np0005541914.localdomain systemd[1]: 2726462fda535be7ff7e12ba18b45d5bf06269dfa8a5fa61e331cc108c803e2e.service: Deactivated successfully.
Dec 02 10:21:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 02 10:21:17 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3647462322' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 02 10:21:17 np0005541914.localdomain kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 02 10:21:17 np0005541914.localdomain kernel: cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 02 10:21:17 np0005541914.localdomain kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 02 10:21:17 np0005541914.localdomain kernel: cfg80211: failed to load regulatory.db
Dec 02 10:21:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2635357428' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 02 10:21:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/661400808' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 02 10:21:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3159311886' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 02 10:21:17 np0005541914.localdomain ceph-mon[301710]: from='client.49884 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3855858690' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 02 10:21:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3815186711' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 02 10:21:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/4018752893' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 02 10:21:17 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3647462322' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 02 10:21:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v833: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:17.527 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:21:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:17.528 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:21:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:17.551 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:21:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:17.552 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:21:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:17.552 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:21:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:17.552 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Auditing locally available compute resources for np0005541914.localdomain (node: np0005541914.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 02 10:21:17 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:17.553 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:21:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 02 10:21:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.70079 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:17 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59686 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:17 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:21:17 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2394582050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.007 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.139 281049 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.171 281049 WARNING nova.virt.libvirt.driver [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.172 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Hypervisor/Node resource view: name=np0005541914.localdomain free_ram=11249MB free_disk=41.837013244628906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.172 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.173 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.230 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.231 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Final resource view: name=np0005541914.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.245 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1802639970' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: pgmap v833: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/2890963783' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/975817563' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: from='client.70079 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: from='client.59686 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2394582050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/2444120639' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/1802639970' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/3415027665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.683 281049 DEBUG oslo_concurrency.processutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.688 281049 DEBUG nova.compute.provider_tree [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed in ProviderTree for provider: 9ec09c1a-d246-41d7-94f4-b482f646a9f1 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 02 10:21:18 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/2335335146' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.952 281049 DEBUG nova.scheduler.client.report [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Inventory has not changed for provider 9ec09c1a-d246-41d7-94f4-b482f646a9f1 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.953 281049 DEBUG nova.compute.resource_tracker [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Compute_service record updated for np0005541914.localdomain:np0005541914.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 02 10:21:18 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:18.953 281049 DEBUG oslo_concurrency.lockutils [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 02 10:21:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.70118 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.49923 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.70124 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/3603341291' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 02 10:21:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/3415027665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 02 10:21:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.106:0/3195466827' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 02 10:21:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.108:0/2335335146' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 02 10:21:19 np0005541914.localdomain ceph-mon[301710]: from='client.? 172.18.0.107:0/910381125' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 02 10:21:19 np0005541914.localdomain ceph-mon[301710]: from='client.70118 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:19 np0005541914.localdomain ceph-mon[301710]: from='client.49923 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:19 np0005541914.localdomain ceph-mon[301710]: mon.np0005541914@2(peon) e17 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec 02 10:21:19 np0005541914.localdomain ceph-mon[301710]: log_channel(audit) log [DBG] : from='client.? 172.18.0.108:0/1536255720' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 02 10:21:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(cluster) log [DBG] : pgmap v834: 177 pgs: 177 active+clean; 227 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Dec 02 10:21:19 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.70136 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:19.953 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:21:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:19.954 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 02 10:21:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:19.954 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 02 10:21:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:19.984 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 02 10:21:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:19.984 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:21:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:19.985 281049 DEBUG oslo_service.periodic_task [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 02 10:21:19 np0005541914.localdomain nova_compute[281045]: 2025-12-02 10:21:19.985 281049 DEBUG nova.compute.manager [None req-7c4a3d09-2fa6-42ea-8bdd-a22f35f1cdc5 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 02 10:21:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.59716 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 02 10:21:20 np0005541914.localdomain ceph-mgr[287188]: log_channel(audit) log [DBG] : from='client.70142 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch